Unnamed: 0 (int64, 0–16k) | text_prompt (string, lengths 110–62.1k) | code_prompt (string, lengths 37–152k)
---|---|---|
12,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
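# For example (illustrative values only, not real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")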
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
12,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check
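The check compares your analytic gradients against centered finite differences. As a rough illustration of the idea (an assumption-laden sketch, not the course's eval_numerical_gradient helper):
def numeric_gradient_sketch(f, W, h=1e-5):
    # centered difference of a scalar-valued f with respect to each entry of W; h is an assumed step size
    grad = np.zeros_like(W)
    it = np.nditer(W, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = W[ix]
        W[ix] = old + h
        fxph = f(W)              # f(x + h)
        W[ix] = old - h
        fxmh = f(W)              # f(x - h)
        W[ix] = old              # restore the original value
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad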
Step6: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
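The heart of that training procedure is a loop that repeatedly samples a minibatch, computes the loss and gradients, and nudges every parameter downhill. A minimal sketch of one such update (an illustration of the idea, not the contents of the solution file):
def sgd_step(params, grads, learning_rate):
    # params and grads are dicts keyed by 'W1', 'b1', 'W2', 'b2'
    for name in params:
        params[name] -= learning_rate * grads[name]

# inside the training loop (sketch):
#   batch_idx = np.random.choice(num_train, batch_size)
#   loss, grads = net.loss(X[batch_idx], y=y[batch_idx], reg=reg)
#   sgd_step(net.params, grads, learning_rate)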
Step8: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
Step9: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
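Concretely, the schedule multiplies the learning rate by the decay factor once per epoch; a small sketch (the epoch count is an illustrative assumption, the other values are the notebook defaults):
learning_rate, learning_rate_decay = 1e-4, 0.95
num_epochs = 10                           # illustrative value
for epoch in range(num_epochs):
    # ... run one epoch of minibatch SGD updates at the current learning_rate ...
    learning_rate *= learning_rate_decay  # exponential decay after each epoch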
Step10: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
Step11: Tune your hyperparameters
What's wrong?. Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment
Step12: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
End of explanation
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularizaion loss.
End of explanation
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print param_grad_num.shape
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function."""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
pass
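# One possible sweep (an illustrative sketch, not a provided solution; the grid values
# below are assumptions):
# best_val = -1
# for hidden_size in [50, 100, 150]:
#     for lr in [1e-4, 5e-4, 1e-3]:
#         for reg in [0.1, 0.5, 1.0]:
#             net = TwoLayerNet(input_size, hidden_size, num_classes)
#             net.train(X_train, y_train, X_val, y_val, num_iters=1000,
#                       learning_rate=lr, learning_rate_decay=0.95, reg=reg)
#             val_acc = (net.predict(X_val) == y_val).mean()
#             if val_acc > best_val:
#                 best_val, best_net = val_acc, net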
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
Explanation: Tune your hyperparameters
What's wrong?. Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
End of explanation
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%.
End of explanation |
12,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: HW4
Step2: Description of the data set
The data set has been extracted from the Yelp Phoenix restaurants dataset. It is available here.
Step3: The data frame is a frame of reviews. We have joined in information about users and businesses into this frame so that you have only one frame to work with.
This information is for the reviews themselves
Step4: Of the total 4503 restaurants in the data, about 75% of the users have reviewed only fewer than 3 of them. Also, it seems a minority of the users contribute the most number of reviews. A similar trend is observed in the number of reviews per restaurant as well. About 75% of the restaurants have fewer than 37 reviews.
1.2 Compute the average rating of reviews in the data set and a histogram of all the ratings in the dataset.
Step6: The following function is used to re-compute review counts and averages whenever you subset a reviews data frame. We'll use it soon to construct a smaller, more computationally tractable data frame.
Step7: 1.3 Create a smaller data set in dataframe smalldf by looking for those businesses with more than 150 reviews and those users with more than 60 reviews. Include all the columns that were there in the parent dataframe. Since you have created a subset of the data set, use the method provided above to recalculate the averages. Print the number of unique users and items in this data set.
Note that while this cut makes sure we have prolific users, the cut on businesses restores sparsity by reducing the number of reviews per user.
Step8: How does this compare to the parent data set, in terms of size and sparsity? Once again, plot histograms of the review count grouped by user, and by the review count grouped by business, respectively, and describe the results
Step9: Data in smalldf looks less sparse than the fulldf.
1.4 Compute histograms of the average user rating in the smaller data set, and the average business rating in the smaller data set. Print the overall mean.
Step10: Common Support
Lets now make a histogram of the common user support (the number of common reviewers) of each pair of restaurants on the smaller set, and print the mean. Pay attention to the code, as you will use parts of it later. (This code takes a bit of time to run, so be patient).
The common support is an important concept, as for each pair of restaurants, its the number of people who reviewed both. It will be used to modify similarity between restaurants. If the common support is low, the similarity is less believable.
Step12: As you can see, even though we chose a subset of the dataframe in which every restaurant had 150 reviews and every user had atleast made 60, the common support of most pairs of restaurants is really low, indeed less than 10!.
Calculating Similarity
Users rate restaurants on a scale of 1-5. Even though this rating is integer valued, for the purposes of this assignment we shall treat it as a real number.
Even though each reviewer uses the same 5-star scale when rating restaurants, comparing two users by comparing their raw user ratings can be problematic. Consider a user whose average rating is 2. This is a curmudgeonly user. Consider another whose average rating is 4. This is a rather enthusiastic one. How should we compare a 3 rating by the curmudgeonly one to a 5 rating of the enthusiastic one?
It is for this purpose that we must subtract the average rating of the user from the actual rating of the restaurants in computing the similarity of two restaurants. This makes the above ratings by the two users comparable. We do this in the function pearson_sim defined below.
If there is no common support (n_common=0), we have no basis for making a similarity estimate, and so we set the similarity to 0. In the case that the individual restaurant rating variance is 0, such as in the case where there is only one common reviewer (n_common=1), we return the NaN that the scipy pearsonr returns. We will deal with it soon.
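Since the pearson_sim code cell itself is not reproduced in this excerpt, here is a minimal sketch of the idea; the column names 'stars' and 'user_avg' are assumptions about the reviews dataframe:
from scipy.stats import pearsonr

def pearson_sim_sketch(rest1_reviews, rest2_reviews, n_common):
    # center each rating by that reviewer's average before correlating
    if n_common == 0:
        return 0.                      # no common reviewers: no basis for a similarity
    diff1 = rest1_reviews['stars'] - rest1_reviews['user_avg']
    diff2 = rest2_reviews['stars'] - rest2_reviews['user_avg']
    rho, _ = pearsonr(diff1, diff2)    # NaN when a variance is 0, e.g. n_common == 1
    return rho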
Step14: The function get_restaurant_reviews defined below takes a restaurant business_id and a set of users, and returns the reviews of that restaurant by those users. You will use this function in calculating a similarity function, in 1.5.
Step16: 1.5 Write a function calculate_similarity that operates between two restaurants and calculates a similarity for them, taking a dataframe and a similarity function similarity_func. An example of the similarity_func is the pearson_sim we defined above. calculate_similarity operates as follows
Step18: Making a database of similarities
We now move to calculating a global database of pairwise restaurant similarities.
We provide you here with a function to make a database of the similarities for each pair of restaurants in the database. The class Database is initialized in its constructor by taking as arguments a dataframe of reviews. The method populate_by calculating iterates over every possible pair of business_id's in the dataframe and populates the database with similarities and common supports. It takes as arguments a function the similarity function similarity_func like pearson_sim (calculate_similarity then uses this to calculate the similarity). The get method on the database can be used to retrieve the similarity for two business ids.
(See Thu Oct 17th's class video for information about classes)
Step19: Lets run make_database and store the result in the global variable db. Lets print out an example entry. Running this function will take a bit of time.
Step20: K-Nearest restaurants (in similarity)
We are now going to find the k-nearest restaurants to a given restaurant based on the database of similarities that we calculated. But we have a problem.
Consider the two cases where there is just one common reviewer, and where there are 40. In the former case, we might get a artificially high similarity based on the tastes of just this user, and thus we must reduce its importance in the nearest-neighbor calculation. In the latter case, we would get a much more unbiased estimator of the similarity of the two restaurants.
To control the effect of small common supports, we can shrink our pearson co-efficients. We shall do this by using the "regularization" parameter reg
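One simple way to apply such a shrinkage, and the form assumed in the sketches below, is to scale the similarity by the factor n_common / (n_common + reg), so that pairs with little common support are pulled toward zero:
def shrunk_sim_sketch(sim, n_common, reg=3.):
    # small n_common means heavy shrinkage toward 0; large n_common leaves sim nearly unchanged
    return (sim * n_common) / (n_common + reg)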
Step22: 1.6 Now we can move to writing a knearest function, which finds the k nearest neighbors of a given restaurant based on the shrunk similarities we calculate. Note that as defined here, the nearest neighbors are global over the entire set of restaurants, as opposed to being restricted to the restaurants a user has reviewed(we shall do that in the next problem). Thus, this is an expensive function!
Write a knearest that returns a k-length sorted list of 3-tuples each corresponding to a restaurant. The tuple structure is (business_id, shrunken similarity score, common support) where the similarity score and common support are with respect to the restaurant whose neighbors we are finding, and the business_id is the id of the "nearby" restaurant found. The nearby restaurants are found from a supplied numpy array of restaurants set_of_restaurants. The spec for the function is given below. HINT
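As a rough sketch of the shape of such a function (assuming the database object exposes get(biz1, biz2) returning (similarity, n_common) as described above; this is not the graded solution):
import operator

def knearest_sketch(restaurant_id, set_of_restaurants, dbase, k=7, reg=3.):
    similars = []
    for other_id in set_of_restaurants:
        if other_id == restaurant_id:
            continue
        sim, nc = dbase.get(restaurant_id, other_id)
        shrunk = (sim * nc) / (nc + reg)          # shrink by common support, as above
        similars.append((other_id, shrunk, nc))
    similars.sort(key=operator.itemgetter(1), reverse=True)
    return similars[:k]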
Step23: Ok it's time to recommend!
Lets choose the two very different businesses in the dataframe
Step24: We provide functions to look up a business name given a business id, and a username given a user id.
Step25: Get top matches
Its now time to answer the question
Step26: We can see that these two restaurants are in somewhat different orbits
Step28: Get top recommendations for user.
1.7 Its your job now to write a function get_top_recos_for_user which takes as arguments a userid, the n top choices for the user, the dataframe, k, and a regularizer, and returns the top recommendations obtained from combining the restaurants that are neighbors of each of the n choices, in the way described in the previous paragraph. This returned list is a list of tuples (restaurant_id, business_avg) sorted by business_avg where business_avg is the average rating of the restaurant over the dataframe.
Step29: Lets print the top recommendations for testuserid, with a regularization of 3.
Step31: Problem 2
Step33: 2.2 Now write a function that returns the predicted rating for a user and an item using the formula at the beginning of this problem. Include code to deal with the possibility that the sum of scores that goes in the denominator is 0
Step34: For the top-recommendations in the variable toprecos from the previous section, we compute the predicted rating and compare it with the average rating over all users available inside the tuples that make up toprecos. We use a k of 7 and regularization 3. For comparision we also print this users' average rating. Do you notice anything interesting about how the order has changed from when we did this with the global similarities? (for you to think, not to answer)
Step35: Testing the ratings
Let us compare the predicted ratings with a user's ratings. Note that we are doing this on the same set that we constructed the predictions with, so this is not a validation of the procedure, but simply a check of the procedure's fit. We first write a helper function to return the user score for a restaurant, and the restaurant's average score over all users.
Step36: For the user testuserid, we loop over the variable bizs (which is a set of restaurants the user has rated) and print the predicted rating, and the actual rating and restaurant average rating obtained using the function above. We again use k=7 and a regularization of 3.
Step38: 2.3 Explain in words why the predicted ratings are lower than the actual ratings. How do the user average rating and restaurant average rating affect this? How does sparsity affect the predicted ratings?
your answer here
Recall that bizs (defined just above question 1.7) has restaurants sorted by Vern's actual star ratings. This means that in this sample, we are looking at Vern's top rated restaurants.
The predicted ratings are lower because these are Vern's top 5 choices, which represent the largest positive deviations away from Vern's mean rating of 3.57. Because we are looking at the upper tail of Vern's rating distribution, but pooling information together from the K nearest neighbors among Vern's rated restaurants to construct the predicted rating, the predicted ratings should fall closer to Vern's user mean than the true ones do. Taking into account the average restaurant rating helps a little bit here because we can adjust the predicted rating to reflect an overall very good restaurant, but it does not counteract the effect of looking at the upper tail of Vern's ratings.
Note that if we were to take Vern's bottom 5 restaurants, we would see the opposite effect.
In general, the larger K is (assuming that the similarities within this neighborhood are positive), the closer the predicted rating will be to Vern's user average (this is the bias limit in the bias-variance tradeoff). Similarly, the smaller K is, the more likely we are to have user ratings that are close to the observed rating (the variance limit). The sparsity of the data affects how quickly we move from the variance limit to the bias limit as we increase K. If there were a lot of very similar restaurants in the dataset that Vern had ranked very highly, even with K relatively large, it would be possible to see a predicted rating much closer to the extremely positive ratings we see here in Vern's top 5 (see the results in question 4.4). As these data are now, however, even the most similar 7 restaurants to these that Vern rated so highly lie closer to Vern's mean.
Error Analysis
This next function takes a set of actual ratings, and a set of predicted ratings, and plots the latter against the former. We can use a graph of this kind to see how well or badly we do in our predictions. Since the nearest neighbor models can have alternating positive and negative similarities (the sum of similarity weights in the denominator can get large), the ratings can get very large. Thus we restrict ourselves to be between -10 and 15 in our ratings and calculate the fraction within these bounds. We also plot the line with unit slope, line segments joining the means, and a filled in area representing one standard deviation from the mean.
The first argument to compare_results is a numpy array of the actual star ratings obtained from the dataframe, while the second argument is the numpy array of the predicted ones. (Feel free to improve this function for your display)
Step39: 2.4 For each review in the data set, obtain a prediction from the entire dataframe smalldf. Use the function compare_results above to plot the predicted ratings against the observed ones. Make 4 such graphs, at k=3 and k=10, and for reg=3. and reg=15.
Note that this analysis is not strictly a model check because we are testing on the training set. However, since the user averages would change each time a cross-validation split was done on the set, we would incur the prohibitive expense of redoing the database each time. This would be better done on a cluster, using map-reduce or other techniques. While we explore map-reduce later in this homework, we shall not do any cross-validation.
Explain the results you get in the graphs in words.
Step42: your answer here
If you are a bit confused by the look of these graphs, you should be!
For k=3, the predicted values are quite well-behaved, with the exception of several predictions that have extremely large magnitudes. It appears that the predicted values are pulled into the mean star rating, which sits somewhere around 3.8, so ratings on the low end are overestimated, and similarly ratings on the high end underestimated. The regularization does not appear to have a strong effect when k = 3.
For k=10, the predicted values are much less stable, with many more extreme predictions. The means appear to track better with the true means. The regularization has a much more extreme, although indirect, effect on the appearance of the plot. Since regularization has stronger effects on similarity scores between restaurants that have small common support, we can see that increasing k makes the predictions more sensitive to the regularization because in a small dataset, the common support between a restaurant and it's 10-nearest one will be quite small.
Note that this example does not seem to follow the standard bias-variance tradeoff, where we would expect small k to give a unbiased estimates that capture the extremes, while we would expect large k to give biased estimates that pull extreme values toward the mean. A large reason for this failure for this example to capture this behavior is that we have defined similarity scores that can be positive or negative, and the bias-variance logic is based on the more standard setting where we average together values that have strictly positive weights. When you have negative weights, it's possible that the sum of sij's in a neighborhood can get close to zero, making our estimator Ŷ um explode in the positive or negative direction since this would entail dividing by (nearly) zero. Thus for those restaurants where the denominator goes to 0 (more likely to happen at larger k as you have more chances of it there being more weights to add), the ratings are unstable, even numerically!
This problem is less pronounced in large datasets or with small k because the k-nearest restaurants are likely to have positive similarity with the current one. However, with small datasets, we can find that even with k relatively small (in this case, around 10), there are negative similarities in the k-neighborhood that make the estimator unstable. This sort of instability would be much less pronounced in the large dataset.
If we were to rescale the similarities to be positive, say between 0 and 1, the behavior of the estimator with respect to k and reg would be quite different. (SEE BELOW!)
2.5 Outline a process, in words, for choosing the nearest neighbor parameter k. For this question fix the regularization parameter reg at 3.
your answer here
We could use $F$-fold cross-validation (usually called $k$-fold cross-validation, but we've already used $k$ here to mean something else!). Specifically, we could randomly partition the data into $F$ equally sized folds (pieces), making sure that each fold includes at least one rating for each restaurant and one rating by each user (it would probably best to make sure that if $K$ were the maximum $k$ you would be considering, that each user and each restaurant appear at least $K$ times in each fold). For each value of $k$, and for each fold, we could repeat the procedure above for predicting user ratings by computing similarities using $F-1$ of the folds to compute similarities and computing the prediction error in the held out fold. We could then choose the $k$ with the smallest average prediction error across the folds, and recompute the recommender on the whole dataset using the chosen value of $k$.
If we wanted to both choose a good value for $k$ and check how well we could expect the result to generalize, we could divide the dataset into $F+1$ folds, and keep the last fold out as a verification set. We could perform the cross-validation above on $F$ folds to select $k$, then upon selecting $k$ use all $F$ folds to create a recommender, and see how well this recommender predicted ratings in the final validation set.
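A compact sketch of that selection loop (the fold construction and the scoring function are passed in as assumptions rather than implemented here):
def choose_k_by_cv(folds, candidate_ks, build_db, score_fold, reg=3.):
    # folds: list of review dataframes; build_db(train_df) -> similarity database;
    # score_fold(db, train_df, test_df, k, reg) -> prediction error on the held-out fold
    avg_errors = {}
    for k in candidate_ks:
        per_fold = []
        for i in range(len(folds)):
            train_df = pd.concat([f for j, f in enumerate(folds) if j != i])
            db_i = build_db(train_df)
            per_fold.append(score_fold(db_i, train_df, folds[i], k, reg))
        avg_errors[k] = np.mean(per_fold)
    return min(avg_errors, key=avg_errors.get)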
Q3 Bayesian Chocolates
Step44: Here is the Gibbs sampler skeleton that your functions fit into. Look over the structure to see how for each draw from the posterior, the sampler iterates through $\mu$, $\sigma$, $\gamma_m$ for each item, and $\theta_u$ for each user.
Step45: Posterior Summaries
Once you have posterior draws from the sampler, the most natural thing to do is to compute the posterior mean of each quantity you are intersted in. To do this, we simply need to take the average value of each quantity across the samples drawn from the sampler. Before taking the average, however, we will want to ignore the first 20-30% of samples because these correspond the burnin period, the time during which the sampler is still looking for the main meat of the distribution.
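In code, that amounts to something like the following sketch (the 25% burn-in fraction is an assumption; the question below drops the first 200 of 1000 draws):
def posterior_mean_sketch(samples, burnin_frac=0.25):
    # samples: array of shape (n_draws, ...) of posterior draws for one quantity
    draws = np.asarray(samples)
    burnin = int(burnin_frac * draws.shape[0])
    return draws[burnin:].mean(axis=0)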
Ok it's time to recommend!
3.3 Now that you have the Gibbs sampler, draw 1000 samples from the posterior distribution using a two-dimensional latent factor and prior precisions Lambda_theta_diag and Lambda_gamma_diag both equal to 0.1.
Compute the posterior mean of the fitted values for each $Y_{um}$, eliminating the first 200 samples. Call these the prediction. These constitute our recommendations. True to the bayesian paradigm, we dont just have mean predictions, but entire distributions. But currently we are only interested in the means.
Step46: Plot the predictions against the observed data.You can use the compare_results function defined in the previous section. How do the fitted values compare to those from the KNN procedure?
Step47: your answer here
The results from the latent factor model appear to be better-behaved than those from the KNN procedure (with negative similarities) since there are no extreme values. In terms of the non-extreme values from the KNN procedure, the latent factor model appears to make similar predictions to the KNN procedure when k = 3., Specifically, the average prediction line for each class is too compressed toward the total data mean of around 3.8, meaning that we are in the bias limit where we are pooling too much information between ratings. Thus, we are overpredicting low ratings and underpredicting high ratings.
(If we compare to the KNN procedure with strictly positive weights, as in the homework addendum, we see again that the recommenders make comparable predictions, although the latent factor model again appears to fit slightly better than at both the bias or variance limits of the KNN procedure).
There is also a bias-variance tradeoff here. In this case, it appears that we have proposed a model that is too simple (thus close to the bias limit) because of how the ratings are pulled in toward the data mean. In this case, proposing more latent factors increases the flexibility of the model, and thus moves us toward the variance limit. Thus, the plot suggests that we may want to reduce the bias at the cost of increasing variance, for example by considering a latent factor model with more factors (say, L=15) to obtain a better fit (see ADDENDUM below).
Step48: Q4 Scaling Up
All our recommenders suffer from problems having to do with the fact that we subsetted an already sparse user-item matrix. The more items we have, the more items we may find in the vicinity of a given item, and thus we are likely to give a more robust average rating to the given item.
In this problem we shall use Amazon Elastic Map-Reduce to tackle the entire user-restaurant matrix. We shall do this in two parts
Step49: Running mrjob locally
mrjob scripts cannot be run from the ipython notebook, as they fork themselves on execution. Thus you must write the code for mrjob in a separate file which you must submit along with this homework, in the same folder as the python notebook file.
If you have not done so already (you were supposed to do this as part of HW 0), you will first need to install mrjob. The appropriate equivalent of the following incantation should do the job
Step50: Explanation for those funny yield keywords
The functions above “yield” values, and do not “return” them. They are generators. Here is an example
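For instance, a tiny stand-alone generator (illustrative, not part of the homework) behaves like this:
def count_up_to(n):
    i = 0
    while i < n:
        yield i          # hand back one value, then pause here until the next request
        i += 1

gen = count_up_to(3)
print list(gen)          # [0, 1, 2] -- values are produced lazily, one at a time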
Step51: You can read more here. Also see Thu Oct 17th's class video for information about classes and generators.
Include computesim.py in your submission in the same folder as the notebook. Uncommenting and running the following cell should output your code in here.
Step52: Checking the results
Let us load the data from the file
Step54: We will Implement a function make_database_from_pairs which takes a dataframe of restaurants smalldf and the output parsed in the previous command to create the database like before. By the nature of the map-reduce algorithms these only contain those restaurant pairs with common support. The Database constructor initializes the remaining similarities to 0.
The function will take the dataframe and bizpairs obtained by parsing the EMR output file which have the key of business pairs and value the pair of pearson correlation and n_common. It will return an instance of the Database class.
This function will take a long time to run on large data sets.
Step55: We will store the output in variable db_mrjob_local.
Step56: We print a pair to see that our answers are identical.
Step57: 4.2 Lets test that our results are overall the same as before
Step58: Running on Amazon Elastic Map Reduce(EMR)
At this point, we shall shift to running on Amazon EMR.
Follow the instruction below (Successfully ran by Manoj)
Make sure the AWS account is opened and the Key file (CS109.pem in this case) is generated.
Run chmod og-rwx /Users/apple/Dropbox/DataScience/ML/Admin/CS109.pem so that ssh will be happy
Create an mr config file 'mrjob.config' in /etc (Go to folder /etc) with the below contents
Step59: This function will take a very long time to run, on the order of 5 minutes or more, depending on your computer
Step60: 4.4 For testuserid, once again, print out the ratings using the bizs list as before. How have they changed with respect to Question 2? Why might this be?
Step61: your answer here
Copy the smalldf ratings below for easy comparison | Python Code:
%matplotlib inline
from collections import defaultdict
import json
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import rcParams
import matplotlib.cm as cm
import matplotlib as mpl
#colorbrewer2 Dark2 qualitative color table
dark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),
(0.8509803921568627, 0.37254901960784315, 0.00784313725490196),
(0.4588235294117647, 0.4392156862745098, 0.7019607843137254),
(0.9058823529411765, 0.1607843137254902, 0.5411764705882353),
(0.4, 0.6509803921568628, 0.11764705882352941),
(0.9019607843137255, 0.6705882352941176, 0.00784313725490196),
(0.6509803921568628, 0.4627450980392157, 0.11372549019607843)]
rcParams['figure.figsize'] = (10, 6)
rcParams['figure.dpi'] = 150
rcParams['axes.color_cycle'] = dark2_colors
rcParams['lines.linewidth'] = 2
rcParams['axes.facecolor'] = 'white'
rcParams['font.size'] = 14
rcParams['patch.edgecolor'] = 'white'
rcParams['patch.facecolor'] = dark2_colors[0]
rcParams['font.family'] = 'StixGeneral'
def remove_border(axes=None, top=False, right=False, left=True, bottom=True):
"""Minimize chartjunk by stripping out unnecessary plot borders and axis ticks
The top/right/left/bottom keywords toggle whether the corresponding plot border is drawn"""
ax = axes or plt.gca()
ax.spines['top'].set_visible(top)
ax.spines['right'].set_visible(right)
ax.spines['left'].set_visible(left)
ax.spines['bottom'].set_visible(bottom)
#turn off all ticks
ax.yaxis.set_ticks_position('none')
ax.xaxis.set_ticks_position('none')
#now re-enable visibles
if top:
ax.xaxis.tick_top()
if bottom:
ax.xaxis.tick_bottom()
if left:
ax.yaxis.tick_left()
if right:
ax.yaxis.tick_right()
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
#utility functions
def histogram_style():
remove_border(left=False)
plt.grid(False)
plt.grid(axis='y', color='w', linestyle='-', lw=1)
def histogram_labels(xlabel, ylabel, title, loc):
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.title(title)
plt.legend(frameon=False, loc=loc)
def histogram_settings(xlabel, ylabel, title, loc = 'upper left'):
histogram_style()
histogram_labels(xlabel, ylabel, title, loc)
Explanation: HW4: Do we really need Chocolate Recommendations?
<img src="http://1.bp.blogspot.com/-8dGYKeMKNaU/TvutmCenc-I/AAAAAAAABEo/b2Czf4RlAzw/s1600/Death%2BBy%2BChocolate.JPG" width="400" height="300"/>
Before You Start
This is a long homework. Please start early. It uses a lot of different (and sometimes complex) concepts, so you might find yourself reading a lot. So, please, give yourself a lot of time.
Also, please see this link on getting an Amazon Web Services account soon, so that you dont delay its creation. This class gives you $100 in credits which you will use for this homework, possibly your project, and any other projects you might like.
Finally, please go to the labs. The one on 18th October (Today) will cover Gibbs Sampling and Bayesian Normal distributions. The one on the 25th will cover Map-Reduce. Both will help on the homework.
Collaborative Filtering systems
In this homework, you will create a recommendation system for restaurants using collaborative filtering (CF). The general structure of a recommendation system is that there are users and there are items. Users express explicit or implicit preferences towards certain items. CF thus relies on users' past behavior.
There are two primary approaches to CF: neighboorhood and latent factor model. The former is concerned with computing the relationships between items or between users. In the latter approach you have a model of hidden factors through which users and items are transformed to the same space. For example, if you are rating movies we may transform items into genre factors, and users into their preference for a particular genre.
Factor models generally lead to more accurate recommenders. One of the reasons for this is the sparsity of the item-user matrix. Most users tend to rate barely one or two items. Latent factor models are more expressive, and fit fewer parameters. However, neighborhood models are more prevalent, as they have an intuitive aspect that appeals to users (if you liked this, you will like that) and work well online (a new preference can be incorporated very quickly).
Most recommenders today combine neighboorhood CF with model based CF, and SVD based matrix factorization approaches.
To see the example of a simple beer recommender, go here. This homework is inspired by the one there but we go after food instead, and go deeper into the problem of making recommendations.
User and Item based approaches
Original approaches to neighborhood-based CF used user-user models. By this we mean that rating estimates are made from recorded ratings of like-minded users. However, since most users tend to rate very few items, this is usually a losing proposition for explicit-rating based recommenders. Thus, most neighborhood-based systems such as Amazon these days rely on item-item approaches. In these methods, a rating is estimated by other ratings made by the user on "similar" or "nearby" items: we have a K-Nearest-Neighbors algorithm, in effect.
Outline of this Homework
The outline of this homework is as follows:
Create a database of item-item similarities. Use this to implement a neighborhood-based CF recommender that can answer simple questions like "give me more restaurants like this one". This part of the homework assumes that the similaties calculated make good "global recommendations".
In the second part, we go one step further and attempt to predict the rating that a user will give an item they have not seen before. This requires that we find the restaurants that this user would rate as similar (not just those which are globally similar).
In the third part, we implement a factor-based CF recommender using a Bayesian model. While quite a bit more complex, this allows us to pool information both about similar users and about similar restaurants.
We will scale up our system by creating a recommender on the lines of Q1 and Q2 that works on the entire data set. We will use the map-reduce paradigm to split the computation over multiple machines.
You will start simply, by working on a subset of the restaurant data before generalizing to the entire data set in Problem 4. The complete data set has 150,000 reviews, but we shall start with just about 7000. You will create this smaller set by taking all the users who had rated more than 60 restaurants, and all the businesses which had greater than 150 reviews from the larger data set. This is not a random set: indeed we use it as it a computationally tractable set that is a bit less sparse than the entire data set.
End of explanation
fulldf=pd.read_csv("bigdf.csv")
fulldf.head(2)
Explanation: Description of the data set
The data set has been extracted from the Yelp Phoenix restaurants dataset. It is available here.
End of explanation
fulldf.groupby(fulldf.user_id).review_id.count().describe()
fulldf.user_id.unique().size
#your code here
plt.hist(fulldf.groupby(fulldf.user_id).review_id.count(), log = True, bins= 20)
plt.xlabel("Reviews per user")
plt.ylabel("review count")
histogram_style()
fulldf.groupby(fulldf.business_id).review_id.count().describe()
#your code here
plt.hist(fulldf.groupby(fulldf.business_id).review_id.count(), log =True, bins= 20)
plt.xlabel("Reviews per restaurant")
plt.ylabel("review count")
histogram_style()
#your code here
print "No of users: %d" %fulldf.groupby(fulldf.user_id).review_id.count().size
print "No of businesses: %d" %fulldf.groupby(fulldf.business_id).review_id.count().size
Explanation: The data frame is a frame of reviews. We have joined in information about users and businesses into this frame so that you have only one frame to work with.
Here is a description of the data fields in this dataframe: there are fields for the reviews themselves (such as review_id and stars), fields on the business side (such as business_id, biz_name, business_avg, and business_review_count), and finally a set of fields for users (such as user_id, user_name, user_avg, and user_review_count).
In this data set, every user has only one review for each restaurant. Convince yourself of this. (This answer does not need to be submitted).
Our Recommender
To motivate our recommendation system, consider the following example. Let's pretend we are in Boston for a second. Let's say the average rating of restaurants here by all the users is 3.5. Sandrine's at Harvard Square is better than an average restaurant, so it tends to be rated 0.5 stars above the average (over all the users). However, you are a curmudgeon, who tends to rate 0.2 stars below the average. Then a baseline estimate for the recommendation for Sandrine's, for you, is 3.5+0.5-0.2=3.8.
These baseline estimates thus adjust the data by accounting for the systematic tendencies for some users who give higher ratings than others, and for some restaurants to recieve higher ratings than others. We can write the baseline estimate $\hat Y_{um}^{baseline}$ for an unknown rating $Y_{um}$ for user $u$ and restaurant or business $m$ as:
$$ \hat Y_{um}^{baseline} = \hat \mu + \hat \theta_{u0} + \hat \gamma_{m0} $$
where the unknown parameters $\theta_{u0}$ and $\gamma_{m0}$ indicate the deviations, or biases, of user $u$ and item $m$, respectively, from some intercept parameter $\mu$. (The reason for the strange notation with 0s will become clear in Problem 3)
Notice that the $\theta_{u0}$ and $\gamma_{m0}$ are parameters which need to be fit. The simplest thing to start with, and something we will do for Problems 1 and 2 (but not 3), is to replace them by their "mean" estimates from the data. Thus:
$$ \hat Y^{baseline}_{um} = \bar Y + (\bar Y_u - \bar Y) + (\bar Y_m - \bar Y)$$
where $\bar Y_u$ = user_avg, the average of all a user $u$'s ratings and $\bar Y_m$ = business_avg, the average of all ratings for a restaurant $m$. $\bar Y$ is the average rating over all reviews.
The final two terms correspond to the user-specific and item-specific bias in ratings, that is, how their ratings tend to systematically diverge from the global average. This is the simplest possible way to predict a rating, based only on information about this user and this restaurant.
Can we do a better job of predicting the rating $Y_{um}$ user $u$ would give to restaurant $m$? According to the central dogma of CF, we ought to be able to use the responses of similar users regarding similar restaurants to get a better prediction.
We can make an estimate of $Y_{um}$ as:
$$ \hat{Y_{um}} = \hat Y_{um}^{baseline}\, + \,\frac{\sum\limits_{j \in S^{k}(m)} s_{mj} ( Y_{uj} - \hat Y_{uj}^{baseline} )}{\sum\limits_{j \in S^{k}(m)} s_{mj} } $$
where $S^{k}(m)$ is the set of $k$ neighbor items of item $m$ based on some pooling criterion, for example, those items which have been rated by user $u$.
In the next two problems, we will focus on using similar restaurants, or the item neighborhood.
To do this, we compute a similarity measure $s_{mj}$ between the $m$th and $j$th items. This similarity might be measured via cosine similarity, the Pearson coefficient, or other distance-based measures. Here we shall use the Pearson coefficient. This measures the tendency of users to rate items similarly. Since most ratings are unknown, it is computed on the "common user support" (n_common), which is the set of common raters of both items.
In the first problem we shall set $S$ to the global neighborhood of the item, and in the second we shall set it to those items which have been rated by user $u$.
Q1. Writing a simple "global" recommender
Now we have a way to pool information between similar restaurants to try to predict a user's recommendation. But how do we choose the neighborhood to pool over? We begin with the simplest choice. We calculate the similarity between items using their entire common user support, and rank the nearest neighbors of an item by this similarity. We call this a "global" recommender because it assumes that every user perceives the similarity between restaurants in the same way. Later on, we will implement a more specific recommender that pools information based on which items seem the most similar to this user.
The global recommender does have the advantage of dealing with the possible sparsity of the user's rated items, but also the disadvantage of giving one answer for all users, without taking the user's preferences into account. This is a classic case of the bias-variance tradeoff. (What does it mean?)
Lets implement this simpler global recommender first.
Exploratory Data Analysis
1.1 Visualize the sparsity of the full data set by plotting two histograms of the review count grouped by the user_id and business_id respectively. Are there more users or more businesses?
End of explanation
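To make the baseline formula above concrete, here is a minimal sketch (not part of the assignment; baseline_estimate is a hypothetical helper) that computes $\hat Y^{baseline}_{um}$ for a single (user, restaurant) pair from the precomputed user_avg and business_avg columns described above, and compares it to the observed rating:
def baseline_estimate(df, user_id, business_id):
    # Y_bar + (Y_bar_u - Y_bar) + (Y_bar_m - Y_bar) simplifies to Y_bar_u + Y_bar_m - Y_bar
    y_bar = df.stars.mean()
    y_bar_u = df[df.user_id == user_id].user_avg.values[0]
    y_bar_m = df[df.business_id == business_id].business_avg.values[0]
    return y_bar_u + y_bar_m - y_bar

# example usage: take an arbitrary observed review and compare its baseline to the actual stars
uid, bid, actual = fulldf[['user_id', 'business_id', 'stars']].values[0]
print "baseline:", baseline_estimate(fulldf, uid, bid), "actual:", actual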
#your code here
plt.hist(fulldf.stars, bins =5 , label ='Ratings')
plt.axvline(fulldf.stars.mean(), 0, 1, color='r', label='Average Rating: %.2f'%fulldf.stars.mean())
plt.legend(frameon=False, loc='upper left')
histogram_style()
Explanation: Of the total 4503 restaurants in the data, about 75% of the users have reviewed fewer than 3 of them. Also, a minority of the users contribute most of the reviews. A similar trend is observed in the number of reviews per restaurant as well. About 75% of the restaurants have fewer than 37 reviews.
1.2 Compute the average rating of reviews in the data set and a histogram of all the ratings in the dataset.
End of explanation
def recompute_frame(ldf):
    """
    Takes a dataframe ldf, makes a copy of it, and returns the copy
    with all averages and review counts recomputed.
    This is used when a frame is subsetted.
    """
ldfu=ldf.groupby('user_id')
ldfb=ldf.groupby('business_id')
user_avg=ldfu.stars.mean()
user_review_count=ldfu.review_id.count()
business_avg=ldfb.stars.mean()
business_review_count=ldfb.review_id.count()
nldf=ldf.copy()
nldf.set_index(['business_id'], inplace=True)
nldf['business_avg']=business_avg
nldf['business_review_count']=business_review_count
nldf.reset_index(inplace=True)
nldf.set_index(['user_id'], inplace=True)
nldf['user_avg']=user_avg
nldf['user_review_count']=user_review_count
nldf.reset_index(inplace=True)
return nldf
Explanation: The following function is used to re-compute review counts and averages whenever you subset a reviews data frame. We'll use it soon to construct a smaller, more computationally tractable data frame.
End of explanation
#your code here
smalldf = fulldf[(fulldf.business_review_count > 150) & (fulldf.user_review_count > 60)]
print "smalldf as a percentage of fulldf: %0.2f" % (float(smalldf.shape[0])/float(fulldf.shape[0])*100)
smalldf = recompute_frame(smalldf)  # recompute averages and review counts within the subset
print "No of unique users: %d" %smalldf.user_id.unique().size
print "No of unique restaurants: %d" %smalldf.business_id.unique().size
Explanation: 1.3 Create a smaller data set in dataframe smalldf by looking for those businesses with more than 150 reviews and those users with more than 60 reviews. Include all the columns that were there in the parent dataframe. Since you have created a subset of the data set, use the method provided above to recalculate the averages. Print the number of unique users and items in this data set.
Note that while this cut makes sure we have prolific users, the cut on businesses restores sparsity by reducing the number of reviews per user.
End of explanation
# histogram of the review count grouped by user
plt.figure()
count_by_user = smalldf.groupby(smalldf.user_id).review_id.count()
count_by_business = smalldf.groupby(smalldf.business_id).review_id.count()
plt.hist(count_by_user, bins =10)
plt.axvline(count_by_user.mean(), 0, 1, color='r', label='Avg Reviews per user: %.2f'%count_by_user.mean())
histogram_settings("Reviews per user","review count", "Reviews count per user", "upper right")
# histogram of the review count grouped by restaurant
plt.figure()
plt.hist(count_by_business, bins =10)
plt.axvline(count_by_business.mean(), 0, 1, color='r', label='Avg Reviews per restaurant: %.2f'%count_by_business.mean())
histogram_settings("Reviews per restaurant","review count", "Reviews count per restaurant", "upper right")
Explanation: How does this compare to the parent data set, in terms of size and sparsity? Once again, plot histograms of the review count grouped by user, and by the review count grouped by business, respectively, and describe the results
End of explanation
#histogram of the average user rating
plt.figure()
avg_user_rating = smalldf.groupby('user_id').stars.mean()
plt.hist(avg_user_rating, bins =5 , label ='Avg User Ratings')
plt.axvline(avg_user_rating.mean(), 0, 1, color='r', label='Avg User Rating: %.2f'%avg_user_rating.mean())
histogram_settings("Avg User Rating", "Count", "Avg User Rating Count")
plt.figure()
avg_bus_rating = smalldf.groupby('business_id').stars.mean()
plt.hist(avg_bus_rating, bins =5 , label ='Avg Restaurant Ratings')
plt.axvline(avg_bus_rating.mean(), 0, 1, color='r', label='Avg Restaurant Rating: %.2f'%avg_bus_rating.mean())
histogram_settings("Avg Restaurant Rating", "Count", "Avg Restaurant Rating Count")
print "Overall average rating in smalldf: %0.2f"%smalldf.stars.mean()
Explanation: The data in smalldf is noticeably less sparse than in fulldf.
1.4 Compute histograms of the average user rating in the smaller data set, and the average business rating in the smaller data set. Print the overall mean.
End of explanation
restaurants=smalldf.business_id.unique() #get all the unique restaurant ids
supports=[]
for i,rest1 in enumerate(restaurants): #loop over the first restaurant (rest1) in each pair
    for j,rest2 in enumerate(restaurants): #loop over the second restaurant (rest2) in the pair
if i < j: #skip pairing same restaurants and forming duplicate pairs
rest1_reviewers = smalldf[smalldf.business_id==rest1].user_id.unique() #find all unique users who reviewed restaurant 1
rest2_reviewers = smalldf[smalldf.business_id==rest2].user_id.unique() #find all unique users who reviewed restaurant 2
common_reviewers = set(rest1_reviewers).intersection(rest2_reviewers) # find common reviewers by taking intersection
supports.append(len(common_reviewers)) # add the no of common reviewers to list
print "Mean support is:",np.mean(supports)
plt.hist(supports) #plot hist of list
histogram_settings("Common reviewers per each pair of restaurants", "Count", "Common User Support")
Explanation: Common Support
Let's now make a histogram of the common user support (the number of common reviewers) of each pair of restaurants on the smaller set, and print the mean. Pay attention to the code, as you will use parts of it later. (This code takes a bit of time to run, so be patient.)
The common support is an important concept: for each pair of restaurants, it is the number of people who reviewed both. It will be used to modify the similarity between restaurants. If the common support is low, the similarity is less believable.
End of explanation
from scipy.stats.stats import pearsonr
def pearson_sim(rest1_reviews, rest2_reviews, n_common):
    """
    Given a subframe of restaurant 1 reviews and a subframe of restaurant 2 reviews,
    where the reviewers are those who have reviewed both restaurants, return
    the Pearson correlation coefficient between the user-average-subtracted ratings.
    The case of zero common reviewers is handled separately. It is
    OK to return a NaN if any of the individual variances are 0.
    """
if n_common==0:
rho=0.
else:
diff1=rest1_reviews['stars']-rest1_reviews['user_avg']
diff2=rest2_reviews['stars']-rest2_reviews['user_avg']
rho=pearsonr(diff1, diff2)[0]
if np.isnan(rho):
rho=0.
return rho
Explanation: As you can see, even though we chose a subset of the dataframe in which every restaurant had 150 reviews and every user had made at least 60, the common support of most pairs of restaurants is really low, indeed less than 10!
Calculating Similarity
Users rate restaurants on a scale of 1-5. Even though this rating is integer valued, for the purposes of this assignment we shall treat it as a real number.
Even though each reviewer uses the same 5-star scale when rating restaurants, comparing two users by comparing their raw user ratings can be problematic. Consider a user whose average rating is 2. This is a curmudgeonly user. Consider another whose average rating is 4. This is a rather enthusiastic one. How should we compare a 3 rating by the curmudgeonly one to a 5 rating of the enthusiastic one?
It is for this purpose that we must subtract the average rating of the user from the actual rating of the restaurants in computing the similarity of two restaurants. This makes the above ratings by the two users comparable. We do this in the function pearson_sim defined below.
If there is no common support (n_common=0), we have no basis for making a similarity estimate, and so we set the similarity to 0. In the case that an individual restaurant's rating variance is 0, such as when there is only one common reviewer (n_common=1), we return the NaN that scipy's pearsonr returns. We will deal with it soon.
End of explanation
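As a rough numeric illustration of the centering that pearson_sim performs (toy numbers, not rows from the data set): a 3-star rating from a curmudgeon whose overall average is 2 and a 5-star rating from an enthusiast whose overall average is 4 express the same enthusiasm once each user's average is subtracted.
toy = pd.DataFrame(dict(user=['curmudgeon', 'enthusiast'], stars=[3., 5.], user_avg=[2., 4.]))
print toy.stars - toy.user_avg   # both deviations are +1.0, so the two ratings become comparable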
def get_restaurant_reviews(restaurant_id, df, set_of_users):
    """
    Given a restaurant id and a set of reviewers, return the sub-dataframe of their
    reviews.
    """
mask = (df.user_id.isin(set_of_users)) & (df.business_id==restaurant_id)
reviews = df[mask]
reviews = reviews[reviews.user_id.duplicated()==False]
return reviews
Explanation: The function get_restaurant_reviews defined below takes a restaurant business_id and a set of users, and returns the reviews of that restaurant by those users. You will use this function in calculating a similarity function, in 1.5.
End of explanation
"""
Function
--------
calculate_similarity
Parameters
----------
rest1 : string
The id of restaurant 1
rest2 : string
The id of restaurant 2
df : DataFrame
A dataframe of reviews, such as the smalldf above
similarity_func : func
A function like pearson_sim above which takes two dataframes of individual
restaurant reviews made by a common set of reviewers, and the number of
common reviews. This function returns the similarity of the two restaurants
based on the common reviews.
Returns
--------
A tuple
The first element of the tuple is the similarity and the second the
common support n_common. If the similarity is a NaN, set it to 0
#your code here
def calculate_similarity(rest1, rest2, df, similarity_func):
rest1_reviewers = df[df.business_id==rest1].user_id.unique() #find all unique users who reviewed restaurant 1
    rest2_reviewers = df[df.business_id==rest2].user_id.unique() #find all unique users who reviewed restaurant 2
common_reviewers = set(rest1_reviewers).intersection(rest2_reviewers) #
n_common = len(common_reviewers)
reviews1 = get_restaurant_reviews(rest1, df, common_reviewers)
reviews2 = get_restaurant_reviews(rest2, df, common_reviewers)
sim_coef = similarity_func(reviews1, reviews2, n_common)
if np.isnan(sim_coef):
sim_coef = 0
return sim_coef,n_common
Explanation: 1.5 Write a function calculate_similarity that operates between two restaurants and calculates a similarity for them, taking a dataframe and a similarity function similarity_func. An example of the similarity_func is the pearson_sim we defined above. calculate_similarity operates as follows:
For each of the two restaurants, get the set of reviewers who have reviewed the restaurant and compute the intersection of these two sets. Also compute the number of common reviewers n_common.
Use the function get_restaurant_reviews defined below to get the reviews for each restaurant as made by these common reviewers. Notice that get_restaurant_reviews returns a sub data frame of reviews.
Calculate the similarity using similarity_func which takes the two reviews dataframes from part 2 and the number of common reviewers n_common as arguments
Return the similarity and n_common in a tuple (sim, n_common). If the similarity is a NaN, set the similarity to 0.
End of explanation
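As a quick usage check (a sketch; the two business ids below are the same pair that is queried from the similarity database further on), you could compute one pairwise similarity directly:
# returns a (similarity, common support) tuple for this pair of restaurants
print calculate_similarity("z3yFuLVrmH-3RJruPEMYKw", "zruUQvFySeXyEd7_rQixBg", smalldf, pearson_sim)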
class Database:
"A class representing a database of similaries and common supports"
def __init__(self, df):
"the constructor, takes a reviews dataframe like smalldf as its argument"
database={}
self.df=df
self.uniquebizids={v:k for (k,v) in enumerate(df.business_id.unique())}
keys=self.uniquebizids.keys()
l_keys=len(keys)
self.database_sim=np.zeros([l_keys,l_keys])
self.database_sup=np.zeros([l_keys, l_keys], dtype=np.int)
def populate_by_calculating(self, similarity_func):
        """
        A populator for every pair of businesses in df. Takes a similarity_func like
        pearson_sim as argument.
        """
items=self.uniquebizids.items()
for b1, i1 in items:
for b2, i2 in items:
if i1 < i2:
sim, nsup=calculate_similarity(b1, b2, self.df, similarity_func)
self.database_sim[i1][i2]=sim
self.database_sim[i2][i1]=sim
self.database_sup[i1][i2]=nsup
self.database_sup[i2][i1]=nsup
elif i1==i2:
nsup=self.df[self.df.business_id==b1].user_id.count()
self.database_sim[i1][i1]=1.
self.database_sup[i1][i1]=nsup
def get(self, b1, b2):
"returns a tuple of similarity,common_support given two business ids"
sim=self.database_sim[self.uniquebizids[b1]][self.uniquebizids[b2]]
nsup=self.database_sup[self.uniquebizids[b1]][self.uniquebizids[b2]]
return (sim, nsup)
Explanation: Making a database of similarities
We now move to calculating a global database of pairwise restaurant similarities.
We provide you here with a class to make a database of the similarities for each pair of restaurants. The class Database is initialized in its constructor by taking a dataframe of reviews as its argument. The method populate_by_calculating iterates over every possible pair of business_ids in the dataframe and populates the database with similarities and common supports. It takes as argument the similarity function similarity_func, like pearson_sim (calculate_similarity then uses this to calculate each similarity). The get method on the database can be used to retrieve the similarity for two business ids.
(See Thu Oct 17th's class video for information about classes)
End of explanation
db=Database(smalldf)
db.populate_by_calculating(pearson_sim)
db.get("z3yFuLVrmH-3RJruPEMYKw", "zruUQvFySeXyEd7_rQixBg")
Explanation: Let's now build the database and store the result in the global variable db. Let's print out an example entry. Running this will take a bit of time.
End of explanation
def shrunk_sim(sim, n_common, reg=3.):
"takes a similarity and shrinks it down by using the regularizer"
ssim=(n_common*sim)/(n_common+reg)
return ssim
Explanation: K-Nearest restaurants (in similarity)
We are now going to find the k-nearest restaurants to a given restaurant based on the database of similarities that we calculated. But we have a problem.
Consider the two cases where there is just one common reviewer, and where there are 40. In the former case, we might get an artificially high similarity based on the tastes of just this user, and thus we must reduce its importance in the nearest-neighbor calculation. In the latter case, we would get a much less biased estimator of the similarity of the two restaurants.
To control the effect of small common supports, we can shrink our pearson co-efficients. We shall do this by using the "regularization" parameter reg:
$$s_{mj} = \frac{N_{common}\, \rho_{mj}}{N_{common}+reg} $$
where $N_{common}$ (n_common) is the common reviewer support and $\rho_{mj}$ is the Pearson correlation coefficient.
Recall the notions of regularization introduced in class. We want to reduce the variance in our estimates, so we pull our estimates in toward a conservative point in a way that strongly corrals in estimates when there is very little data, but allows the data to speak when there is a lot. This can be shown to be equivalent to adding in a reg amount of Bayesian prior, as Joe has alluded to in class.
A good value of the regularizer is intuitively one that doesn't affect the similarity when the common support is high (~10), but has a large effect when the support is small. In this case, values of 2-4 are good. Usually, the value of reg is determined using cross-validation, but for the sake of simplicity we will generally set it to 3.
We define a function shrunk_sim which takes the sim and n_common obtained from the database, and shrinks the similarity down using the regularizer reg.
End of explanation
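To see what the shrinkage does, here is a small sketch with made-up numbers: the same raw similarity of 0.9 is damped heavily when only 2 reviewers are in common, but barely touched when 40 are.
print shrunk_sim(0.9, 2, reg=3.), shrunk_sim(0.9, 40, reg=3.)   # roughly 0.36 versus 0.84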
"""
Function
--------
knearest
Parameters
----------
restaurant_id : string
The id of the restaurant whose nearest neighbors we want
set_of_restaurants : array
The set of restaurants from which we want to find the nearest neighbors
dbase : instance of Database class.
A database of similarities, on which the get method can be used to get the similarity
    of two businesses, e.g. dbase.get(rid1, rid2)
k : int
the number of nearest neighbors desired, default 7
reg: float
the regularization.
Returns
--------
A sorted list
of the top k similar restaurants. The list is a list of tuples
(business_id, shrunken similarity, common support).
"""
#your code here
from operator import itemgetter
def knearest(restaurant_id, set_of_restaurants, dbase, k =7, reg =3):
similar = []
for rest_id in set_of_restaurants: #loop to find similarity of restaurant_id with the rest of restaurants
if rest_id != restaurant_id:
sim, n_common = dbase.get(restaurant_id, rest_id)
ssm = shrunk_sim(sim, n_common, reg)
similar.append((rest_id,ssm,n_common))
similar = sorted(similar,key =itemgetter(1), reverse = True)
return similar[0:k]
Explanation: 1.6 Now we can move to writing a knearest function, which finds the k nearest neighbors of a given restaurant based on the shrunk similarities we calculate. Note that as defined here, the nearest neighbors are global over the entire set of restaurants, as opposed to being restricted to the restaurants a user has reviewed (we shall do that in the next problem). Thus, this is an expensive function!
Write a knearest that returns a k-length sorted list of 3-tuples each corresponding to a restaurant. The tuple structure is (business_id, shrunken similarity score, common support) where the similarity score and common support are with respect to the restaurant whose neighbors we are finding, and the business_id is the id of the "nearby" restaurant found. The nearby restaurants are found from a supplied numpy array of restaurants set_of_restaurants. The spec for the function is given below. HINT: use itemgetter from the operator module to do the sorting.
End of explanation
#testbizid="eIxSLxzIlfExI6vgAbn2JA"
#testbizid2="L-uPZxooP_ziXCtRrWi8Pw"
testbizid="53YGfwmbW73JhFiemNeyzQ"
testbizid2="KoIRdcIfh3XWxiCeV1BDmA"
Explanation: OK, it's time to recommend!
Let's choose two very different businesses in the dataframe.
End of explanation
def biznamefromid(df, theid):
return df['biz_name'][df['business_id']==theid].values[0]
def usernamefromid(df, theid):
return df['user_name'][df['user_id']==theid].values[0]
print testbizid, biznamefromid(smalldf,testbizid)
print testbizid2, biznamefromid(smalldf, testbizid2)
smalldf['business_id'][smalldf['biz_name']=="Olive & Ivy"].values[0]
Explanation: We provide functions to look up a business name given a business id, and a username given a user id.
End of explanation
tops=knearest(testbizid, smalldf.business_id.unique(), db, k=7, reg=3.)
print "For ",biznamefromid(smalldf, testbizid), ", top matches are: \n"
print"No\t Restaurant Name\t Similarity\t Support"
for i, (biz_id, sim, nc) in enumerate(tops):
print "%2s" %i,"%-30s" %biznamefromid(smalldf,biz_id), "%-15.4f" %sim,"%5d" %nc # -30s -> left aligns, 30 spaces
tops2=knearest(testbizid2, smalldf.business_id.unique(), db, k=7, reg=3.)
print "For ",biznamefromid(smalldf, testbizid2), ", top matches are:\n"
print"No Restaurant Name\t\t\t\t Similarity\t Support"
for i, (biz_id, sim, nc) in enumerate(tops2):
print "%2s" %i,"%-45s" %biznamefromid(smalldf,biz_id), "%-15.4f" %sim,"%5d" %nc
Explanation: Get top matches
It's now time to answer the question: "if you liked this, you might also like these". We use our testbizid and testbizid2 to compute the k=7 nearest neighbors with a regularization of 3. We print the names of these top 7 matches, along with their similarity coefficients and common support.
End of explanation
def get_user_top_choices(user_id, df, numchoices=5):
"get the sorted top 5 restaurants for a user by the star rating the user gave them"
udf=df[df.user_id==user_id][['business_id','stars']].sort(['stars'], ascending=False).head(numchoices)
return udf
testuserid="7cR92zkDv4W3kqzii6axvg"
#testuserid="sALB8-F04S9VcZMF_GkGUA"
print "For user", usernamefromid(smalldf,testuserid), "top choices are:"
bizs=get_user_top_choices(testuserid, smalldf)['business_id'].values
[biznamefromid(smalldf, biz_id) for biz_id in bizs]
Explanation: We can see that these two restaurants are in somewhat different orbits :-).
Let's now turn our attention to another question: what are the top recommendations for a user? To answer this we must find the user's top-rated restaurants, find the nearest neighbors of these restaurants, merge these lists while removing the duplicates and the ones that the user has already rated, and sort by the restaurants' average ratings. We provide the code to get the user's top choices in a subset data frame.
End of explanation
"""
Function
--------
get_top_recos_for_user
Parameters
----------
userid : string
The id of the user for whom we want the top recommendations
df : Dataframe
The dataframe of restaurant reviews such as smalldf
dbase : instance of Database class.
A database of similarities, on which the get method can be used to get the similarity
of two businesses. e.g. dbase.get(rid1,rid2)
n: int
the n top choices of the user by star rating
k : int
the number of nearest neighbors desired, default 8
reg: float
the regularization.
Returns
--------
A sorted list
of the top recommendations. The list is a list of tuples
(business_id, business_avg). You are combining the k-nearest recommendations
for each of the user's n top choices, removing duplicates and the ones the user
has already rated.
"""
#your code here
def get_top_recos_for_user(userid,df,dbase,n=5,k=7,reg=3):
rest_ids = get_user_top_choices(userid, df, numchoices =n)["business_id"].values
rests_rated_by_user = df[df.user_id==userid].business_id.values
top_recos =[]
for rest in rest_ids:
tops=knearest(rest, df.business_id.unique(), dbase, k=k, reg=reg)
for top in tops:
if top[0] not in rests_rated_by_user:
top_recos.append(top)
#remove duplicates
ids=[e[0] for e in top_recos]
    uids = {b: 0 for b in set(ids)}  # a 0/1 seen-flag for each unique business id (b avoids shadowing the parameter k)
top_unique =[]
for e in top_recos:
if uids[e[0]] == 0:
top_unique.append(e)
uids[e[0]] = 1
top_restaurants =[]
for r,s,nc in top_unique:
avg_rating = df[df.business_id ==r].stars.mean()
top_restaurants.append((r,avg_rating))
top_restaurants = sorted(top_restaurants, key = itemgetter(1), reverse = True)
if n < len(top_restaurants):
return top_restaurants[0:n]
else:
return top_restaurants
Explanation: Get top recommendations for user.
1.7 It's your job now to write a function get_top_recos_for_user which takes as arguments a userid, the n top choices for the user, the dataframe, k, and a regularizer, and returns the top recommendations obtained from combining the restaurants that are neighbors of each of the n choices, in the way described in the previous paragraph. This returned list is a list of tuples (restaurant_id, business_avg) sorted by business_avg, where business_avg is the average rating of the restaurant over the dataframe.
End of explanation
print "For user", usernamefromid(smalldf,testuserid), "the top recommendations are:"
toprecos=get_top_recos_for_user(testuserid, smalldf, db, n=5, k=7, reg=3.)
for biz_id, biz_avg in toprecos:
print biznamefromid(smalldf,biz_id), "| Average Rating |", biz_avg
Explanation: Lets print the top recommendations for testuserid, with a regularization of 3.
End of explanation
"""
Function
--------
knearest_amongst_userrated
Parameters
----------
restaurant_id : string
The id of the restaurant whose nearest neighbors we want
user_id : string
The id of the user, in whose reviewed restaurants we want to find the neighbors
df: Dataframe
The dataframe of reviews such as smalldf
dbase : instance of Database class.
A database of similarities, on which the get method can be used to get the similarity
    of two businesses, e.g. dbase.get(rid1, rid2)
k : int
the number of nearest neighbors desired, default 7
reg: float
the regularization.
Returns
--------
A sorted list
of the top k similar restaurants. The list is a list of tuples
(business_id, shrunken similarity, common support).
"""
#your code here
from operator import itemgetter
def knearest_amongst_userrated(restaurant_id,user_id, df, dbase, k =7, reg =3):
    rests_rated_by_user = df[df.user_id==user_id].business_id.unique()
    return knearest(restaurant_id, rests_rated_by_user, dbase, k, reg)
Explanation: Problem 2: A user based recommender with predicted ratings
This is all very nice. We can provide ratings based on global similarities to a restaurant. However, in many cases this is not enough.
For example, it is hard to judge if the above recommendations are any good. In the usual testing paradigm, say that we break the dataframe into train and test. Based on the training set, I am recommended restaurant B. Now, I have rated B, but that information is in the testing set. I have no way of comparing the rating I give B in the testing set, to the similarity computed from the training set that was used to make the recommendation. The best I could do is to compare the average rating of restaurant B in the training set to my rating of restaurant B in the test set.
In this section, we shift our focus to more fine-grained predictions about each user, and try to predict what rating a user would give to a restaurant they have never tried before. To do this, we will try to personalize the information we use even further, and only pool information from restaurants that the user has rated.
This allows us to return to the original problem of prediction $Y_{um}$ for a restaurant $m$ that user $u$ has never rated before. Using our newly computed similarity metrics, we can modify our original baseline estimate by pulling in information from the user's neighborhood of the restaurant $m$, and predict $Y_{um}$ as:
$$ \hat{Y_{um}} = \hat Y^{baseline}_{um}\, + \,\frac{\sum\limits_{j \in S^{k}(m;u)} s_{mj} ( Y_{uj} - \hat Y^{baseline}_{uj} )}{\sum\limits_{j \in S^{k}(m;u)} s_{mj} } $$
where $S^{k}(m;u)$ is the set of $k$ neighbor items of item $m$ which have been rated by user $u$.
Now, this is not a particularly good assumption, especially in the situation where a restaurant is new (new item problem) or a user is new (cold start problem), or in the case when there are very few reviewers of a restaurant, or very few reviews by a user respectively. However, one must start somewhere!
Notice that in adding in the similarity term, we subtract the baseline estimate from the observed rating of the user's neighbor items.
Defining the predicted rating
2.1 Write a function knearest_amongst_userrated, analogous to the knearest function we defined above, to find the nearest k neighbors to a given restaurant from the restaurants that the user has already rated. This function will take as arguments the restaurant_id, the user_id, the dataframe of reviews, the database, the k, and the regularizer reg. Just like before, return a k-length sorted list of 3-tuples each corresponding to a restaurant. HINT: use the knearest function you defined earlier
End of explanation
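Before implementing this, here is a small worked example of the prediction formula with made-up numbers (none of these come from the data): suppose the baseline estimate for a (user, restaurant) pair is 3.8, and the two nearest user-rated neighbors have shrunk similarities 0.6 and 0.3, with the user's ratings sitting 0.5 above and 0.2 below those neighbors' own baselines.
baseline = 3.8
sims = np.array([0.6, 0.3])         # shrunk similarities s_mj of the user-rated neighbors
residuals = np.array([0.5, -0.2])   # Y_uj minus the baseline estimate for each neighbor j
print baseline + np.dot(sims, residuals) / sims.sum()   # about 4.07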
"""
Function
--------
rating
Parameters
----------
df: Dataframe
The dataframe of reviews such as smalldf
dbase : instance of Database class.
A database of similarities, on which the get method can be used to get the similarity
    of two businesses, e.g. dbase.get(rid1, rid2)
restaurant_id : string
The id of the restaurant whose nearest neighbors we want
user_id : string
The id of the user, in whose reviewed restaurants we want to find the neighbors
k : int
the number of nearest neighbors desired, default 7
reg: float
the regularization.
Returns
--------
A float
which is the imputed rating that we predict that user_id will make for restaurant_id.
"""
#your code here
def rating(df, dbase, restaurant_id, user_id, k, reg):
#calculate base line
overall_avg_rating = df.stars.mean()
avg_user_rating = df[df.user_id == user_id].user_avg.values[0]
avg_biz_rating = df[df.business_id == restaurant_id].business_avg.values[0]
prod_similarities_and_rating = 0.
sum_of_similarities = 0.
    #find the k nearest restaurants among those the user has rated, and loop through them
    similars = knearest_amongst_userrated(restaurant_id, user_id, df, dbase, k=k, reg=reg)
    for r_id, sim, n_common in similars:
        rest_reviews = df[(df.user_id == user_id) & (df.business_id == r_id)]
        user_rating = rest_reviews.stars.values[0]
        rest_avg = rest_reviews.business_avg.values[0]
        # accumulate the similarity-weighted deviation of the user's rating from this neighbor's baseline
        prod_similarities_and_rating += sim * (user_rating - (rest_avg + avg_user_rating - overall_avg_rating))
        sum_of_similarities += sim
baseline_rating = avg_user_rating + avg_biz_rating - overall_avg_rating
if(sum_of_similarities <= 0):
return baseline_rating
else:
return baseline_rating + (prod_similarities_and_rating/ sum_of_similarities)
rating(smalldf,db, '53YGfwmbW73JhFiemNeyzQ', '7cR92zkDv4W3kqzii6axvg', k =7, reg = 3 )
Explanation: 2.2 Now write a function that returns the predicted rating for a user and an item using the formula at the beginning of this problem. Include code to deal with the possibility that the sum of scores that goes in the denominator is 0: return a predicted rating of the baseline portion of the formula in that case. This function rating takes as arguments the dataframe, the database, the wanted restaurant_id and user_id, and k as well as the regularizer.
End of explanation
print "User Average", smalldf[smalldf.user_id==testuserid].stars.mean(),"for",usernamefromid(smalldf,testuserid)
print "Predicted ratings for top choices calculated earlier:"
for biz_id,biz_avg in toprecos:
print biznamefromid(smalldf, biz_id),"|",rating(smalldf, db, biz_id, testuserid, k=7, reg=3.),"|","Average",biz_avg
Explanation: For the top recommendations in the variable toprecos from the previous section, we compute the predicted rating and compare it with the average rating over all users available inside the tuples that make up toprecos. We use a k of 7 and regularization 3. For comparison we also print this user's average rating. Do you notice anything interesting about how the order has changed from when we did this with the global similarities? (for you to think, not to answer)
End of explanation
def get_other_ratings(restaurant_id, user_id, df):
"get a user's rating for a restaurant and the restaurant's average rating"
choice=df[(df.business_id==restaurant_id) & (df.user_id==user_id)]
users_score=choice.stars.values[0]
average_score=choice.business_avg.values[0]
return users_score, average_score
Explanation: Testing the ratings
Let us compare the predicted ratings with a user's ratings. Note that we are doing this on the same set that we constructed the predictions with, so this is not a validation of the procedure, but simply a check of the procedure's fit. We first write a helper function to return the user score for a restaurant, and the restaurant's average score over all users.
End of explanation
print "for user",usernamefromid(smalldf,testuserid), 'avg', smalldf[smalldf.user_id==testuserid].stars.mean()
for biz_id in bizs:
print "----------------------------------"
print biznamefromid(smalldf, biz_id)
print "Predicted Rating:",rating(smalldf, db, biz_id, testuserid, k=7, reg=3.)
u,a=get_other_ratings(biz_id, testuserid, smalldf)
print "Actual User Rating:",u,"Avg Rating",a
Explanation: For the user testuserid, we loop over the variable bizs (which is a set of restaurants the user has rated) and print the predicted rating, and the actual rating and restaurant average rating obtained using the function above. We again use k=7 and a regularization of 3.
End of explanation
def compare_results(stars_actual, stars_predicted, ylow=-10, yhigh=15, title=""):
    """
    Plot predicted results against actual results. Takes 2 arguments: a
    numpy array of actual ratings and a numpy array of predicted ratings.
    Scatterplots the predictions, a unit-slope line, line segments joining the means,
    and a filled-in area of the standard deviations.
    """
fig=plt.figure()
df=pd.DataFrame(dict(actual=stars_actual, predicted=stars_predicted))
ax=plt.scatter(df.actual, df.predicted, alpha=0.2, s=30, label="predicted")
plt.ylim([ylow,yhigh])
plt.plot([1,5],[1,5], label="slope 1")
xp=[1,2,3,4,5]
yp=df.groupby('actual').predicted.mean().values
plt.plot(xp,yp,'k', label="means")
sig=df.groupby('actual').predicted.std().values
plt.fill_between(xp, yp - sig, yp + sig,
color='k', alpha=0.2)
plt.xlabel("actual")
plt.ylabel("predicted")
plt.legend(frameon=False)
remove_border()
plt.grid(False)
plt.title(title)
print np.mean(np.abs(df.predicted) < 15)
Explanation: 2.3 Explain in words why the predicted ratings are lower than the actual ratings. How do the user average rating and restaurant average rating affect this? How does sparsity affect the predicted ratings?
your answer here
Recall that bizs (defined just above question 1.7) has restaurants sorted by Vern's actual star ratings. This means that in this sample, we are looking at Vern's top rated restaurants.
The predicted ratings are lower because these are Vern's top 5 choices, which represent the largest positive deviations away from Vern's mean rating of 3.57. Because we are looking at the upper tail of Vern's rating distribution, but pooling information together from the K nearest neighbors among Vern's rated restaurants to construct the predicted rating, the predicted ratings should fall closer to Vern's user mean than the true ones do. Taking into account the average restaurant rating helps a little bit here because we can adjust the predicted rating to reflect an overall very good restaurant, but it does not counteract the effect of looking at the upper tail of Vern's ratings.
Note that if we were to take Vern's bottom 5 restaurants, we would see the opposite effect.
In general, the larger K is (assuming that the similarities within this neighborhood are positive), the closer the predicted rating will be to Vern's user average (this is the bias limit in the bias-variance tradeoff). Similarly, the smaller K is, the more likely we are to have user ratings that are close to the observed rating (the variance limit). The sparsity of the data affects how quickly we move from the variance limit to the bias limit as we increase K. If there were a lot of very similar restaurants in the dataset that Vern had ranked very highly, even with K relatively large, it would be possible to see a predicted rating much closer to the extremely positive ratings we see here in Vern's top 5 (see the results in question 4.4). As these data are now, however, even the most similar 7 restaurants to these that Vern rated so highly lie closer to Vern's mean.
Error Analysis
This next function takes a set of actual ratings, and a set of predicted ratings, and plots the latter against the former. We can use a graph of this kind to see how well or badly we do in our predictions. Since the nearest neighbor models can have alternating positive and negative similarities (so the sum of similarity weights in the denominator can get close to zero), the ratings can get very large. Thus we restrict ourselves to be between -10 and 15 in our ratings and calculate the fraction within these bounds. We also plot the line with unit slope, line segments joining the means, and a filled-in area representing one standard deviation from the mean.
The first argument to compare_results is a numpy array of the actual star ratings obtained from the dataframe, while the second argument is the numpy array of the predicted ones. (Feel free to improve this function for your display)
End of explanation
#your code here
def plot_results(df, k, reg):
    user_ids = df.user_id.values
    rest_ids = df.business_id.values
    actual = df.stars.values
    predicted = np.zeros(len(actual))
    # predict a rating for every (user, restaurant) pair in the frame
    for count, (u_id, r_id) in enumerate(zip(user_ids, rest_ids)):
        predicted[count] = rating(df, db, r_id, u_id, k, reg)
    compare_results(actual, predicted)
#your code here
print "k=3, reg=3."
plot_results(smalldf,3,3.)
plt.title("k=3, reg=3.")
print "k=3, reg=15."
plot_results(smalldf, 3, 15.)
plt.title("k=3, reg=15.")
print "k=10, reg=3."
plot_results(smalldf,10,3.)
plt.title("k=10, reg=3.")
print "k=10, reg=15."
plot_results(smalldf, 10, 15.)
plt.title("k=10, reg=15.")
Explanation: 2.4 For each review in the data set, obtain a prediction from the entire dataframe smalldf. Use the function compare_results above to plot the predicted ratings against the observed ones. Make 4 such graphs, at k=3 and k=10, and for reg=3. and reg=15.
Note that this analysis is not strictly a model check because we are testing on the training set. However, since the user averages would change each time a cross-validation split was done on the set, we would incur the prohibitive expense of redoing the database each time. This would be better done on a cluster, using map-reduce or other techniques. While we explore map-reduce later in this homework, we shall not do any cross-validation.
Explain the results you get in the graphs in words.
End of explanation
"""
Function
--------
gamma_m_draw
Draw a single sample from the conditional posterior distribution
of gamma_m.
Inputs
-------
X_m: A g-by-L+1 matrix, defined above.
Y_m: A 1D vector of length g, defined above.
sig2: Residual _variance_, as defined above.
Lambda_gamma: Prior precision matrix.
Outputs
--------
Single draw from conditional posterior, defined above.
"""
#Item-specific parameters given all else
#your code here
def gamma_m_draw(X_m,Y_m,sig2,Lambda_gamma):
Q_m_inv = np.linalg.inv(np.dot(X_m.T,X_m)/sig2 + Lambda_gamma )
XtY = np.dot(X_m.T, Y_m)
return np.random.multivariate_normal(np.dot(Q_m_inv,XtY)/sig2 , Q_m_inv)
"""
Function
--------
theta_u_draw
Draw a single sample from the conditional posterior distribution
of theta_u.
Inputs
-------
X_u: A g-by-L+1 matrix, defined above.
Y_u: A 1D vector of length g, defined above.
sig2: Residual _variance_, as defined above.
Lambda_theta: Prior precision matrix.
Outputs
--------
Single draw from conditional posterior, defined above.
"""
#User-specific parameters given all else
#your code here
def theta_u_draw(X_u,Y_u,sig2,Lambda_theta):
Q_u_inv = np.linalg.inv(np.dot(X_u.T,X_u)/sig2 + Lambda_theta)
XtY = np.dot(X_u.T, Y_u)
return np.random.multivariate_normal(np.dot(Q_u_inv,XtY)/sig2 , Q_u_inv)
Explanation: your answer here
If you are a bit confused by the look of these graphs, you should be!
For k=3, the predicted values are quite well-behaved, with the exception of several predictions that have extremely large magnitudes. It appears that the predicted values are pulled into the mean star rating, which sits somewhere around 3.8, so ratings on the low end are overestimated, and similarly ratings on the high end underestimated. The regularization does not appear to have a strong effect when k = 3.
For k=10, the predicted values are much less stable, with many more extreme predictions. The means appear to track better with the true means. The regularization has a much more extreme, although indirect, effect on the appearance of the plot. Since regularization has stronger effects on similarity scores between restaurants that have small common support, we can see that increasing k makes the predictions more sensitive to the regularization because in a small dataset, the common support between a restaurant and it's 10-nearest one will be quite small.
Note that this example does not seem to follow the standard bias-variance tradeoff, where we would expect small k to give a unbiased estimates that capture the extremes, while we would expect large k to give biased estimates that pull extreme values toward the mean. A large reason for this failure for this example to capture this behavior is that we have defined similarity scores that can be positive or negative, and the bias-variance logic is based on the more standard setting where we average together values that have strictly positive weights. When you have negative weights, it's possible that the sum of sij's in a neighborhood can get close to zero, making our estimator Ŷ um explode in the positive or negative direction since this would entail dividing by (nearly) zero. Thus for those restaurants where the denominator goes to 0 (more likely to happen at larger k as you have more chances of it there being more weights to add), the ratings are unstable, even numerically!
This problem is less pronounced in large datasets or with small k because the k-nearest restaurants are likely to have positive similarity with the current one. However, with small datasets, we can find that even with k relatively small (in this case, around 10), there are negative similarities in the k-neighborhood that make the estimator unstable. This sort of instability would be much less pronounced in the large dataset.
If we were to rescale the similarities to be positive, say between 0 and 1, the behavior of the estimator with respect to k and reg would be quite different. (SEE BELOW!)
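One simple way to do such a rescaling (a sketch of the idea only; the assignment does not ask for it) is to map a Pearson-style similarity affinely from [-1, 1] to [0, 1] before it is used as a weight:
def rescale_similarity(sim):
    # map a similarity in [-1, 1] to a non-negative weight in [0, 1]
    return (sim + 1.) / 2.

print rescale_similarity(-0.8), rescale_similarity(0.), rescale_similarity(0.9)   # 0.1, 0.5, 0.95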
2.5 Outline a process, in words, for choosing the nearest neighbor parameter k. For this question fix the regularization parameter reg at 3.
your answer here
We could use $F$-fold cross-validation (usually called $k$-fold cross-validation, but we've already used $k$ here to mean something else!). Specifically, we could randomly partition the data into $F$ equally sized folds (pieces), making sure that each fold includes at least one rating for each restaurant and one rating by each user (it would probably best to make sure that if $K$ were the maximum $k$ you would be considering, that each user and each restaurant appear at least $K$ times in each fold). For each value of $k$, and for each fold, we could repeat the procedure above for predicting user ratings by computing similarities using $F-1$ of the folds to compute similarities and computing the prediction error in the held out fold. We could then choose the $k$ with the smallest average prediction error across the folds, and recompute the recommender on the whole dataset using the chosen value of $k$.
If we wanted to both choose a good value for $k$ and check how well we could expect the result to generalize, we could divide the dataset into $F+1$ folds, and keep the last fold out as a verification set. We could perform the cross-validation above on $F$ folds to select $k$, then upon selecting $k$ use all $F$ folds to create a recommender, and see how well this recommender predicted ratings in the final validation set.
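A minimal sketch of the bookkeeping for such an F-fold scheme (illustrative only: it ignores the caveats above about every user and restaurant appearing in every fold, and the expense of rebuilding the similarity database for each fold):
def make_folds(df, F=5, seed=0):
    # randomly assign each review to one of F folds
    np.random.seed(seed)
    return np.random.randint(0, F, size=len(df))

folds = make_folds(smalldf, F=5)
for f in range(5):
    train, test = smalldf[folds != f], smalldf[folds == f]
    # a full implementation would recompute averages on train (recompute_frame), rebuild the
    # Database of similarities on train, predict each held-out rating with rating(...), and
    # keep the k with the smallest average prediction error across folds
    print f, len(train), len(test)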
Q3 Bayesian Chocolates: Model based recommendations
In this part of the homework, you will use your newly minted Bayesian and Gibbs sampler skills to write a recommender that uses Bayesian techniques to impute ratings.
Model-Based Recommendations
A Note on Frequentist and Bayesian Procedures
In the previous section we implemented a procedure (a set of instructions for processing data) for giving recommendations and predicting user ratings for restaurants. This procedure involved a number of arbitrary choices -- for example, the particular measure of similarity between restaurants, or the weighting scheme for constructing a predicted rating. It also gave no sense of uncertainty -- in the case of giving recommendations, there was no statement about how we would expect the ranking from the procedure to compare to the user's true opinions of restaurants, and in the case of predicting ratings, there was no confidence interval for the prediction.
It is possible in repeated applications of the above procedure to see how it performs in the long run. Based on this long-run performance we could potentially justify certain functional choices and compute measurements of uncertainty. This framework of proposing a procedure first, then evaluating its performance in real or hypothetical replications of the experiment is an example of a frequentist approach to a problem. One aspect of the frequentist approach is that the proposed procedure does not necessarily have to be derived from a model (although it often is). While this means that a proposed procedure may be more flexible or robust than a model-based procedure, it also means that there is no natural way to justify certain functional choices or construct uncertainty estimates.
In contrast, the Bayesian approach to a problem always begins with a probabilistic model for how the data were generated. Assuming this model is true, the posterior distribution over unknown quantities (either parameters to be estimated or unobserved data to be predicted) gives a single coherent expression of what the observed data tell us about the unknowns. By summarizing the posterior distribution, we can derive the exact functional form of a procedure for constructing estimates or predictions. We call a procedure derived from this Bayesian approach a Bayes rule (not to be confused with Bayes' Theorem). Using the posterior distribution, we can also give a sense of how uncertain we are about the estimate or prediction we have constructed.
Outline for this Problem
In this section, we construct a model of how ratings are generated, and use this model to build a recommendation and ratings prediction system. We will take a Bayesian approach here, and construct our estimates and predictions from summaries of the posterior distribution of the model's parameters, which we will compute using a Gibbs sampler. We will also give measures of uncertainty based on the posterior distribution. We will evaluate predictions from this approach in the same way we evalutated predictions from the KNN procedure above.
The Latent Factor Model
Model Overview
The central dogma in constructing a recommendation system using collaborative filtering is that similar users will rate similar restaurants similarly. In the previous section, we explicitly encoded this idea by using a similarity function to identify similar restaurants. We also assumed that either all users were the same (the global approach) or that only the current user was similar enough to make a recommendation (the user-specific approach). In this section, we will use a model that allows us to identify both similar users and similar restaurants as a function of latent factors.
We can think of latent factors as properties of restaurants (e.g., spiciness of food or price) that users have a positive or negative preference for. We do not observe these factors or the users' preferences directly, but we assume that they affect how users tend to rate restaurants. For example, if a restaurant serves a lot of spicy food and a user dislikes spicy food, then the restaurant would have a high "spiciness" factor, and the user would have a strongly negative preference, resulting in a prediction of a low rating. Note that if users have similar preferences, then according to the model, they will behave similarly, and likewise, if restaurants have similar latent factors, they will be rated similarly by similar users. Latent factors thus give us an intuitive way to specify a generative model that obeys the central dogma.
One issue that comes up with latent factor models is determining how many latent factors to include. There may be a number of different unmeasured properties that affect ratings in different ways -- for example, in addition to the spiciness factor above, there may also be a price factor that affects how users rate a restaurant. We deal with the problem of choosing the number of latent factors to include in the same way we deal with choosing $K$ in a $K$-nearest neighbors problem.
Rating Model Specification
To make this model concrete, we can write down our probability model as a generative process. First, we define the following quantities:
Counts:
$L$: The number of latent factors.
$U$: The number of users.
$M$: The number of items (restaurants).
$N$: The number of observed ratings.
Data:
$Y_{um}$: The star rating given to restaurant $m$ by user $u$.
$Y$: The full collection of observed star ratings.
Item-specific quantities:
$\gamma_m$: An item-specific parameter vector of length $L+1$. The first element of $\gamma_m$, denoted $\gamma_m[0]$ is the item-specific bias. The remaining $L$ elements of $\gamma_m$, denoted $\gamma_m[1:]$, are the latent factors associated with item $m$.
$\Gamma$: An $M$ by $L+1$ matrix where the $m$th row is $\gamma_m$.
User-specific quantities:
$\theta_u$: A user-specific parameter vector of length $L+1$. The first element of $\theta_u$, denoted $\theta_u[0]$ is the user-specific bias. The remaining $L$ elements of $\theta_u$, denoted $\theta_u[1:]$, are user $u$'s preferences for the latent factors.
$\Theta$: A $U$ by $L+1$ matrix where the $u$th row is $\theta_u$.
Global quantities:
$\mu$: The overall ratings mean.
$\sigma$: The residual variance of ratings after the mean, bias terms, and latent factors have been taken into account.
Using these quantities, we can specify our model for each rating $Y_{um}$ similarly to a linear regression:
$$Y_{um} = \mu + \theta_{u}[0] + \gamma_{m}[0] + \theta_{u}[1:]^{\top}\gamma_{m}[1:] + \epsilon_{um}$$
where
$$\epsilon_{um} \sim N(0, \sigma).$$
Note that while this looks like a linear regression, it is of a slightly different form because the latent factor term involves the product of two unknowns. This is like a linear regression where we forgot to measure some covariates.
We also assume the following priors on the user-specific and item-specific parameters:
$$
\begin{align}
\gamma_m &\sim MVN(\mathbf 0, \Lambda_\gamma^{-1})\\
\theta_u &\sim MVN(\mathbf 0, \Lambda_\theta^{-1}),
\end{align}
$$
where $MVN$ means multivariate normal, $\mathbf 0$ is a vector of length $L+1$ filled with zeros, and $\Lambda_\theta^{-1}$ and $\Lambda_\gamma^{-1}$ are $(L+1) \times (L+1)$ covariance matrices. $\mu$ and $\sigma$ also have priors, but they are not relevant to your task so we won't write them here.
Goal for this Model
Using this model, we want to make inference about all of the quantities that, if we knew them, would allow us to sample $Y_{um}$ for any user and any item. These quantities are $\mu$, $\sigma$, and the elements of $\Theta$ and $\Gamma$.
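To make the roles of these quantities concrete, here is a tiny simulation of the generative process. This is only a sketch: the sizes, the overall mean, the noise scale, and the identity prior covariance are all made up and have nothing to do with the Yelp data.
import numpy as np

L, U, M = 2, 5, 4           # latent factors, users, items (arbitrary)
mu, sigma = 3.8, 0.5        # overall mean and residual noise scale (arbitrary)
prior_cov = np.eye(L + 1)   # identity prior covariance, just for the sketch
Theta = np.random.multivariate_normal(np.zeros(L + 1), prior_cov, size=U)
Gamma = np.random.multivariate_normal(np.zeros(L + 1), prior_cov, size=M)

def draw_rating(u, m):
    # Y_um = mu + user bias + item bias + preferences . factors + noise
    mean = mu + Theta[u, 0] + Gamma[m, 0] + np.dot(Theta[u, 1:], Gamma[m, 1:])
    return mean + np.random.normal(0, sigma)

print draw_rating(0, 0)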
3.1: Given the goal specified above, how many quantities (counting a vector of $L$ items as $L$ quantities) are we trying to make inference about? Express your answer in terms of the variables in the "Counts" section above.
your answer here
There are $1 + 1 + M\times(L+1) + U\times(L+1) = 2 + (M + U)(L+1)$ quantities: $\mu$, $\sigma$, the $M$ rows of $\Gamma$, and the $U$ rows of $\Theta$.
Gibbs Sampling from the Posterior
Our goal is to compute the posterior distribution over the unknowns $\mu$, $\sigma$, $\Gamma$, and $\Theta$ given $Y$, which reflects how much we know about these quantities given the data we have observed. We write this distribution as $P(\mu, \sigma, \Gamma, \Theta \mid Y)$.
The most general way to learn about the posterior distribution is to sample from it. This can be challenging, particularly in problems that are very high dimensional (see your answer to the question above). One strategy for sampling from high-dimensional distributions is Gibbs sampling, which we discussed in class and lab.
Gibbs sampling breaks down the posterior probability distribution into blocks of unknowns, and samples iteratively from each block assuming that the values of the other blocks (and the data) are known and fixed. In this case, we will break down the posterior distribution into blocks of $\mu$, $\sigma$, each vector $\gamma_m$, and each vector $\theta_u$. We have already implemented the draws for $\mu$ and $\sigma$. You will need to implement the draws for each $\gamma_m$ and each $\theta_u$. Luckily, the structures of these draws are similar, so you will only need to implement two functions.
First, we'll derive the form of the draws below. Note that you don't need to be able to follow these derivations fully -- you'll just need to be able to use the result at the end.
Distribution of $\gamma_{m'}$ given $Y, \mu, \sigma, \Gamma_{-m'}, \Theta$
Intuitively, this is the distribution of the item-specific parameters for item $m'$, imagining that all of the other unknowns are fixed.
More precisely, we want to draw from the distribution of $\gamma_{m'}$ conditional on the data $Y$ and all other unknowns -- that is, $\mu$, $\sigma$, all of $\Theta$, and all of $\Gamma$ except for $\gamma_{m'}$, which we denote $\Gamma_{-m'}$.
Note that in the model specification above, the only places that $\gamma_{m'}$ appears are in the regression equations for each $Y_{um}$ that involves item $m'$. If we write out just these equations, we get a system of the following form,
$$Y_{um'} = \mu + \theta_{u}[0] + \gamma_{m'}[0] + \theta_{u}[1:]^{\top}\gamma_{m'}[1:] + \epsilon_{um'},$$
with one equation for each $u$ that rated item $m'$. Now, because $\mu$ and every $\theta_u$ are held fixed in this conditional draw, everything in these equations except $\gamma_{m'}$ is fully known.
If we move all of the fully known terms to the left-hand side, we obtain the system:
$$Y_{um'} - \mu - \theta_{u}[0] = \gamma_{m'}[0] + \theta_{u}[1:]^{\top}\gamma_{m'}[1:] + \epsilon_{um'}.$$
Notice that, because we assume that $\theta_{u}$ is known, this equation now fits cleanly into the form of a linear regression, where $\gamma_{m'}$ is the vector of unknown coefficients. This means that the posterior distribution for $\gamma_{m'}$ conditional on everything else is the same as the posterior for the coefficients of a Bayesian linear regression of $(Y_{um'} - \mu - \theta_{u}[0])$ on $\theta_{u}[1:]$ and an intercept.
Let's denote the set of users who rated item $m'$ as $(u_1, \cdots, u_g)$. Then, we can define the following vector and matrix:
\begin{align}
Y_{m'} = \left(\begin{array}{c} Y_{u_1m'}-\mu-\theta_{u_1}[0]\\ \vdots \\ Y_{u_gm'}-\mu-\theta_{u_g}[0]\end{array}\right), \qquad
X_{m'} &= \left(\begin{array}{cc} 1 & \theta_{u_1}[1:]^\top \\ \vdots & \vdots \\ 1 & \theta_{u_g}[1:]^\top\end{array}\right),
\end{align}
where $Y_{m'}$ is a vector of length $g$ and $X_{m'}$ is a $g \times (L+1)$ matrix.
The draw from $\gamma_{m'}$ given everything else then has the form:
$$ \gamma_{m'} \mid Y, \mu, \sigma, \Gamma_{-m'}, \Theta \sim MVN\left(Q_{m'}^{-1} \frac{1}{\sigma^2}X_{m'}^\top Y_{m'}, Q_{m'}^{-1}\right)$$
where
$$ Q_{m'} = \left(\frac{1}{\sigma^2}X_{m'}^\top X_{m'} + \Lambda_\gamma\right).$$
Distribution of $\theta_{u'}$ given $Y, \mu, \sigma, \Gamma, \Theta_{-u'}$
Intuitively, this is the distribution of the user-specific parameters for user $u'$, imagining that all of the other unknowns are fixed.
We can use a very similar argument to the one above. We can denote the set of items rated by user $u'$ as $(m_1, \cdots, m_g)$ and define the vector and matrix:
\begin{align}
Y_{u'} = \left(\begin{array}{c} Y_{u'm_1}-\mu-\gamma_{m_1}[0] \\ \vdots \\ Y_{u'm_g}-\mu-\gamma_{m_g}[0]\end{array}\right), \qquad
X_{u'} &= \left(\begin{array}{cc} 1 & \gamma_{m_1}[1:]^\top \\ \vdots & \vdots \\ 1 & \gamma_{m_g}[1:]^\top\end{array}\right),
\end{align}
where $Y_{u'}$ is a vector of length $g$ and $X_{u'}$ is a $g \times (L+1)$ matrix.
The draw from $\theta_{u'}$ given everything else has the form:
$$ \theta_{u'} \mid Y, \mu, \sigma, \Gamma, \Theta_{-u'} \sim MVN\left(Q_{u'}^{-1} \frac{1}{\sigma^2}X_{u'}^\top Y_{u'}, Q_{u'}^{-1}\right)$$
where
$$ Q_{u'}= \left(\frac{1}{\sigma^2}X_{u'}^\top X_{u'} + \Lambda_\theta\right).$$
3.2 We will only ask you to implement a tiny portion of the Gibbs sampler. Complete the following functions that implement the conditional posterior draws for $\gamma_m$ and $\theta_u$ derived above.
Hint: np.random.multivariate_normal is a good function to know.
End of explanation
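The two draw functions asked for in 3.2 are not shown in this write-up. Below is one possible implementation (a sketch, not necessarily the official solution), following the conditional-posterior formulas above and matching the call signatures gamma_m_draw(X_m, Y_m, sig2, Lambda_gamma) and theta_u_draw(X_u, Y_u, sig2, Lambda_theta) used in the sampler skeleton that follows.
import numpy as np

def gamma_m_draw(X_m, Y_m, sig2, Lambda_gamma):
    # Q_m = (1/sig2) X_m^T X_m + Lambda_gamma
    Q_m_inv = np.linalg.inv(np.dot(X_m.T, X_m) / sig2 + Lambda_gamma)
    # posterior mean: Q_m^{-1} (1/sig2) X_m^T Y_m
    mean = np.dot(Q_m_inv, np.dot(X_m.T, Y_m) / sig2)
    return np.random.multivariate_normal(mean, Q_m_inv)

def theta_u_draw(X_u, Y_u, sig2, Lambda_theta):
    # Q_u = (1/sig2) X_u^T X_u + Lambda_theta
    Q_u_inv = np.linalg.inv(np.dot(X_u.T, X_u) / sig2 + Lambda_theta)
    # posterior mean: Q_u^{-1} (1/sig2) X_u^T Y_u
    mean = np.dot(Q_u_inv, np.dot(X_u.T, Y_u) / sig2)
    return np.random.multivariate_normal(mean, Q_u_inv)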
Function
--------
factor_gibbs
Runs a gibbs sampler to infer mean, variance, user-specific, and item-specific
parameters.
Inputs
-------
data: A dataframe containing ratings data.
L: Dimension of latent factors.
maxit: Number of samples to draw from posterior.
Lambda_theta_diag: Hyperparameter controlling regularization of Theta.
Lambda_gamma_diag: Hyperparameter controlling regularization of Gamma.
progress: if true, print iteration number every 100 iterations.
Outputs
--------
Dictionary with elements
mu: Draws of mu. 1D array of length maxiter.
sig2: Draws of sig2, residual _variance_. 1D array of length maxiter.
theta: Draws of Theta. U-by-L-by-maxiter array.
gamma: Draws of Gamma. M-by-L-by-maxiter array.
EY: Draws of fitted values of Y. N-by-maxiter array.
def factor_gibbs(data, L, maxit, Lambda_theta_diag, Lambda_gamma_diag, progress=True):
data = data.copy()
N = data.shape[0]
#Create indices that allow us to map users and restaurants to rows
#in parameter vectors.
uusers, uidx = np.unique(data.user_id, return_inverse=True)
uitems, midx = np.unique(data.business_id, return_inverse=True)
nusers = uusers.size
nitems = uitems.size
#Add numerical indices to dataframe.
data["uidx"] = uidx
data["midx"] = midx
#Group observations by user and by business.
ugroups = data.groupby("uidx")
mgroups = data.groupby("midx")
all_avg = data.stars.mean()
u_avg = ugroups.stars.mean()
m_avg = mgroups.stars.mean()
#Initialize parameters and set up data structures for
#holding draws.
#Overall mean
mu = all_avg
mu_draws = np.zeros(maxit)
#Residual variance
sig2 = 0.5
sig2_draws = np.zeros(maxit)
#Matrix of user-specific bias and L latent factors.
theta = np.zeros([nusers, L+1])
theta[:,0] = u_avg-all_avg
theta_draws = np.zeros([nusers, L+1, maxit])
#Matrix of item-specific bias and L latent factors.
gamma = np.zeros([nitems, L+1])
gamma[:,0] = m_avg-all_avg
gamma_draws = np.zeros([nitems, L+1, maxit])
#Matrix for holding the expected number of stars
#for each observation at each draw from the posterior.
EY_draws = np.zeros([data.shape[0], maxit])
#Inverse covariance matrices from the prior on each theta_u
#and gamma_b. These are diagonal, like Ridge regression.
Lambda_theta = np.eye(L+1)*Lambda_theta_diag
Lambda_gamma = np.eye(L+1)*Lambda_gamma_diag
#Main sampler code
for i in range(maxit):
if i%100==0 and progress:
print i
#The entire regression equation except for the overall mean.
nomu = np.sum(theta[data.uidx,1:]*gamma[data.midx,1:], axis=1) +\
theta[data.uidx,0] + gamma[data.midx,0]
#Compute the expectation of each observation given the current
#parameter values.
EY_draws[:,i]=mu+nomu
#Draw overall mean from a normal distribution
mu = np.random.normal(np.mean(data.stars-nomu), np.sqrt(sig2/N))
#Draw overall residual variance from a scaled inverse-Chi squared distribution.
sig2 = np.sum(np.power(data.stars-nomu-mu,2))/np.random.chisquare(N-2)
#For each item
for mi,itemdf in mgroups:
#Gather relevant observations, and subtract out overall mean and
#user-specific biases, which we are holding fixed.
Y_m = itemdf.stars-mu-theta[itemdf.uidx,0]
#Build the regression design matrix implied by holding user factors
#fixed.
X_m = np.hstack((np.ones([itemdf.shape[0],1]),
theta[itemdf.uidx,1:]))
gamma[mi,:] = gamma_m_draw(X_m, Y_m, sig2, Lambda_gamma)
#For each user
for ui,userdf in ugroups:
#Gather relevant observations, and subtract out overall mean and
#business-specific biases, which we are holding fixed.
Y_u = userdf.stars-mu-gamma[userdf.midx,0]
#Build the regression design matrix implied by holding business factors
#fixed.
X_u = np.hstack((np.ones([userdf.shape[0],1]),
gamma[userdf.midx,1:]))
theta[ui,:] = theta_u_draw(X_u, Y_u, sig2, Lambda_theta)
#Record draws
mu_draws[i] = mu
sig2_draws[i] = sig2
theta_draws[:,:,i] = theta
gamma_draws[:,:,i] = gamma
return {"mu": mu_draws, "sig2": sig2_draws,
"theta": theta_draws, "gamma": gamma_draws,
"EY": EY_draws}
Explanation: Here is the Gibbs sampler skeleton that your functions fit into. Look over the structure to see how for each draw from the posterior, the sampler iterates through $\mu$, $\sigma$, $\gamma_m$ for each item, and $\theta_u$ for each user.
End of explanation
#your code here
gibbs_sample = factor_gibbs(smalldf, 2, 1000, 0.1, 0.1, progress=True)
burnin = 200
prediction =np.mean(gibbs_sample["EY"][:,burnin:],axis = 1)
Explanation: Posterior Summaries
Once you have posterior draws from the sampler, the most natural thing to do is to compute the posterior mean of each quantity you are interested in. To do this, we simply need to take the average value of each quantity across the samples drawn from the sampler. Before taking the average, however, we will want to ignore the first 20-30% of samples because these correspond to the burn-in period, the time during which the sampler is still looking for the main meat of the distribution.
Ok it's time to recommend!
3.3 Now that you have the Gibbs sampler, draw 1000 samples from the posterior distribution using a two-dimensional latent factor and prior precisions Lambda_theta_diag and Lambda_gamma_diag both equal to 0.1.
Compute the posterior mean of the fitted values for each $Y_{um}$, eliminating the first 200 samples. Call these the prediction. These constitute our recommendations. True to the Bayesian paradigm, we don't just have mean predictions, but entire distributions. But currently we are only interested in the means.
End of explanation
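Since the sampler returns full posterior draws rather than just point estimates, we can also summarize uncertainty directly from the draws. Here is a small sketch using the gibbs_sample and burnin variables defined above; the central 95% interval is just one common choice of summary.
EY_post = gibbs_sample["EY"][:, burnin:]   # keep only post-burn-in draws
lower = np.percentile(EY_post, 2.5, axis=1)
upper = np.percentile(EY_post, 97.5, axis=1)
print lower[:5]
print upper[:5]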
#your code here
compare_results(smalldf.stars.values, prediction, ylow=1, yhigh=5, title="From Gibbs Sampler")
Explanation: Plot the predictions against the observed data. You can use the compare_results function defined in the previous section. How do the fitted values compare to those from the KNN procedure?
End of explanation
gibbs_out = factor_gibbs(smalldf, 15, 1000, 0.1, 0.1)
burnin = 200
predicted=np.mean(gibbs_out['EY'][:,burnin:], axis=1)
compare_results(smalldf.stars.values, predicted, ylow=1, yhigh=5, title="From Gibbs Sampler")
Explanation: your answer here
The results from the latent factor model appear to be better-behaved than those from the KNN procedure (with negative similarities) since there are no extreme values. In terms of the non-extreme values from the KNN procedure, the latent factor model appears to make similar predictions to the KNN procedure when k = 3. Specifically, the average prediction line for each class is too compressed toward the total data mean of around 3.8, meaning that we are in the bias limit where we are pooling too much information between ratings. Thus, we are overpredicting low ratings and underpredicting high ratings.
(If we compare to the KNN procedure with strictly positive weights, as in the homework addendum, we see again that the recommenders make comparable predictions, although the latent factor model again appears to fit slightly better than the KNN procedure at both its bias and variance limits.)
There is also a bias-variance tradeoff here. In this case, it appears that we have proposed a model that is too simple (thus close to the bias limit) because of how the ratings are pulled in toward the data mean. In this case, proposing more latent factors increases the flexibility of the model, and thus moves us toward the variance limit. Thus, the plot suggests that we may want to reduce the bias at the cost of increasing variance, for example by considering a latent factor model with more factors (say, L=15) to obtain a better fit (see ADDENDUM below).
End of explanation
subsetoffull=fulldf[['user_id','business_id', 'stars','business_avg','user_avg']]
subsetoffull.to_csv("subset-full.csv", index=False, header=False)
subsetofsmall=smalldf[['user_id','business_id', 'stars','business_avg','user_avg']]
subsetofsmall.to_csv("subset-small.csv", index=False, header=False)
Explanation: Q4 Scaling Up
All our recommenders suffer from problems having to do with the fact that we subsetted an already sparse user-item matrix. The more items we have, the more items we may find in the vicinity of a given item, and thus we are likely to give a more robust average rating to the given item.
In this problem we shall use Amazon Elastic Map-Reduce to tackle the entire user-restaurant matrix. We shall do this in two parts: we'll use MRJob locally on your machine on the smaller data set to calculate the Pearson similarity database, and then we'll tackle the entire data set on Amazon.
The larger set has 35000 users and 4500 items. Computing the 4500X4500 similarity matrix on one machine will be prohibitively expensive. Thus we'll adopt a strategy where we'll split the calculation over multiple machines using the map-reduce paradigm, with mappers and reducers working on multiple machines
Then we calculate the k-nearest neighbors in the 'space' of the user: this involves a database lookup and an iteration over the items a user has rated. Since the latter is usually not a very large number, this computation can be managed on a front end machine (even if storing the database will take a lot of memory).
We'll first create subset data frames, which have just those columns which we will send to the map-reduce. We'll also strip out the header and index of the frame. The reason for doing this is: unless we pre-populate the machines on Amazon with software, we can rely only on the regular python library, numpy, and scipy being there (and at python 2.6), and thus we will need to parse the csv file, line by line (mrjob uses hadoop's stream protocol and thus needs to be fed line by line).
End of explanation
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter
from IPython.display import HTML
import urllib
skelcode = urllib.urlopen("https://raw.github.com/cs109/content/master/skeleton.py").read()
skelhtml=highlight(skelcode, PythonLexer(), HtmlFormatter())
HTML(skelhtml)
Explanation: Running mrjob locally
mrjob scripts cannot be run from the ipython notebook, as they fork themselves on execution. Thus you must write the code for mrjob in a separate file which you must submit along with this homework, in the same folder as the python notebook file.
If you have not done so already (you were supposed to do this as part of HW 0), you will first need to install mrjob. The appropriate equivalent of the following incantation should do the job:
~/anaconda/bin/pip install mrjob
To familiarize yourself with the structure of an mrjob script, please read this. Run the examples in that document to familiarize yourself with mrjob.
The kind of script you will be writing is in the section "Writing your second job" in that document.
All mrjob tasks use the map-reduce strategy to divide up computation across computers. You should work through the mrjob tutorial to gain familiarity with this, but we’ll also outline the basic process here:
During the first map step, mrjob calls a mapper function with a key (which for the first step is None), and a value (which for the first step is a line of data from an input file). This function does whatever it wants with this data, and yields a key and value. The key is used in step 2 to gather up the values from all the different mappers into groups
mrjob collects the outputs from all the mappers, and gathers them into subsets with the same key value (this is similar to what pandas.groupby does). It passes each of these subsets to a reducer (or “collector”) function, whose job is to synthesize this list of grouped data into something useful (e.g., computing the mean of all the inputs). It then yields the key and reduced value.
If there are any additional steps, mrjob feeds each output from a reducer function in step 2 to the next mapper. Otherwise, it prints the output.
The point behind map-reduce is to agree upon a common framework to split up a large computational job into smaller tasks. mrjob then has a lot of freedom to organize how these tasks run in parallel, on many machines.
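As a concrete (if unrelated to restaurants) illustration of this mapper/reducer structure, here is roughly the minimal character/word/line counting job from the mrjob documentation; treat it as a sketch for orientation, not something to submit.
from mrjob.job import MRJob

class MRWordFrequencyCount(MRJob):

    def mapper(self, _, line):
        # step 1: emit key-value pairs for each input line
        yield "chars", len(line)
        yield "words", len(line.split())
        yield "lines", 1

    def reducer(self, key, values):
        # step 2: all values sharing a key are gathered and combined here
        yield key, sum(values)

if __name__ == '__main__':
    MRWordFrequencyCount.run()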
Writing your script
4.1 Write a MRJOB script, called computesim.py. The object of this script is to take a csv file and return a tuple (rho, n_common) as calculate_similarity for pairs of restaurants. See skeleton.py below for the SPEC of this file. Your job is to fill in those methods. You MUST use this skeleton.
This script is to be run like so (substitute your own operating system's call):
~/anaconda/bin/python computesim.py subset-small.csv > output.small.local.txt
Thus, when the script below is run in this fashion, mrjob will read the data line-by-line from subset-small.csv, and pass it to the first "step".
Algorithm to calculate pearson similarities
Here is the description of the algorithm for RestaurantSimilarities.
Your code will have two steps. Each step will have a mapper and a reducer. These are described in turn here:
line_mapper will split the line, yielding the user_id as key, and the rest as value. This method's implementation is provided for you.
users_items_collector is a reducer. It is passed ALL mapper outputs corresponding to a particular user_id. Put these emissions into a list, and re-emit the user_id with this list.
pair_items_mapper takes the user_id and the list. It doesn't do anything with the user_id; however, it takes every combination (thus len(list) choose 2) of 2 business_ids from the passed on list (see combinations in itertools in the python documentation) and sends on the remaining information keyed on the tuple (restaurant1, restaurant2). Be sure to handle the case where the restaurant id's are flipped: include them somehow under the same key.
calc_sim_collector is passed ALL sent on list information for the pair of restaurants that was emitted in the previous step. Note that these will come from different user_ids. This sort of collection is key to this style of programming. This list information should now correspond to all the common support of the two restaurants. Use this information to calculate this common support and the pearson similarity. Return the aforementioned tuple by yielding it keyed by the tuple of restaurants. This information will be sent to the output file. The output keys and values will both be in JSON format, separated by a tab.
The output should be saved in a file via redirection as output.small.local.txt
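To make the data flow of the four steps above concrete, here is a rough sketch of how they might fit together. This is not the official skeleton.py, which you must still use; it assumes a reasonably recent mrjob where MRStep is available, and it centers ratings by the user average, which is only one reasonable choice for the Pearson calculation.
from itertools import combinations
import numpy as np
from mrjob.job import MRJob
from mrjob.step import MRStep

class RestaurantSimilaritiesSketch(MRJob):

    def line_mapper(self, _, line):
        # each line: user_id,business_id,stars,business_avg,user_avg
        user_id, business_id, stars, business_avg, user_avg = line.split(',')
        yield user_id, (business_id, float(stars), float(business_avg), float(user_avg))

    def users_items_collector(self, user_id, values):
        # gather every (business, rating, ...) tuple for this user into one list
        yield user_id, list(values)

    def pair_items_mapper(self, user_id, values):
        # emit every pair of businesses this user rated, under a canonical (sorted) key
        for rest1, rest2 in combinations(sorted(values), 2):
            yield (rest1[0], rest2[0]), (rest1[1:], rest2[1:])

    def calc_sim_collector(self, key, values):
        # values: one entry per user who rated both restaurants (the common support)
        diffs1, diffs2, n_common = [], [], 0
        for (stars1, bavg1, uavg1), (stars2, bavg2, uavg2) in values:
            diffs1.append(stars1 - uavg1)  # user-mean-centered ratings (one choice)
            diffs2.append(stars2 - uavg2)
            n_common += 1
        if n_common < 2 or np.std(diffs1) == 0 or np.std(diffs2) == 0:
            rho = 0.
        else:
            rho = float(np.corrcoef(diffs1, diffs2)[0, 1])
        yield key, (rho, n_common)

    def steps(self):
        return [MRStep(mapper=self.line_mapper, reducer=self.users_items_collector),
                MRStep(mapper=self.pair_items_mapper, reducer=self.calc_sim_collector)]

if __name__ == '__main__':
    RestaurantSimilaritiesSketch.run()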
Skeleton File for this problem
You can access it here or just run the next cell to see it.
End of explanation
def upper_generator(words):
for word in words:
yield word.upper()
words = ['a', 'couple', 'of', 'words', 'to', 'process']
print upper_generator(words)
print list(upper_generator(words))
for u in upper_generator(words):
print u
Explanation: Explanation for those funny yield keywords
The functions above “yield” values, and do not “return” them. They are generators. Here is an example:
End of explanation
#thecode = open("computesim.py").read()
#thehtml=highlight(thecode, PythonLexer(), HtmlFormatter())
#HTML(thehtml)
Explanation: You can read more here. Also see Thu Oct 17th's class video for information about classes and generators.
Include computesim.py in your submission in the same folder as the notebook. Uncommenting and running the following cell should output your code in here.
End of explanation
output_small_local=[[json.loads(j) for j in line.strip().split("\t")] for line in open("./output.small.local.txt")]
output_small_local[0]
Explanation: Checking the results
Let us load the data from the file
End of explanation
def make_database_from_pairs(df, bizpairs):
make the database from the pairs returned from mrjob.
df is the dataframe, smalldf or fulldf.
bizpairs are a list of elements, each of which is a list of two
lists. The first of these lists has the two business id's, while
the second has the similarity and the common support
Returns an instance of the Database class.
dbase=Database(df)
cache={}
for bp,corrs in bizpairs:
b1,b2=bp
i1=dbase.uniquebizids[b1]
i2=dbase.uniquebizids[b2]
sim,nsup=corrs
dbase.database_sim[i1][i2]=sim
dbase.database_sim[i2][i1]=sim
dbase.database_sup[i1][i2]=nsup
dbase.database_sup[i2][i1]=nsup
if cache.has_key(b1):
nsup1=cache[b1]
else:
nsup1=dbase.df[dbase.df.business_id==b1].user_id.count()
cache[b1]=nsup1
if cache.has_key(b2):
nsup2=cache[b2]
else:
nsup2=dbase.df[dbase.df.business_id==b2].user_id.count()
cache[b2]=nsup2
dbase.database_sim[i1][i1]=1.0
dbase.database_sim[i2][i2]=1.0
dbase.database_sup[i1][i1]=nsup1
dbase.database_sup[i2][i2]=nsup2
return dbase
Explanation: We will Implement a function make_database_from_pairs which takes a dataframe of restaurants smalldf and the output parsed in the previous command to create the database like before. By the nature of the map-reduce algorithms these only contain those restaurant pairs with common support. The Database constructor initializes the remaining similarities to 0.
The function will take the dataframe and bizpairs obtained by parsing the EMR output file which have the key of business pairs and value the pair of pearson correlation and n_common. It will return an instance of the Database class.
This function will take a long time to run on large data sets.
End of explanation
db_mrjob_local=make_database_from_pairs(smalldf, output_small_local)
Explanation: We will store the output in variable db_mrjob_local.
End of explanation
print db.get("zruUQvFySeXyEd7_rQixBg", "z3yFuLVrmH-3RJruPEMYKw")
print db_mrjob_local.get("zruUQvFySeXyEd7_rQixBg", "z3yFuLVrmH-3RJruPEMYKw")
Explanation: We print a pair to see that our answers are identical.
End of explanation
sums=0.
count=0
for k in db.uniquebizids.keys():
for k2 in db.uniquebizids.keys():
count=count+1
sums=sums+db.get(k,k2)[0]-db_mrjob_local.get(k,k2)[0]
print sums, count
Explanation: 4.2 Let's test that our results are overall the same as before
End of explanation
output_full_emr=[[json.loads(j) for j in l.strip().split("\t")] for l in open("./output.full.emr.txt")]
Explanation: Running on Amazon Elastic Map Reduce(EMR)
At this point, we shall shift to running on Amazon EMR.
Follow the instructions below (successfully run by Manoj)
Make sure the AWS account is opened and the Key file (CS109.pem in this case) is generated.
Run chmod og-rwx /Users/apple/Dropbox/DataScience/ML/Admin/CS109.pem so that ssh will be happy
Create an mr config file 'mrjob.config' in /etc (Go to folder /etc) with the below contents:
runners:
emr:
aws_access_key_id:AKIAJ3NXQU4EPLLSE6CQ
aws_secret_access_key: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
ec2_key_pair: CS109
ec2_key_pair_file: /Users/apple/Dropbox/DataScience/ML/Admin/CS109.pem # ~/ and $ENV_VARS allowed here
ssh_tunnel: true
Create boto config file (touch ~/.boto, vi ~/.boto) with the following contents:
[Credentials]
aws_access_key_id:AKIAJ3NXQU4EPLLSE6CQ
aws_secret_access_key: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
Run
~/anaconda/bin/python computesim.py -r emr --num-ec2-instances 2 subset-small.csv > output.small.emr.txt
See the log trail upon running the above:
```
No configs found; falling back on auto-configuration
num_ec2_instances is deprecated; set num_core_instances to 1 instead
Auto-created temp S3 bucket mrjob-5179031e01afd90b
Using s3://mrjob-5179031e01afd90b/tmp/ as our temp dir on S3
Creating temp directory /var/folders/x5/650ndbx116g04pzh2xwxtrp40000gn/T/computesim.apple.20160919.043438.080233
Copying local files to s3://mrjob-5179031e01afd90b/tmp/computesim.apple.20160919.043438.080233/files/...
Auto-created instance profile mrjob-604f15e493b94887
Auto-created service role mrjob-0de5ff0e438b1170
Created new cluster j-NI45QLK9ZDGD
Waiting for step 1 of 2 (s-Z0UTL6J582TX) to complete...
PENDING (cluster is STARTING)
PENDING (cluster is STARTING)
PENDING (cluster is STARTING)
PENDING (cluster is STARTING)
PENDING (cluster is STARTING)
PENDING (cluster is STARTING)
PENDING (cluster is STARTING)
PENDING (cluster is STARTING)
PENDING (cluster is STARTING: Configuring cluster software)
PENDING (cluster is STARTING: Configuring cluster software)
PENDING (cluster is BOOTSTRAPPING: Running bootstrap actions)
PENDING (cluster is BOOTSTRAPPING: Running bootstrap actions)
PENDING (cluster is BOOTSTRAPPING: Running bootstrap actions)
PENDING (cluster is BOOTSTRAPPING: Running bootstrap actions)
RUNNING for 8.1s
RUNNING for 39.7s
RUNNING for 71.1s
RUNNING for 102.6s
COMPLETED
Attempting to fetch counters from logs...
Waiting 10 minutes for logs to transfer to S3... (ctrl-c to skip)
To fetch logs immediately next time, set up SSH. See:
https://pythonhosted.org/mrjob/guides/emr-quickstart.html#configuring-ssh-credentials
Looking for step log in s3://mrjob-5179031e01afd90b/tmp/logs/j-NI45QLK9ZDGD/steps/s-Z0UTL6J582TX...
Parsing step log: s3://mrjob-5179031e01afd90b/tmp/logs/j-NI45QLK9ZDGD/steps/s-Z0UTL6J582TX/syslog.gz
Counters: 54
File Input Format Counters
Bytes Read=472919
File Output Format Counters
Bytes Written=323603
File System Counters
FILE: Number of bytes read=161546
FILE: Number of bytes written=607833
FILE: Number of large read operations=0
FILE: Number of read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=292
HDFS: Number of bytes written=323603
HDFS: Number of large read operations=0
HDFS: Number of read operations=7
HDFS: Number of write operations=2
S3: Number of bytes read=472919
S3: Number of bytes written=0
S3: Number of large read operations=0
S3: Number of read operations=0
S3: Number of write operations=0
Job Counters
Data-local map tasks=2
Launched map tasks=2
Launched reduce tasks=1
Total megabyte-seconds taken by all map tasks=33032448
Total megabyte-seconds taken by all reduce tasks=12291072
Total time spent by all map tasks (ms)=43011
Total time spent by all maps in occupied slots (ms)=129033
Total time spent by all reduce tasks (ms)=12003
Total time spent by all reduces in occupied slots (ms)=48012
Total vcore-seconds taken by all map tasks=43011
Total vcore-seconds taken by all reduce tasks=12003
Map-Reduce Framework
CPU time spent (ms)=14890
Combine input records=0
Combine output records=0
Failed Shuffles=0
GC time elapsed (ms)=954
Input split bytes=292
Map input records=6165
Map output bytes=553754
Map output materialized bytes=135599
Map output records=6165
Merged Map outputs=2
Physical memory (bytes) snapshot=899678208
Reduce input groups=240
Reduce input records=6165
Reduce output records=240
Reduce shuffle bytes=135599
Shuffled Maps =2
Spilled Records=12330
Total committed heap usage (bytes)=598155264
Virtual memory (bytes) snapshot=3936567296
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
Waiting for step 2 of 2 (s-1CBLWBN4DEEWY) to complete...
COMPLETED
Attempting to fetch counters from logs...
Looking for step log in s3://mrjob-5179031e01afd90b/tmp/logs/j-NI45QLK9ZDGD/steps/s-1CBLWBN4DEEWY...
Parsing step log: s3://mrjob-5179031e01afd90b/tmp/logs/j-NI45QLK9ZDGD/steps/s-1CBLWBN4DEEWY/syslog.gz
Counters: 54
File Input Format Counters
Bytes Read=358410
File Output Format Counters
Bytes Written=1096478
File System Counters
FILE: Number of bytes read=1863884
FILE: Number of bytes written=4177449
FILE: Number of large read operations=0
FILE: Number of read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=358720
HDFS: Number of bytes written=0
HDFS: Number of large read operations=0
HDFS: Number of read operations=4
HDFS: Number of write operations=0
S3: Number of bytes read=0
S3: Number of bytes written=1096478
S3: Number of large read operations=0
S3: Number of read operations=0
S3: Number of write operations=0
Job Counters
Data-local map tasks=2
Launched map tasks=2
Launched reduce tasks=1
Total megabyte-seconds taken by all map tasks=27780096
Total megabyte-seconds taken by all reduce tasks=25753600
Total time spent by all map tasks (ms)=36172
Total time spent by all maps in occupied slots (ms)=108516
Total time spent by all reduce tasks (ms)=25150
Total time spent by all reduces in occupied slots (ms)=100600
Total vcore-seconds taken by all map tasks=36172
Total vcore-seconds taken by all reduce tasks=25150
Map-Reduce Framework
CPU time spent (ms)=12680
Combine input records=0
Combine output records=0
Failed Shuffles=0
GC time elapsed (ms)=1149
Input split bytes=310
Map input records=240
Map output bytes=10193826
Map output materialized bytes=2002985
Map output records=100689
Merged Map outputs=2
Physical memory (bytes) snapshot=902324224
Reduce input groups=14348
Reduce input records=100689
Reduce output records=14348
Reduce shuffle bytes=2002985
Shuffled Maps =2
Spilled Records=201378
Total committed heap usage (bytes)=598155264
Virtual memory (bytes) snapshot=3940614144
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
Streaming final output from s3://mrjob-5179031e01afd90b/tmp/computesim.apple.20160919.043438.080233/output/...
Removing s3 temp directory s3://mrjob-5179031e01afd90b/tmp/computesim.apple.20160919.043438.080233/...
Removing temp directory /var/folders/x5/650ndbx116g04pzh2xwxtrp40000gn/T/computesim.apple.20160919.043438.080233...
Removing log files in s3://mrjob-5179031e01afd90b/tmp/logs/j-NI45QLK9ZDGD/...
Terminating cluster: j-NI45QLK9ZDGD
xpm-mac-mini:content apple$$
6. Run the script on the larger file subset-full.csv. Use between 4-8 instances on EMR on Amazon. Save the output in output.full.emr.txt. Your incantation will be something like:(By default, AWS assigns to region West Coast (Oregon). So clusters won't be seen under Singapore unless chosen explicitly)
~/anaconda/bin/python computesim.py -r emr --num-ec2-instances 5 subset-full.csv > output.full.emr.txt
```
[HW4 instructions]
Read this document for instructions on how to set yourself up on Amazon.
Reproduce the results with the smaller file on EMR
Test the smaller file and make sure it has the same results. For example, you could use the incantation:
~/anaconda/bin/python computesim.py -r emr --num-ec2-instances 2 subset-small.csv > output.small.emr.txt
You do NOT need to submit any results from that exploration to us.
Important: Please always make sure that your code is bug free, before actually submitting it to amazon. Try to run the job locally first and see if it produces the desired result. Then, if this worked, you are ready to proceed to the cloud. The homework problems are small and your free credit should provide you with a lot of room for running and testing on Amazon. However, it is your responsibility to make sure the jobs terminate properly and do not cause excessive costs.
You can always monitor your currently running jobs (in the US-East sector) using this overview at region US-EAST-1 of your MapReduce job flows.
Running the larger job
4.3 Run the script on the larger file subset-full.csv. Use between 4-8 instances on EMR on Amazon. Save the output in output.full.emr.txt. Your incantation will be something like:
~/anaconda/bin/python computesim.py -r emr --num-ec2-instances 5 subset-full.csv > output.full.emr.txt
You might elect to save the file on S3 and bring it over manually.
Try and think about what size job would be best to run on Amazon, given that there is a setup time. There is a way to persistently set up machines (the mrjob documentation provides the details), but then remember you will be billed for that setup and need to monitor it. However, a persistent setup might come useful for your projects.
Loading the full output from EMR
Let's load the output in. CAUTION: The next two cells will also take a lot of time to run and load.
End of explanation
dbfull=make_database_from_pairs(fulldf, output_full_emr)
Explanation: This function will take a very long time to run, on the order of 5 minutes or more, depending on your computer
End of explanation
#your code here
#your code here
print "for user",usernamefromid(fulldf,testuserid), 'avg', fulldf[fulldf.user_id==testuserid].stars.mean()
for i in bizs:
print "========="
print biznamefromid(fulldf, i), i
print rating(fulldf, dbfull, i, testuserid, k=7, reg=3.)
u,a=get_other_ratings(i, testuserid, fulldf)
print "User Score:",u,"Avg score",a
Explanation: 4.4 For testuserid, once again, print out the ratings using the bizs list as before. How have they changed with respect to Question 2? Why might this be?
End of explanation
print "for user",usernamefromid(smalldf,testuserid), 'avg', smalldf[smalldf.user_id==testuserid].stars.mean()
for biz_id in bizs:
print "----------------------------------"
print biznamefromid(smalldf, biz_id)
print "Predicted Rating:",rating(smalldf, db, biz_id, testuserid, k=7, reg=3.)
u,a=get_other_ratings(biz_id, testuserid, smalldf)
print "Actual User Rating:",u,"Avg Rating",a
Explanation: your answer here
Copy the smalldf ratings below for easy comparison:
End of explanation |
12,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 4
Step1: 1. 3D Heatmap
NOTE
Step2: 2. Heatmap after thresholding
Here, we assume that there is some level of noise, which can be defined by redefining THRESH below. The same heatmap is generated, but only for values where the synapse count is higher than the threshold, thus attempting to remove noise. | Python Code:
import numpy as np
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import csv
data = open('../data/data.csv', 'r').readlines()
fieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']
reader = csv.reader(data)
reader.next()
rows = [[int(col) for col in row] for row in reader]
xs = []
ys = []
zs = []
ss = []
sorted_x = sorted(list(set([r[0] for r in rows])))
sorted_y = sorted(list(set([r[1] for r in rows])))
sorted_z = sorted(list(set([r[2] for r in rows])))
vol = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
xs.append(r[0])
ys.append(r[1])
zs.append(r[2])
ss.append(r[4])
vol[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
Explanation: Week 4
End of explanation
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(xs,ys,zs, marker='o', s=ss, c="goldenrod", alpha=0.4)
for ii in xrange(0,360,1):
ax.view_init(elev=10., azim=ii)
fig.show()
Explanation: 1. 3D Heatmap
NOTE: This demo will be slow on low-spec computers (like mine).
End of explanation
THRESH = np.mean(ss) * 3/2
print len(xs), len(ss)
ss_poppable = []
for s in range(len(ss)):
if ss[s] < THRESH:
ss_poppable.append(s)
for s in reversed(ss_poppable):
xs.pop(s)
ys.pop(s)
zs.pop(s)
ss.pop(s)
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(xs,ys,zs, marker='o', s=ss, c="goldenrod", alpha=0.005)
ax.view_init(elev=11., azim=25)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_title("Heatmap of Synapses")  # Axes objects use set_title(), not title()
# fig.show()
ss[0]
Explanation: 2. Heatmap after thresholding
Here, we assume that there is some level of noise, which can be defined by redefining THRESH below. The same heatmap is generated, but only for values where the synapse count is higher than the threshold, thus attempting to remove noise.
End of explanation |
12,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Plot learning curves of different classifiers
This example is a small modification of the sciki-learn tutorial test.
Comparison of different linear SVM classifiers on a 2D projection of the iris
dataset. Here I consider only two features of the dataset
Step2: Pandas
As in previous examples, we use pandas to read a database.
Step3: Testing sklearn classifiers.
Here I test different classifiers provided by sklearn into my specific test set. | Python Code:
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
Generate a simple plot of the test and training learning curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is not a classifier
or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validators that can be used here.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
Explanation: Plot learning curves of different classifiers
This example is a small modification of the scikit-learn tutorial test.
Comparison of different linear SVM classifiers on a 2D projection of the seeds
dataset. Here I consider only two features of the dataset:
Seed area
Seed asymmetry
Test linear and other models, and use multiple features of the seeds dataset.
End of explanation
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
import pandas as pd
#I use this dataset because this has clearly separated cathegories,
#Read the database using pandas,
#Note that bad lines are omitted with error_bad_lines=False
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/00236/seeds_dataset.txt', header=None, sep="\t", error_bad_lines=False)
#The headers are not given in the dataset, so we give them afterwords:
#1. area A,
#2. perimeter P,
#3. compactness C = 4*pi*A/P^2,
#4. length of kernel,
#5. width of kernel,
#6. asymmetry coefficient
#7. length of kernel groove.
#8. Class: 1=Kama, 2=Rosa, 3=Canadian
df.columns = ["area","perimeter","compactness","kernel-length","kernel-width",
"asymmetry","kernel-groove-length","class"]
#This shows the header of the database:
df.head()
Explanation: Pandas
As in previous examples, we use pandas to read a database.
End of explanation
#In the database there are 3 classes of seeds:
#And skilearn can handle multiple classes
import numpy as np
#This sets class=2 to 0 and 3 to 1:
y = df.loc[:,'class']
#Extract all cathegories:
X=df.iloc[:,0:7]
#This is to convert the csv dictionary into a numpy matrix to later standarize:
X=X.as_matrix()
nfeature=X.shape[1]
# standardize features
X_std = np.copy(X)
for ifeat in range(0,nfeature):
X_std[:,ifeat] = (X[:,ifeat] - X[:,ifeat].mean()) / X[:,ifeat].std()
#Here since we have many features, we just plot the learning curves for the training and cross-validation sets.
title = "Learning Curves (Naive Bayes)"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = GaussianNB()
plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)
title = "Learning Curves (SVC, Poly kernel, $\gamma=0.001$)"
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = SVC(kernel='poly',gamma=0.001)
plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=4)
title = "Learning Curves (SVC, RBF kernel, $\gamma=0.001$)"
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = SVC(kernel='rbf',gamma=0.001)
plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=4)
title = "Learning Curves (Linear SVC)"
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = svm.LinearSVC(C=1.0)
plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=4)
plt.show()
Explanation: Testing sklearn classifiers.
Here I test different classifiers provided by sklearn on my specific data set.
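As a rough complementary check (a sketch reusing the X and y built above, not part of the original analysis), we could also compare plain cross-validated accuracies of the same estimators:
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC

# mean 5-fold cross-validated accuracy for each estimator used above
for name, est in [("Naive Bayes", GaussianNB()),
                  ("SVC poly", SVC(kernel='poly', gamma=0.001)),
                  ("SVC rbf", SVC(kernel='rbf', gamma=0.001)),
                  ("Linear SVC", LinearSVC(C=1.0))]:
    scores = cross_val_score(est, X, y, cv=5)
    print("{}: {:.3f}".format(name, scores.mean()))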
End of explanation |
12,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Send email Clint
Main file for send mail. Function define here
Importing all dependency
Step1: User Details Function
Step2: Login function
In this function we call user details function and get the user name and password, Than we use those details for IMAP login.
SMTP is Simple Mail Transfer Protocol
Step4: Send mail function.
This function takes 5 argument. 1. Login Data. 2. To Email 3. From Email 4. HTML format massage 5. Normal text
The HTML message, is best and preferred. | Python Code:
# ! /usr/bin/python
__author__ = 'Shahariar Rabby'
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.header import Header
from email.utils import formataddr
import getpass
Explanation: Send email client
Main file for sending mail. Functions are defined here.
Importing all dependencies
End of explanation
def user():
# ORG_EMAIL = "@gmail.com"
# FROM_EMAIL = "ypur mail" + ORG_EMAIL
# FROM_PWD = "yourpss"
FROM_EMAIL = raw_input("insert Email : ")
FROM_PWD = getpass.getpass("input : ")
return FROM_EMAIL,FROM_PWD
Explanation: User Details Function
End of explanation
def login():
gmail_user, gmail_pwd = user() #calling the user function for get user details
smtpserver = smtplib.SMTP("smtp.gmail.com",587) #Declaring gmail SMTP server address and port
smtpserver.starttls() #Starting tls service, Transport Layer Security (TLS) are cryptographic protocols that provide communications security over a computer network.
smtpserver.login(gmail_user, gmail_pwd) #Login to Gmail server using TLS
print 'Login successful'
return smtpserver
Explanation: Login function
In this function we call the user details function to get the user name and password. Then we use those details for SMTP login.
SMTP is Simple Mail Transfer Protocol
End of explanation
# text = "Hi!\n5633222222222222222http://www.python.org"
# html = \
# <html>
# <head></head>
# <body>
# <p>Hi!<br>
# How are you?<br>
# Here is the <a href="http://www.python.org">link</a> you wanted.
# </p>
# </body>
# </html>
#
def Send_Mail(smtpserver,TO_EMAIL,text=None,html=None,subject='Subject missing',FROM_EMAIL='Shahariar'):
# Create message container - the correct MIME type is multipart/alternative.
msg = MIMEMultipart('alternative') # In turn, use text/plain and text/html parts within the multipart/alternative part.
msg['Subject'] = subject #Subject of the message
msg['From'] = formataddr((str(Header(FROM_EMAIL, 'utf-8')), FROM_EMAIL)) #Adding custom Sender Name
msg['To'] = TO_EMAIL #Assining Reciver email
part1 = MIMEText(text, 'plain') #adding text part of mail
part2 = MIMEText(html, 'html') #Adding HTMLpart of mail
# Attach parts into message container.
# According to RFC 2046, the last part of a multipart message, in this case
# the HTML message, is best and preferred.
msg.attach(part1) #attach Plain text
msg.attach(part2) #attach HTML text
# sendmail function takes 3 arguments: sender's address, recipient's address
# and message to send - here it is sent as one string.
try:
smtpserver.sendmail(FROM_EMAIL, TO_EMAIL, msg.as_string())
print " Message Send"
smtpserver.quit() #stopping server
except Exception as e:
print e  # print the actual error rather than the Exception class itself
Explanation: Send mail function.
This function takes the following arguments: 1. the SMTP server object returned by login, 2. the recipient email, 3. the plain-text message, 4. the HTML-format message, 5. the subject, and 6. the sender display name.
The HTML message is best and preferred.
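A possible usage sketch follows; the recipient address is made up, and it assumes the Gmail account in question permits this kind of SMTP login.
server = login()  # prompts for the sender's Gmail address and password
Send_Mail(server,
          TO_EMAIL='[email protected]',                    # hypothetical recipient
          text='Plain-text fallback body',
          html='<html><body><p>Hi there!</p></body></html>',
          subject='Test message',
          FROM_EMAIL='Shahariar')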
End of explanation |
12,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 25
Step1: However, it cannot match multiple repititions
Step2: We can use this to find strings that may or may not include elements, like phone numbers with and without area codes.
Step3: The '*' Regex Operater
The * character can be used to match many (0 or n) times.
Step4: The '+' Regex Operater
The + character can match one or more (1 or n) times.
Step5: All of these characters can be escaped for literal matches
Step6: The '{}' Regex Operater
The {x} character can match x times.
Step7: This operator can also take the {x,y} argument to create a minimum or maximum number of repititions.
Step8: RegEx does greedy matches, which means it will try to find the longest string that matches, not the shortest.
Step9: You can do a non-greedy match by using a '}?' operator. | Python Code:
import re
batRegex = re.compile(r'Bat(wo)?man') # The ()? says this group can appear 0 or 1 times to match; it is optional
mo = batRegex.search('The Adventures of Batman')
print(mo.group())
mo = batRegex.search('The Adventures of Batwoman')
print(mo.group())
Explanation: Lesson 25:
RegEx groups and the Pipe Character
The | pipe character can match one of many groups, but you may want a certain number of repetitions of a group.
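For comparison, here is a quick sketch of the pipe itself, which matches any one of several alternatives (the example strings are made up):
import re
heroRegex = re.compile(r'Batman|Tina Fey')
print(heroRegex.search('Batman and Tina Fey').group())  # 'Batman' -- the first alternative found wins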
The '?' Regex Operator
The ? RegEx operator allows for optional (0 or 1) matches:
End of explanation
mo = batRegex.search('The Adventures of Batwowowowoman')
print(mo.group())
Explanation: However, it cannot match multiple repetitions:
End of explanation
phoneNumRegex = re.compile(r'\d\d\d\-\d\d\d-\d\d\d\d') # this requires an area code.
mo = phoneNumRegex.search('My number is 415-555-4242') # matches
print(mo.group())
mo2 = phoneNumRegex.search('My number is 555-4242') # will not match
print(mo2)
phoneNumRegex = re.compile(r'(\d\d\d\-)?\d\d\d-\d\d\d\d') # Make first three digits and dash optional
mo = phoneNumRegex.search('My number is 415-555-4242') # matches
print(mo.group())
mo2 = phoneNumRegex.search('My number is 555-4242') # matches
print(mo2.group())
Explanation: We can use this to find strings that may or may not include elements, like phone numbers with and without area codes.
End of explanation
import re
batRegex = re.compile(r'Bat(wo)*man') # The ()* says this group can appear 0 or n times to match
print(batRegex.search('The Adventures of Batwoman').group())
print(batRegex.search('The Adventures of Batwowowowoman').group())
Explanation: The '*' Regex Operator
The * character can be used to match many (0 or n) times.
End of explanation
import re
batRegex = re.compile(r'Bat(wo)+man') # The ()+ says this group can appear 1 or n times; it is NOT optional
print(batRegex.search('The Adventures of Batwoman').group())
print(batRegex.search('The Adventures of Batwowowowoman').group())
print(batRegex.search('The Adventures of Batman').group())
Explanation: The '+' Regex Operator
The + character can match one or more (1 or n) times.
End of explanation
import re
batRegex = re.compile(r'\+\*\?') # The +,*, and ? are escaped.
print(batRegex.search('I learned about +*? RegEx syntax').group())
Explanation: All of these characters can be escaped for literal matches:
End of explanation
haRegex = re.compile(r'(Ha){3}')
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group()) # Matches only three times, so returns only 3
#print(haRegex.search('HaHa').group()) # No Match
phoneRegex = re.compile(r'(\d{3}-){2}\d{4}') # {} repetition syntax avoids writing \d over and over
phoneRegex.search('My number is 415-555-4242').group()
Explanation: The '{}' Regex Operator
The {x} character can match x times.
End of explanation
haRegex = re.compile(r'(Ha){3,5}')
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group())
print(haRegex.search('HaHaHaHaHa').group())
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches max of 5
haRegex = re.compile(r'(Ha){,5}') # Can drop one or the other for unbounded matches
print(haRegex.search('Ha').group())
print(haRegex.search('HaHa').group())
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group())
print(haRegex.search('HaHaHaHaHa').group())
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches max of 5
Explanation: This operator can also take the {x,y} argument to create a minimum and maximum number of repetitions.
End of explanation
haRegex = re.compile(r'(Ha){1,6}') # at least 1, at most 6 repetitions
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches longest string; 6
Explanation: RegEx does greedy matches, which means it will try to find the longest string that matches, not the shortest.
End of explanation
haRegex = re.compile(r'(Ha){1,6}?') # the trailing ? makes the match non-greedy: prefer the fewest repetitions
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches shortest string, 1
Explanation: You can do a non-greedy match by using a '}?' operator.
End of explanation |
12,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To address an interesting and practical case (entanglement doesn't grow too much) we'll use as
an initial state the all zero state apart from two flipped spins
Step1: We'll also set up some parameters and quantities to compute during the evolution
Step2: Now we are ready to being the evolution, we use the generator
at_times to yield the state at each target time, and set
a tol here which will calculate a timestep to use.
At each time, we'll compute the desired quantities and add them
to our results.
Step3: TEBD.err contains a (rough) upper estimate for the error
incurred by the trotter decomposition so far
Step4: We can also check the normalization of final state
Step5: And that energy has been conserved
Step6: Finally let's plot the quantities we calculated
Step7: Where we can see ballistic propagation (and reflection) of the 'light cone'.
Non-translationally invariant Hamiltonians
The NNI class can also handle Hamiltonians with site specific interactions and fields.
A good example of this is the Many-Body-Localized spin hamiltonian
Step8: And finally, plot the quantities again | Python Code:
L = 44
zeros = '0' * ((L - 2) // 3)
binary = zeros + '1' + zeros + '1' + zeros
print('psi0:', f"|{binary}>")
psi0 = qtn.MPS_computational_state(binary)
psi0.show() # prints ascii representation of state
H = qtn.NNI_ham_heis(L)
tebd = qtn.TEBD(psi0, H)
# Since entanglement will not grow too much, we can set quite
# a small cutoff for splitting after each gate application
tebd.split_opts['cutoff'] = 1e-12
Explanation: To address an interesting and practical case (entanglement doesn't grow too much) we'll use as
an initial state the all zero state apart from two flipped spins:
End of explanation
# times we are interested in
ts = np.linspace(0, 80, 101)
mz_t_j = [] # z-magnetization
be_t_b = [] # block entropy
sg_t_b = [] # schmidt gap
# range of bonds, and sites
js = np.arange(0, L)
bs = np.arange(1, L)
Explanation: We'll also set up some parameters and quantities to compute during the evolution:
End of explanation
# generate the state at each time in ts
# and target error 1e-3 for whole evolution
for psit in tebd.at_times(ts, tol=1e-3):
mz_j = []
be_b = []
sg_b = []
# there is one more site than bond, so start with mag
# this also sets the orthog center to 0
mz_j += [psit.magnetization(0)]
for j in range(1, L):
# after which we only need to move it from previous site
mz_j += [psit.magnetization(j, cur_orthog=j - 1)]
be_b += [psit.entropy(j, cur_orthog=j)]
sg_b += [psit.schmidt_gap(j, cur_orthog=j)]
mz_t_j += [mz_j]
be_t_b += [be_b]
sg_t_b += [sg_b]
tebd.pt.show()
Explanation: Now we are ready to begin the evolution. We use the generator
at_times to yield the state at each target time, and set
a tol here which will calculate a timestep to use.
At each time, we'll compute the desired quantities and add them
to our results.
End of explanation
tebd.err # should be < tol=1e-3
Explanation: TEBD.err contains a (rough) upper estimate for the error
incurred by the trotter decomposition so far:
End of explanation
tebd.pt.H @ tebd.pt
Explanation: We can also check the normalization of final state:
End of explanation
H = qtn.MPO_ham_heis(L)
print("Initial energy:", qtn.expec_TN_1D(psi0.H, H, psi0))
print("Final energy:", qtn.expec_TN_1D(tebd.pt.H , H, tebd.pt))
Explanation: And that energy has been conserved:
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 7))
# plot the magnetization
ax1 = plt.subplot(1, 3, 1)
plt.pcolormesh(js, ts, mz_t_j, vmin=-0.5, vmax=0.5)
plt.set_cmap('RdYlBu')
plt.colorbar()
plt.title('Z-Magnetization')
plt.xlabel('Site')
plt.ylabel('time [ $Jt$ ]')
# plot the entropy
ax2 = plt.subplot(1, 3, 2, sharey=ax1)
plt.pcolormesh(bs, ts, be_t_b)
plt.setp(ax2.get_yticklabels(), visible=False)
plt.set_cmap('viridis'), plt.colorbar()
plt.title('Block Entropy')
plt.xlabel('Bond')
# plot the schmidt gap
ax3 = plt.subplot(1, 3, 3, sharey=ax1)
plt.pcolormesh(bs, ts, sg_t_b, vmin=0, vmax=1)
plt.setp(ax3.get_yticklabels(), visible=False)
plt.set_cmap('magma_r')
plt.colorbar()
plt.title('Schmidt Gap')
plt.xlabel('Bond')
plt.show()
Explanation: Finally let's plot the quantities we calculated:
End of explanation
builder = qtn.SpinHam(S=1/2)
# specify the interaction term (defaults to all sites)
builder += 0.5, '+', '-'
builder += 0.5, '-', '+'
builder += 1.0, 'Z', 'Z'
# add random z-fields to each site
np.random.seed(2)
for i in range(L):
builder[i] += 2 * np.random.rand() - 1, 'Z'
H = builder.build_nni(L)
tebd = qtn.TEBD(psi0, H)
tebd.split_opts['cutoff'] = 1e-10
# times we are interested in
ts = np.linspace(0, 80, 101)
mz_t_j = [] # z-magnetization
be_t_b = [] # block entropy
sg_t_b = [] # schmidt gap
# range of bonds, and sites
js = np.arange(0, L)
bs = np.arange(1, L)
# generate the state at each time in ts
# and target error 1e-3 for whole evolution
for psit in tebd.at_times(ts, tol=1e-3):
mz_j = []
be_b = []
sg_b = []
# there is one more site than bond, so start with mag
# this also sets the orthog center to 0
mz_j += [psit.magnetization(0)]
for j in range(1, L):
# after which we only need to move it from previous site
mz_j += [psit.magnetization(j, cur_orthog=j - 1)]
be_b += [psit.entropy(j, cur_orthog=j)]
sg_b += [psit.schmidt_gap(j, cur_orthog=j)]
mz_t_j += [mz_j]
be_t_b += [be_b]
sg_t_b += [sg_b]
Explanation: Where we can see ballistic propagation (and reflection) of the 'light cone'.
Non-translationally invariant Hamiltonians
The NNI class can also handle Hamiltonians with site specific interactions and fields.
A good example of this is the Many-Body-Localized spin hamiltonian:
$$
\hat{H} = J \sum_{i=1}^{L - 1} \mathbf{\sigma}_i \cdot \mathbf{\sigma}_{i + 1}
+ \sum_i^L h_i \sigma^Z_i
$$
Where $h_i$ is a random variable. Here we construct it manually:
End of explanation
plt.figure(figsize=(12, 7))
# plot the magnetization
ax1 = plt.subplot(1, 3, 1)
plt.pcolormesh(js, ts, mz_t_j, vmin=-0.5, vmax=0.5)
plt.set_cmap('RdYlBu')
plt.colorbar()
plt.title('Z-Magnetization')
plt.xlabel('Site')
plt.ylabel('time [ $Jt$ ]')
# plot the entropy
ax2 = plt.subplot(1, 3, 2, sharey=ax1)
plt.pcolormesh(bs, ts, be_t_b)
plt.setp(ax2.get_yticklabels(), visible=False)
plt.set_cmap('viridis'), plt.colorbar()
plt.title('Block Entropy')
plt.xlabel('Bond')
# plot the schmidt gap
ax3 = plt.subplot(1, 3, 3, sharey=ax1)
plt.pcolormesh(bs, ts, sg_t_b, vmin=0, vmax=1)
plt.setp(ax3.get_yticklabels(), visible=False)
plt.set_cmap('magma_r')
plt.colorbar()
plt.title('Schmidt Gap')
plt.xlabel('Bond')
plt.show()
Explanation: And finally, plot the quantities again:
End of explanation |
12,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.- Ordinary Least Squares Linear Regression (LSS)
In this section we will work with a dataset known as House Sales in King County, USA, published
on the Kaggle platform [4], which is a great dataset for evaluating simple regression models. The
records contain different features associated with house sales in King County between May 2014
and May 2015, which are described in the dataset, such as the number of bedrooms, the number of
bathrooms, the number of floors, etc. One of the variables to study is the price at which each
house was sold.
a) Build a dataframe with the data to analyze, downloading it from the platform as indicated.
Explain why line 4 is executed.
Step1: Line 4 removes parameters that are not important for valuing the property. The drop function takes an array of attributes and removes them from the dataset.
b) Briefly describe the dataset to be used.
Step2: As shown in the previous output, the dataset contains information on 21613 houses, each with 18 associated attributes. These attributes include the price, the number of bathrooms, the number of bedrooms, the living area, the total lot area and the number of floors, among others specified in the columns. In addition, the mean, the standard deviation and the quartiles of each attribute over the whole dataset are reported.
c) Normalize the data before working with it and apply a suitable transformation to the variable to be predicted.
Explain the importance/convenience of performing these two operations.
Step3: Normalization is necessary because most classification algorithms rely on the Euclidean distance to measure the distance between two points in the dataset. When one attribute has a much larger range of values than the others, it will dominate the distance computation. This can cause, for example, the gradient algorithm (SGD) to converge much more slowly than on a normalized instance.
The transformation of the price, on the other hand, is needed in order to use linear regression. Linear regressions, as the name says, model the data with straight lines, which are then compared against the price values to test how well they fit. This comparison makes little sense if the price does not follow a linear trend; it would be like trying to fit a straight line to a quadratic function, which is not correct. By applying the logarithm we make the price, our y, much better behaved for a linear fit.
d) Perform a basic least squares linear regression. Explain the importance/convenience of
step 4 and the arguments that must be passed to the function that implements the linear regression.
Step4: In step 4, a column of ones is added to represent the constant $\beta_0$ of the linear regression model, which corresponds to our intercept. This constant comes from the expression
Step5: The z-scores of the coefficients are
Step6: The above can be summarized in the following table
Step7: Using a significance level of $5\%$, the relevant attributes are those whose $|z|$ falls beyond the tails of the distribution $\mathcal{N}(\mu, \sigma)$, where $\mu$ is the mean of the betas and $\sigma$ their standard deviation. Since we want a closed confidence interval of equal magnitude on both sides, the problem reduces to computing
Step8: This indicates that the relevant variables are those with z-score $|z| > 1.96$, that is, the variables sqft_living and sqft_above.
It can be seen that, of all the available attributes, most have marginal z-scores compared to sqft_living, sqft_above and sqft_basement, so their contribution to predicting the house price is also marginal.
f) Propose a method to correct what was observed (Hint
Step9: Considering the distribution $t_{(N - k - 1)}$, where $k$ is the number of selected attributes, we then have
Step10: This implies that the selected attributes are similarly important for predicting the price.
g) Estimate the prediction error of the model using cross-validation with a number of folds equal to K
= 5 and K = 10. Remember that, for the estimate to be reasonable, you must refit the model weights in
each configuration (fold). Measure the real error of the model on the test set, compare and
conclude.
Assuming that the model with the previously chosen attribute selection must be used, we have
Step11: It can be seen that the errors obtained with cross-validation for $K = 5$ and $K = 10$ are very similar, with an approximate value of $0.0647$. The error obtained on the test set for the data without attribute selection is $0.065$, which is practically the same.
This happens because of the large amount of data available for training. Even choosing the full set of attributes, $p$ is at least 3 orders of magnitude smaller than $N$, which allows high precision with any validation method.
h) Measure the prediction errors for each training sample. Using a "quantile-quantile plot",
determine whether the normality hypothesis on the model residuals is reasonable.
Step12: The Quantile-Quantile plot is used to analyze the differences between the distribution of a random sample and, in this case, the normal distribution. It can be seen that the points lie approximately along the reference line, which is enough to support the hypothesis that the residuals are normally distributed.
2.- Attribute Selection
Using the dataframe from the previous activity,
a) Build a function that implements Forward Step-wise Selection (FSS). That is, starting from a
model with no predictors (variables), add one predictor at a time, refitting the regression model
at each step. To select a variable locally, propose/implement a criterion different from the one
used in the example code. Build a plot showing the training error and the test error
as a function of the number of variables in the model. Order the x-axis from smallest to largest.
For the FSS implementation, the mean absolute value criterion was chosen, given by
Step13: As can be seen in the plot, as the number of selected attributes increases the prediction error decreases until, at roughly $p = 12$, the gain in accuracy is no longer significant. This matters when selecting attributes while trying to avoid overfitting, since it implies that there is an ideal or optimal number of attributes to select.
A comparison between the quadratic and absolute error functions is presented below
Step14: It is clear that the quadratic error criterion yields errors of smaller magnitude, due to the nature of the Euclidean distance. However, it can also be seen that both follow the same trend.
3.- Regularization
Using the dataframe from the previous activity,
a) Fit a linear model using "Ridge Regression", that is, regularizing with the $\ell_2$ norm. Use
values of the regularization parameter $\lambda$ in the range $[10^7, 10^1]$, varying it as you see fit.
Build a plot showing the coefficients obtained as a function of the regularization parameter.
Describe what you observe. (WARNING
Step15: To understand the resulting plot, it is necessary to understand how Ridge Regression works. This algorithm penalizes the coefficients of the linear regression through a coefficient called $\lambda$, which "smooths out" the differences between the coefficients by constraining how large they can become.
If the chosen value of $\lambda$ is too large the model can underfit, whereas if it is too small the model can overfit. It is therefore important to choose adequate values of $\lambda$.
In the plot it can be seen that values of $\lambda$ close to 0 (that is, close to $10^1$ in this range) leave the coefficients widely spread out, with high variance. As $\lambda$ increases, the coefficient values shrink proportionally, reducing the variance (at the cost of more bias). Around $\lambda \approx 10^6$ the coefficients become very similar to one another, an over-regularized (underfitting) regime that reduces the predictive capacity of the trained model.
It can also be confirmed that the relevant attributes are sqft_lot (the lot area in square feet), grade and bathrooms, since they are the ones that contribute the most to the variation of the path.
b) Fit a linear model using the "Lasso" method, that is, regularizing with the $\ell_1$ norm. Use
values of the regularization parameter $\lambda$ in the range $[10^0, 10^{-3}]$. To obtain the code, modify
lines 7 and 9 of the previous example. Build a plot showing the coefficients obtained
as a function of the regularization parameter. Describe what you observe. Is Lasso more effective for
selecting attributes?
Step16: Unlike Ridge Regression, Lasso can set coefficients exactly to zero for certain values of $\lambda$; this is called "soft thresholding". It can be seen that for small values of $\lambda$ there is a large spread among the linear regression coefficients. As $\lambda$ increases, the spread among the coefficients starts to shrink, but most of them also start to vanish. This makes it possible to reduce the number of attributes needed for the estimate, lowering the chance of overfitting if an adequate value of $\lambda$ is selected.
c) Choosing one of the two previous regularization methods, and explaining why, build a
plot showing the training error and the test error as a function of the regularization
parameter. Discuss what you observe.
Lasso is used in order to take advantage of the chance to shrink certain coefficients to exactly 0 (those closest to the value $\lambda = 10^{-3}$), instead of the smoothing performed by Ridge Regression.
Step17: It can be seen that, as the value of the parameter $\lambda$ increases, both the training error and the test error start to grow. This happens because in Lasso many of the coefficients are zeroed out, preventing their associated attributes from contributing information to the prediction. Between $10^0$ and $10^{-1}$ a critical point is reached where the error is maximal, due to the elimination of most of the attributes.
d) Estimate the value of the regularization parameter in one of the previous models using
the cross-validation technique.
Using Lasso, we have that
Step18: Therefore, the ideal parameter for Lasso is $10^{-3}$, which corresponds to the lowest point where the training and test errors meet, in agreement with the previous plot.
4.- Drift
In this section two samples of the dataframe used in the previous activities are presented, each
of which has a different property, since they are sampled as a function of the value to predict (the logarithm
of the house price). On one hand there is a small sample A, drawn directly from
the data we are working with (preserving its distribution), and on the other hand sample B, generated so
that each interval of the value range contains approximately the same number of observations
(simulating a uniform distribution). The goal is to become familiar with the concept of Transfer Learning.
The following code generates the two samples we will work with.
Step19: a) Create a training set and a validation set for each sample using the
hold-out validation technique.
Step20: b) Evaluate the two linear regression models obtained by training on each sample. Measure the error
of each model on both validation sets (A and B). Explain what you observe.
Step21: Simple inspection shows that, on average, the squared error is lower for sample B. In addition, using a test set different from the one associated with each model (that is, using B's test set on model A and vice versa) produces larger errors than using the original test set. This happens because each model will inevitably tend to overfit the specific distribution of its training data; if the test data follow the same distribution, the error will be lower.
c) If you had to choose one of the two previous models to work with future data, which one would you choose and
why?
It is not so easy to make a decision without knowing the true distribution of the data beforehand. For instance, model B has a much lower error on test sets with its own distribution, but yields a much larger error (it overfits more) on sets with a different distribution. Model A, on the other hand, has a larger error on the test set with its own distribution, but yields a smaller error on sets with different distributions.
Even so, and considering that in practice obtaining uniformly distributed sets of house prices is not realistic, we consider it better to use model A in order to reduce the loss of generalization.
5.- Detecting heart disease
In healthcare, diagnosing a person's disease quickly and correctly can save their life. These diagnoses are made by physicians who, by looking at exams and certain indicators, can conclude which disease a patient has. If the physician makes a mistake, besides the patient possibly losing their life, the physician could be sued for negligence, risking years in prison or having to pay considerable sums of money; for these reasons it is important not to make
mistakes.
Suppose you are hired to build a model that predicts whether a patient has heart disease based on certain indicators, such as age, sex, blood pressure, blood sugar level, etc.
As a hint, you are told that the maximum heart rate achieved variable is a good indicator for detecting heart disease. The goal is therefore to predict the behaviour of this variable as a function of the others, measure how far the real value is from the predicted value, and then use this to detect possible outliers (sick patients), i.e. patients whose behaviour is anomalous compared to the rest.
a) Read the data file, load it into a dataframe or matrix, then split the dataframe in two: a
training dataframe (70% of the data) and a test dataframe (30% of the data).
Step22: b) Perform a linear regression and define a decision boundary (threshold) to determine whether
or not we are in the presence of heart disease. Measure its performance on both
datasets.
To obtain a reasonably acceptable threshold, we will use the mean of the abnormal values minus half the standard deviation of the values. We then have
Step23: This gives a score of 80% on the training set and 71% on the test set.
Different thresholds are tried below | Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv("kc_house_data.csv")
df.drop(['id','date','zipcode',],axis=1,inplace=True)
df.head()
Explanation: 1.- Ordinary Least Squares Linear Regression (LSS)
In this section we will work with a dataset known as House Sales in King County, USA, published
on the Kaggle platform [4], which is a great dataset for evaluating simple regression models. The
records contain different features associated with house sales in King County between May 2014
and May 2015, which are described in the dataset, such as the number of bedrooms, the number of
bathrooms, the number of floors, etc. One of the variables to study is the price at which each
house was sold.
a) Build a dataframe with the data to analyze, downloading it from the platform as indicated.
Explain why line 4 is executed.
End of explanation
df.shape
df.info()
df.describe()
Explanation: Line 4 removes parameters that are not important for valuing the property. The drop function takes an array of attributes and removes them from the dataset.
b) Briefly describe the dataset to be used.
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)
df_scaled['price'] = np.log(df['price'])
Explanation: As shown in the previous output, the dataset contains information on 21613 houses, each with 18 associated attributes. These attributes include the price, the number of bathrooms, the number of bedrooms, the living area, the total lot area and the number of floors, among others specified in the columns. In addition, the mean, the standard deviation and the quartiles of each attribute over the whole dataset are reported.
c) Normalize the data before working with it and apply a suitable transformation to the variable to be predicted.
Explain the importance/convenience of performing these two operations.
End of explanation
import sklearn.linear_model as lm
X = df_scaled.iloc[:,1:] #use .ix instead, in older pandas version
N = X.shape[0]
X.insert(X.shape[1], 'intercept', np.ones(N))
y = df_scaled['price']
#mascara estatica con el 70% de los datos
mascara = np.zeros(len(X))
limit = int(len(X)*0.7)
mascara[:limit] = 1
istrain = mascara== 1
Xtrain = X[istrain]
ytrain = y[istrain]
Xtest = X[np.logical_not(istrain)]
ytest = y[np.logical_not(istrain)]
linreg = lm.LinearRegression(fit_intercept = False)
linreg.fit(Xtrain, ytrain)
Explanation: Normalization is necessary because most classification algorithms rely on the Euclidean distance to measure the distance between two points in the dataset. When one attribute has a much larger range of values than the others, it will dominate the distance computation. This can cause, for example, the gradient algorithm (SGD) to converge much more slowly than on a normalized instance.
The transformation of the price, on the other hand, is needed in order to use linear regression. Linear regressions, as the name says, model the data with straight lines, which are then compared against the price values to test how well they fit. This comparison makes little sense if the price does not follow a linear trend; it would be like trying to fit a straight line to a quadratic function, which is not correct. By applying the logarithm we make the price, our y, much better behaved for a linear fit.
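One practical consequence worth noting (an editorial aside, not part of the original homework): since the model is fit on log(price), its predictions live on the log scale, and a back-transformation such as the line below would be needed to read them as prices. It reuses linreg and Xtest from the cell above.
price_pred_dollars = np.exp(linreg.predict(Xtest))  # back-transform the log-scale predictions to prices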
d) Perform a basic least squares linear regression. Explain the importance/convenience of
step 4 and the arguments that must be passed to the function that implements the linear regression.
End of explanation
# Obtención de los coeficientes
betas = linreg.coef_
col = list(X)
bT = pd.DataFrame([betas], columns=col)
bT
Explanation: In step 4, a column of ones is added to represent the constant $\beta_0$ of the linear regression model, which corresponds to our intercept. This constant comes from the expression:
$H = \beta^TX + \beta_0$
where $\beta$ holds the free coefficients associated with each attribute of the matrix $X$ and $H$ is the current estimate.
The parameters that must be passed to the linear regression function are the attribute matrix $X$ and the vector of true values $Y$.
e) Build a table with the weights and z-scores corresponding to each predictor (variable). Which variables
are most correlated with the response? If we used a significance level of 5%, what do you
observe and what could be the cause?
The coefficients are:
End of explanation
# Obtención de los z-score
from scipy import stats
zscores = stats.zscore(betas)
zT = pd.DataFrame([zscores], columns=col)
zT
Explanation: The z-scores of the coefficients are:
End of explanation
table = bT.append(zT).transpose()
table.columns = ["Coeficiente", "Z-Score"]
table
Explanation: The above can be summarized in the following table:
End of explanation
stats.t.ppf([1-0.025], len(Xtrain)-len(col)-1)
Explanation: Using a significance level of $5\%$, the relevant attributes are those whose $|z|$ falls beyond the tails of the distribution $\mathcal{N}(\mu, \sigma)$, where $\mu$ is the mean of the betas and $\sigma$ their standard deviation. Since we want a closed confidence interval of equal magnitude on both sides, the problem reduces to computing:
$2P(Z \leq z) = 0.05$
As explained in the course textbook, this can be tested with the $t_{(N - p - 1)}$ distribution, where $N$ is the number of samples used and $p$ the number of attributes, since the difference between that distribution and the normal one is marginal.
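As a quick illustrative check (an editorial aside), the closeness of the normal and t critical values can be verified directly with scipy; the degrees of freedom below are just a large hypothetical value of the same order as N - p - 1 here.
from scipy import stats
print(stats.norm.ppf(0.975))         # two-sided 5% critical value under the normal: ~1.96
print(stats.t.ppf(0.975, df=15000))  # essentially the same for large (hypothetical) degrees of freedom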
We then obtain:
End of explanation
import sklearn.linear_model as lm
X2 = df_scaled.iloc[:, 1:]
X2 = X2.iloc[:,[2, 9, 10]] #use .ix instead, in older pandas version
print(list(X2))
N = X2.shape[0]
X2.insert(X2.shape[1], 'intercept', np.ones(N))
y2 = df_scaled['price']
#mascara estatica con el 70% de los datos
mascara = np.zeros(len(X2))
limit = int(len(X2)*0.7)
mascara[:limit] = 1
istrain2 = mascara== 1
Xtrain2 = X2[istrain2]
ytrain2 = y2[istrain2]
Xtest2 = X2[np.logical_not(istrain2)]
ytest2 = y2[np.logical_not(istrain2)]
linreg2 = lm.LinearRegression(fit_intercept = False)
linreg2.fit(Xtrain2, ytrain2)
betas2 = linreg2.coef_
col2 = list(X2)
pd.DataFrame([betas2], columns=col2)
zscores2 = stats.zscore(betas2)
pd.DataFrame([zscores2], columns=col2)
Explanation: This indicates that the relevant variables are those with z-score $|z| > 1.96$, that is, the variables sqft_living and sqft_above.
It can be seen that, of all the available attributes, most have marginal z-scores compared to sqft_living, sqft_above and sqft_basement, so their contribution to predicting the house price is also marginal.
f) Propose a method to correct what was observed (Hint: take inspiration from the feature engineering
methods of the following sections). Verify it using the z-scores presented in question e).
Based on the previous section, we can remove all the attributes whose z-scores are irrelevant for the linear regression. If we take $|z| \geq 0.1$ as the selection threshold, we obtain:
End of explanation
stats.t.ppf([1-0.025], len(Xtrain2)-len(col2)-1)
Explanation: Considering the distribution $t_{(N - k - 1)}$, where $k$ is the number of selected attributes, we then have:
End of explanation
# Estimate the test error of the full model on the held-out test set
yhat_test = linreg.predict(Xtest) # Predict
mse_test = np.mean(np.power(yhat_test - ytest, 2))
# Form the matrix from the RAW set
Xm = Xtrain.as_matrix()
ym = ytrain.as_matrix()
from sklearn.model_selection import KFold
def doKFold(Xt, yt, K):
# Using K = 5
kf = KFold(n_splits=K)
mse_cv = 0
for train_indeces, test_indeces in kf.split(Xm):
# Build the train/validation split for this fold
# (note: the full training matrix is used here; no attribute
# selection is applied inside the fold loop)
Xtrain_k = Xtrain.iloc[train_indeces].as_matrix()
Xtest_k = Xtrain.iloc[test_indeces].as_matrix()
ytrain_k = ytrain.iloc[train_indeces].as_matrix()
ytest_k = ytrain.iloc[test_indeces].as_matrix()
# Perform linear regression for this fold
linreg = lm.LinearRegression(fit_intercept = False)
linreg.fit(Xtrain_k, ytrain_k)
# Get the yhat for our test set
yhat_val = linreg.predict(Xtest_k)
# Calculate the MSE for the error
mse_fold = np.mean(np.power(yhat_val - ytest_k, 2))
mse_cv += mse_fold
mse_cv = mse_cv / K
return mse_cv
{"mse5": doKFold(Xtrain, ytrain, 5), "mse10": doKFold(Xtrain, ytrain, 10), "mse_test": mse_test}
Explanation: This implies that the selected attributes are similarly important for predicting the price.
g) Estimate the prediction error of the model using cross-validation with a number of folds equal to K
= 5 and K = 10. Remember that, for the estimate to be reasonable, you must refit the model weights in
each configuration (fold). Measure the real error of the model on the test set, compare and
conclude.
Assuming that the model with the previously chosen attribute selection must be used, we have:
End of explanation
import scipy.stats as stats
import matplotlib.pyplot as plt
ypredicted = linreg.predict(Xtrain)
error = ypredicted - ytrain.as_matrix()
stats.probplot(error, dist='norm', plot=plt)
plt.ylabel('Error')
plt.title('Quantile-Quantile Plot')
plt.show()
Explanation: It can be seen that the errors obtained with cross-validation for $K = 5$ and $K = 10$ are very similar, with an approximate value of $0.0647$. The error obtained on the test set for the data without attribute selection is $0.065$, which is practically the same.
This happens because of the large amount of data available for training. Even choosing the full set of attributes, $p$ is at least 3 orders of magnitude smaller than $N$, which allows high precision with any validation method.
h) Measure the prediction errors for each training sample. Using a "quantile-quantile plot",
determine whether the normality hypothesis on the model residuals is reasonable.
End of explanation
def fss_cuadratic(x, y, names_x, k = 10000):
p = x.shape[1]-1 #p is the total number of attributes
k = min(p, k) #k is the maximum number of parameters to choose
names_x = np.array(names_x) #this is the names of the columns
remaining = list(range(0, p)) #Amount of comparable attributos left
selected = [p] #First, choose the last parameter in the list
current_score = best_new_score = 0.0 #Initialize score
points = []
scores = []
while remaining and len(selected)<=k :
score_candidates = []
for candidate in remaining:
model = lm.LinearRegression(fit_intercept=False) #Init a linear regression model
indexes = selected + [candidate] # Only use the following parameters
x_train = x.iloc[:, indexes]
predictions_train = model.fit(x_train, y).predict(x_train)
residuals_train = predictions_train - y
mse_candidate = np.mean(np.power(residuals_train, 2)) # Mean squared error of the candidate model
score_candidates.append((mse_candidate, candidate))
score_candidates.sort()
score_candidates[:] = score_candidates[::-1]
best_new_score, best_candidate = score_candidates.pop()
remaining.remove(best_candidate)
selected.append(best_candidate)
points.append(len(selected))
scores.append(best_new_score)
return selected, points, scores
def fss_abs(x, y, names_x, k = 10000):
p = x.shape[1]-1 #p is the total number of attributes
k = min(p, k) #k is the maximum number of parameters to choose
names_x = np.array(names_x) #this is the names of the columns
remaining = list(range(0, p)) #Amount of comparable attributos left
selected = [p] #First, choose the last parameter in the list
current_score = best_new_score = 0.0 #Initialize score
points = []
scores = []
while remaining and len(selected)<=k :
score_candidates = []
for candidate in remaining:
model = lm.LinearRegression(fit_intercept=False) #Init a linear regression model
indexes = selected + [candidate] # Only use the following parameters
x_train = x.iloc[:, indexes]
predictions_train = model.fit(x_train, y).predict(x_train)
residuals_train = predictions_train - y
mse_candidate = np.mean(np.absolute(residuals_train)) # Mean absolute error of the candidate model
score_candidates.append((mse_candidate, candidate))
score_candidates.sort()
score_candidates[:] = score_candidates[::-1]
best_new_score, best_candidate = score_candidates.pop()
remaining.remove(best_candidate)
selected.append(best_candidate)
points.append(len(selected))
scores.append(best_new_score)
return selected, points, scores
names_regressors = X.columns[:-1] #without intercept
(selected, points, scores) = fss_abs(X,y,names_regressors)
plt.plot(points, scores)
plt.axis()
plt.title('Error en función al número de predictores')
plt.show()
Explanation: The Quantile-Quantile plot is used to analyze the differences between the distribution of a random sample and, in this case, the normal distribution. It can be seen that the points lie approximately along the reference line, which is enough to support the hypothesis that the residuals are normally distributed.
2.- Attribute Selection
Using the dataframe from the previous activity,
a) Build a function that implements Forward Step-wise Selection (FSS). That is, starting from a
model with no predictors (variables), add one predictor at a time, refitting the regression model
at each step. To select a variable locally, propose/implement a criterion different from the one
used in the example code. Build a plot showing the training error and the test error
as a function of the number of variables in the model. Order the x-axis from smallest to largest.
For the FSS implementation, the mean absolute value criterion was chosen, given by:
$\frac{1}{N}\sum_{i=1}^{N} |Y_i - \hat{f}(X_i)|$
as an alternative way to discriminate between the different candidates.
The algorithm and a plot of the error against the number of variables are presented below:
End of explanation
(selected, points, scores) = fss_abs(X,y,names_regressors)
plt.plot(points, scores, 'ro')
plt.axis()
plt.title('Absolute error')
plt.show()
(selected, points, scores) = fss_cuadratic(X,y,names_regressors)
plt.plot(points, scores, 'ro')
plt.axis()
plt.title('Cuadratic error')
plt.show()
Explanation: As can be seen in the plot, as the number of selected attributes increases the prediction error decreases until, at roughly $p = 12$, the gain in accuracy is no longer significant. This matters when selecting attributes while trying to avoid overfitting, since it implies that there is an ideal or optimal number of attributes to select.
A comparison between the quadratic and absolute error functions is presented below:
End of explanation
from sklearn.linear_model import Ridge
import matplotlib.pylab as plt
X2 = X.drop('intercept', axis=1,inplace=False)
Xtrain = X2[istrain]
ytrain = y[istrain]
names_regressors = X2.columns
alphas_ = np.logspace(7,1,base=10)
coefs = []
model = Ridge(fit_intercept=True,solver='svd')
for a in alphas_:
model.set_params(alpha=a)
model.fit(Xtrain, ytrain)
coefs.append(model.coef_)
ax = plt.gca()
for y_arr, label in zip(np.squeeze(coefs).T, names_regressors):
plt.plot(alphas_, y_arr, label=label)
plt.legend()
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis
plt.title('Regularization Path RIDGE')
plt.axis('tight')
plt.legend(loc=2)
plt.show()
Explanation: It is clear that the quadratic error criterion yields errors of smaller magnitude, due to the nature of the Euclidean distance. However, it can also be seen that both follow the same trend.
3.- Regularization
Using the dataframe from the previous activity,
a) Fit a linear model using "Ridge Regression", that is, regularizing with the $\ell_2$ norm. Use
values of the regularization parameter $\lambda$ in the range $[10^7, 10^1]$, varying it as you see fit.
Build a plot showing the coefficients obtained as a function of the regularization parameter.
Describe what you observe. (WARNING: note that line 3 and the first argument on line 9
are critical).
End of explanation
from sklearn.linear_model import Lasso
alphas_ = np.logspace(0,-3,base=10)
X2 = X.drop('intercept', axis=1,inplace=False)
Xtrain = X2[istrain]
ytrain = y[istrain]
names_regressors = X2.columns
coefs = []
model = Lasso(fit_intercept=True)
for a in alphas_:
model.set_params(alpha=a)
model.fit(Xtrain, ytrain)
coefs.append(model.coef_)
ax = plt.gca()
for y_arr, label in zip(np.squeeze(coefs).T, names_regressors):
plt.plot(alphas_, y_arr, label=label)
plt.legend()
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis
plt.title('Regularization Path LASSO')
plt.axis('tight')
plt.legend(loc=2)
plt.show()
Explanation: To understand the resulting plot, it is necessary to understand how Ridge Regression works. This algorithm penalizes the coefficients of the linear regression through a coefficient called $\lambda$, which "smooths out" the differences between the coefficients by constraining how large they can become.
If the chosen value of $\lambda$ is too large the model can underfit, whereas if it is too small the model can overfit. It is therefore important to choose adequate values of $\lambda$.
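For reference, this is the usual textbook objective for ridge regression (added as an editorial note, not something derived in this homework):
$$\hat{\beta}^{\,ridge} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2$$
so larger values of $\lambda$ shrink all coefficients towards zero, which is exactly the behaviour traced by the regularization path above.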
In the plot it can be seen that values of $\lambda$ close to 0 (that is, close to $10^1$ in this range) leave the coefficients widely spread out, with high variance. As $\lambda$ increases, the coefficient values shrink proportionally, reducing the variance (at the cost of more bias). Around $\lambda \approx 10^6$ the coefficients become very similar to one another, an over-regularized (underfitting) regime that reduces the predictive capacity of the trained model.
It can also be confirmed that the relevant attributes are sqft_lot (the lot area in square feet), grade and bathrooms, since they are the ones that contribute the most to the variation of the path.
b) Fit a linear model using the "Lasso" method, that is, regularizing with the $\ell_1$ norm. Use
values of the regularization parameter $\lambda$ in the range $[10^0, 10^{-3}]$. To obtain the code, modify
lines 7 and 9 of the previous example. Build a plot showing the coefficients obtained
as a function of the regularization parameter. Describe what you observe. Is Lasso more effective for
selecting attributes?
End of explanation
Xtest = X2[np.logical_not(istrain)]
ytest = y[np.logical_not(istrain)]
alphas_ = np.logspace(0,-3,base=10)
coefs = []
model = Lasso(fit_intercept=True)
mse_test = []
mse_train = []
for a in alphas_:
model.set_params(alpha=a)
model.fit(Xtrain, ytrain)
yhat_train = model.predict(Xtrain)
yhat_test = model.predict(Xtest)
mse_train.append(np.mean(np.power(yhat_train - ytrain, 2)))
mse_test.append(np.mean(np.power(yhat_test - ytest, 2)))
ax = plt.gca()
ax.plot(alphas_,mse_train,label='train error ridge')
ax.plot(alphas_,mse_test,label='test error ridge')
plt.legend(loc=1)
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1])
plt.show()
Explanation: Unlike Ridge Regression, Lasso can set coefficients exactly to zero for certain values of $\lambda$; this is called "soft thresholding". It can be seen that for small values of $\lambda$ there is a large spread among the linear regression coefficients. As $\lambda$ increases, the spread among the coefficients starts to shrink, but most of them also start to vanish. This makes it possible to reduce the number of attributes needed for the estimate, lowering the chance of overfitting if an adequate value of $\lambda$ is selected.
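As an editorial aside, the soft-thresholding behaviour has a simple closed form in the orthonormal-design case (a standard textbook result, not derived in this homework): each least-squares coefficient $\hat{\beta}_j$ is mapped to
$$\hat{\beta}_j^{\,lasso} = \operatorname{sign}(\hat{\beta}_j)\,\max\!\left(|\hat{\beta}_j| - \lambda,\; 0\right)$$
which is why Lasso coefficients hit exactly zero once $\lambda$ exceeds their magnitude, instead of merely shrinking as in ridge.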
c) Choosing one of the two previous regularization methods, and explaining why, build a
plot showing the training error and the test error as a function of the regularization
parameter. Discuss what you observe.
Lasso is used in order to take advantage of the chance to shrink certain coefficients to exactly 0 (those closest to the value $\lambda = 10^{-3}$), instead of the smoothing performed by Ridge Regression.
End of explanation
def MSE(y,yhat): return np.mean(np.power(y-yhat,2))
Xm = Xtrain.as_matrix()
ym = ytrain.as_matrix()
from sklearn.model_selection import KFold
kf = KFold(n_splits=10)
best_cv_mse = float("inf")
alphas_ = np.logspace(0,-3,base=10)
model = Lasso(fit_intercept=True)
for a in alphas_:
model.set_params(alpha=a)
mse_list_k10 = [MSE(model.fit(Xm[train], ym[train]).predict(Xm[val]), ym[val]) for train, val in kf.split(Xm)]
if np.mean(mse_list_k10) < best_cv_mse:
best_cv_mse = np.mean(mse_list_k10)
best_alpha = a
print("BEST PARAMETER=%f, MSE(CV)=%f"%(best_alpha,best_cv_mse))
Explanation: It can be seen that, as the value of the parameter $\lambda$ increases, both the training error and the test error start to grow. This happens because in Lasso many of the coefficients are zeroed out, preventing their associated attributes from contributing information to the prediction. Between $10^0$ and $10^{-1}$ a critical point is reached where the error is maximal, due to the elimination of most of the attributes.
d) Estimate the value of the regularization parameter in one of the previous models using
the cross-validation technique.
Using Lasso, we have that:
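As a sketch of an alternative (not used in the original homework), scikit-learn's LassoCV performs the same cross-validated search directly; the snippet below reuses Xm and ym from the cell above.
import numpy as np
from sklearn.linear_model import LassoCV
lasso_cv = LassoCV(alphas=np.logspace(0, -3, base=10), cv=10, fit_intercept=True)
lasso_cv.fit(Xm, ym)
print("Best alpha found by LassoCV:", lasso_cv.alpha_)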
End of explanation
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn import linear_model
df = pd.read_csv("kc_house_data.csv")
df.drop(['id','date','zipcode',],axis=1,inplace=True)
scaler = StandardScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)
df_scaled['price'] = np.log(df['price'])
df_A = df_scaled.sample(1000,random_state=11)
frames = []
valor = df_scaled.price
length = 0.3
for z in np.arange(int(np.min(valor)),int(np.max(valor))+1,length):
#un maximo de 100 datos por intervalo
aux = df_scaled[(df_scaled.price >= z) & (df_scaled.price < z+length)].head(100)
frames.append(aux)
df_B = pd.concat(frames).sample(1000,random_state=11) #crea el dataframe
Explanation: Therefore, the ideal parameter for Lasso is $10^{-3}$, which corresponds to the lowest point where the training and test errors meet, in agreement with the previous plot.
4.- Drift
In this section two samples of the dataframe used in the previous activities are presented, each
of which has a different property, since they are sampled as a function of the value to predict (the logarithm
of the house price). On one hand there is a small sample A, drawn directly from
the data we are working with (preserving its distribution), and on the other hand sample B, generated so
that each interval of the value range contains approximately the same number of observations
(simulating a uniform distribution). The goal is to become familiar with the concept of Transfer Learning.
The following code generates the two samples we will work with.
End of explanation
X_A = df_A.iloc[:,1:].values
y_A = df_A.price
X_B = df_B.iloc[:,1:].values
y_B = df_B.price
from sklearn.model_selection import train_test_split
Xtrain_A,Xval_A,ytrain_A,yval_A = train_test_split(X_A, y_A, test_size=0.3, random_state=42)
Xtrain_B,Xval_B,ytrain_B,yval_B = train_test_split(X_B, y_B, test_size=0.3, random_state=42)
Explanation: a) Create a training set and a validation set for each sample using the
hold-out validation technique.
End of explanation
lm_A = linear_model.LinearRegression(fit_intercept=False)
model_A = lm_A.fit(Xtrain_A, ytrain_A)
predictions_A_test_A = model_A.predict(Xval_A)
predictions_A_test_B = model_A.predict(Xval_B)
lm_B = linear_model.LinearRegression(fit_intercept=False)
model_B = lm_B.fit(Xtrain_B, ytrain_B)
predictions_B_test_A = model_B.predict(Xval_A)
predictions_B_test_B = model_B.predict(Xval_B)
# print(predictions_A, predictions_B)
msa_a_test_a = np.mean(np.power(predictions_A_test_A - yval_A, 2))
msa_a_test_b = np.mean(np.power(predictions_A_test_B - yval_B, 2))
msa_b_test_a = np.mean(np.power(predictions_B_test_A - yval_A, 2))
msa_b_test_b = np.mean(np.power(predictions_B_test_B - yval_B, 2))
score_a_test_a = model_A.score(Xval_A, yval_A)
score_a_test_b = model_A.score(Xval_B, yval_B)
score_b_test_a = model_B.score(Xval_A, yval_A)
score_b_test_b = model_B.score(Xval_B, yval_B)
print("MSA Train A Test A:", msa_a_test_a)
print("MSA Train A Test B:", msa_a_test_b)
print("MSA Train B Test A:", msa_b_test_a)
print("MSA Train B Test B:", msa_b_test_b)
print("MSA Average Train A", (msa_a_test_a+msa_a_test_b)/2)
print("MSA Average Train B", (msa_b_test_a+msa_b_test_b)/2)
print("-----------------")
print("Score Train A Test A:", score_a_test_a)
print("Score Train A Test B:", score_a_test_b)
print("Score Train B Test A:", score_b_test_a)
print("Score Train B Test B:", score_b_test_b)
print("Score Average Train A", (score_a_test_a+score_a_test_b)/2)
print("Score Average Train B", (score_b_test_a+score_b_test_b)/2).
Explanation: b) Evaluate the two linear regression models obtained by training on each sample. Measure the error
of each model on both validation sets (A and B). Explain what you observe.
End of explanation
import pandas as pd
from sklearn import linear_model
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
headers = ['age','sex','chest_pain','blood_p','serum','blood_s','electro','max_heart','angina','oldpeak','slope','vessel','thal','normal']
df = pd.read_csv('heart.csv', header=None, names=headers, sep=',')
y = df['max_heart']
# train_n = int(len(df)*0.7)
# X_train, X_test = df[-train_n:], df[:-train_n]
# y_train, y_test = y[-train_n:], y[:-train_n]
X_train, X_test, y_train, y_test = train_test_split(df[headers], y, test_size=0.3, random_state=42)
X_train_normal = X_train['normal']
X_train_heart = X_train['max_heart']
X_train_class = X_train[['normal', 'max_heart']]
X_train.drop(['max_heart','normal'],axis=1,inplace=True)
X_test_normal = X_test['normal']
X_test_heart = X_test['max_heart']
X_test.drop(['max_heart','normal'],axis=1,inplace=True)
X_train
Explanation: Simple inspection shows that, on average, the squared error is lower for sample B. In addition, using a test set different from the one associated with each model (that is, using B's test set on model A and vice versa) produces larger errors than using the original test set. This happens because each model will inevitably tend to overfit the specific distribution of its training data; if the test data follow the same distribution, the error will be lower.
c) If you had to choose one of the two previous models to work with future data, which one would you choose and
why?
It is not so easy to make a decision without knowing the true distribution of the data beforehand. For instance, model B has a much lower error on test sets with its own distribution, but yields a much larger error (it overfits more) on sets with a different distribution. Model A, on the other hand, has a larger error on the test set with its own distribution, but yields a smaller error on sets with different distributions.
Even so, and considering that in practice obtaining uniformly distributed sets of house prices is not realistic, we consider it better to use model A in order to reduce the loss of generalization.
5.- Detecting heart disease
In healthcare, diagnosing a person's disease quickly and correctly can save their life. These diagnoses are made by physicians who, by looking at exams and certain indicators, can conclude which disease a patient has. If the physician makes a mistake, besides the patient possibly losing their life, the physician could be sued for negligence, risking years in prison or having to pay considerable sums of money; for these reasons it is important not to make
mistakes.
Suppose you are hired to build a model that predicts whether a patient has heart disease based on certain indicators, such as age, sex, blood pressure, blood sugar level, etc.
As a hint, you are told that the maximum heart rate achieved variable is a good indicator for detecting heart disease. The goal is therefore to predict the behaviour of this variable as a function of the others, measure how far the real value is from the predicted value, and then use this to detect possible outliers (sick patients), i.e. patients whose behaviour is anomalous compared to the rest.
a) Read the data file, load it into a dataframe or matrix, then split the dataframe in two: a
training dataframe (70% of the data) and a test dataframe (30% of the data).
End of explanation
from sklearn.metrics import accuracy_score
lm = linear_model.LinearRegression()
model = lm.fit(X_train, y_train)
predictions_train = model.predict(X_train)
predictions_test = model.predict(X_test)
# Calculo del umbral
sick = X_train_class[X_train_class['normal'] == 1]['max_heart']
limit = sick.mean()-sick.std()/2
print("Umbral: ", limit)
compare_train = pd.DataFrame({'Y': y_train, 'y_': predictions_train})
compare_test = pd.DataFrame({'Y': y_test, 'y_': predictions_test})
mse_train = np.mean(np.power(predictions_train - y_train, 2))
mse_test = np.mean(np.power(predictions_test - y_test, 2))
print("mse_train:", mse_train)
print("mse_test", mse_test)
# Clasificación original
y_train_outlier = X_train_normal
y_test_outlier = X_test_normal
def heart_to_class(val):
if(val >= limit):
return 1
return 2
# Aplicar clasificación según umbral
y_train_predict_outlier = compare_train['y_'].apply(heart_to_class)
y_test_predict_outlier = compare_test['y_'].apply(heart_to_class)
print("Score: ", accuracy_score(y_train_outlier,y_train_predict_outlier))
print("Score: ", accuracy_score(y_test_outlier,y_test_predict_outlier))
Explanation: b) Perform a linear regression and define a decision boundary (threshold) to determine whether
or not we are in the presence of heart disease. Measure its performance on both
datasets.
To obtain a reasonably acceptable threshold, we will use the mean of the abnormal values minus half the standard deviation of the values. We then have:
End of explanation
def attemptClassification(limit):
def heart_to_class(val):
if(val >= limit):
return 1
return 2
# Aplicar clasificación según umbral
y_train_predict_outlier = compare_train['y_'].apply(heart_to_class)
y_test_predict_outlier = compare_test['y_'].apply(heart_to_class)
return(accuracy_score(y_train_outlier,y_train_predict_outlier), accuracy_score(y_test_outlier,y_test_predict_outlier))
print("2std", attemptClassification(sick.mean()-sick.std()*2))
print("std", attemptClassification(sick.mean()-sick.std()))
print("std/2", attemptClassification(sick.mean()-sick.std()/2))
print("std/4", attemptClassification(sick.mean()-sick.std()/4))
print("std/8", attemptClassification(sick.mean()-sick.std()/8))
print("std/16", attemptClassification(sick.mean()-sick.std()/16))
std_coef = pd.DataFrame({'coef': [0.1, 0.2, 0.3, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 16, 32] })
std_coef['coef'] = std_coef['coef'].apply(lambda x: attemptClassification(sick.mean()-sick.std()*x)[0])
std_coef_test = pd.DataFrame({'coef': [0.1, 0.2, 0.3, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 16, 32] })
std_coef_test['coef'] = std_coef_test['coef'].apply(lambda x: attemptClassification(sick.mean()-sick.std()*x)[1])
plt.plot([0.1, 0.2, 0.3, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 16, 32], std_coef['coef'].as_matrix(), 'ro')
plt.axis()
plt.title('Train score')
plt.show()
plt.plot([0.1, 0.2, 0.3, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 16, 32], std_coef_test['coef'].as_matrix(), 'ro')
plt.axis()
plt.title('Test score')
plt.show()
Explanation: This gives a score of 80% on the training set and 71% on the test set.
Different thresholds are tried below:
End of explanation |
12,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 1a
Step2: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
<h2> Explore data </h2>
The data is natality data (record of births in the US). The goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data. We'll first create a SQL query using the natality data after the year 2000.
Step3: Let's create a BigQuery client that we can use throughout the notebook.
Step4: Let's now examine the result of a BiqQuery call in a Pandas DataFrame using our newly created client.
Step6: First, let's get the set of all valid column names in the natality dataset. We can do this by accessing the INFORMATION_SCHEMA for the table from the dataset.
Step7: We can print our valid columns set to see all of the possible columns we have available in the dataset. Of course, you could also find this information by going to the Schema tab when selecting the table in the BigQuery UI.
Step11: Let's write a query to find the unique values for each of the columns and the count of those values.
This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value.
Step12: Make a bar plot to see is_male with avg_wt linearly scaled and num_babies logarithmically scaled.
Step13: Make a bar plot to see mother_age with avg_wt linearly scaled and num_babies linearly scaled.
Step14: Make a bar plot to see plurality with avg_wt linearly scaled and num_babies logarithmically scaled.
Step15: Make a bar plot to see gestation_weeks with avg_wt linearly scaled and num_babies logarithmically scaled. | Python Code:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
from google.cloud import bigquery
Explanation: LAB 1a: Exploring natality dataset.
Learning Objectives
Use BigQuery to explore natality dataset
Use Cloud AI Platform Notebooks to plot data explorations
Introduction
In this notebook, we will explore the natality dataset before we begin model development and training to predict the weight of a baby before it is born. We will use BigQuery to explore the data and use Cloud AI Platform Notebooks to plot data explorations.
Load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
End of explanation
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(YEAR AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
Explanation: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
<h2> Explore data </h2>
The data is natality data (record of births in the US). The goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data. We'll first create a SQL query using the natality data after the year 2000.
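As an editorial sketch of how that hash could later be used (this split is an assumption, not shown in this lab), wrapping the query above lets the hashmonth alias drive a repeatable ~80/20 split:
train_query = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 10)) < 8"   # ~80% of hash buckets for training
eval_query = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 10)) >= 8"   # remaining ~20% for evaluation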
End of explanation
bq = bigquery.Client()
Explanation: Let's create a BigQuery client that we can use throughout the notebook.
End of explanation
df = bq.query(query + " LIMIT 100").to_dataframe()
df.head()
Explanation: Let's now examine the result of a BigQuery call in a Pandas DataFrame using our newly created client.
End of explanation
# Query to get all column names within table schema
sql =
SELECT
column_name
FROM
publicdata.samples.INFORMATION_SCHEMA.COLUMNS
WHERE
table_name = "natality"
# Send query through BigQuery client and store output to a dataframe
valid_columns_df = bq.query(sql).to_dataframe()
# Convert column names in dataframe to a set
valid_columns_set = valid_columns_df["column_name"].tolist()
Explanation: First, let's get the set of all valid column names in the natality dataset. We can do this by accessing the INFORMATION_SCHEMA for the table from the dataset.
End of explanation
print(valid_columns_set)
Explanation: We can print our valid columns set to see all of the possible columns we have available in the dataset. Of course, you could also find this information by going to the Schema tab when selecting the table in the BigQuery UI.
End of explanation
def get_distinct_values(valid_columns_set, column_name):
    """Gets distinct value statistics of BigQuery data column.

    Args:
        valid_columns_set: set, the set of all possible valid column names in
            table.
        column_name: str, name of column in BigQuery.
    Returns:
        Dataframe of unique values, their counts, and averages.
    """
    assert column_name in valid_columns_set, (
        "{column_name} is not a valid column_name".format(
            column_name=column_name))
    sql = """
    SELECT
        {column_name},
        COUNT(1) AS num_babies,
        AVG(weight_pounds) AS avg_wt
    FROM
        publicdata.samples.natality
    WHERE
        year > 2000
    GROUP BY
        {column_name}
    """.format(column_name=column_name)
    return bq.query(sql).to_dataframe()
def plot_distinct_values(valid_columns_set, column_name, logy=False):
    """Plots distinct value statistics of BigQuery data column.

    Args:
        valid_columns_set: set, the set of all possible valid column names in
            table.
        column_name: str, name of column in BigQuery.
        logy: bool, if plotting counts in log scale or not.
    """
    df = get_distinct_values(valid_columns_set, column_name)
    df = df.sort_values(column_name)
    df.plot(
        x=column_name, y="num_babies", logy=logy, kind="bar", figsize=(12, 5))
    df.plot(x=column_name, y="avg_wt", kind="bar", figsize=(12, 5))
Explanation: Let's write a query to find the unique values for each of the columns and the count of those values.
This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value.
End of explanation
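Before plotting, it can also help to look at the raw aggregation returned for a single column; this is an optional peek using the helper defined above.
```python
# Optional: inspect the aggregated counts and average weights for one column.
is_male_df = get_distinct_values(valid_columns_set, "is_male")
print(is_male_df.head())
```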
plot_distinct_values(valid_columns_set, column_name="is_male", logy=False)
Explanation: Make a bar plot to see is_male with avg_wt linearly scaled and num_babies logarithmically scaled.
End of explanation
plot_distinct_values(valid_columns_set, column_name="mother_age", logy=False)
Explanation: Make a bar plot to see mother_age with avg_wt linearly scaled and num_babies linearly scaled.
End of explanation
plot_distinct_values(valid_columns_set, column_name="plurality", logy=True)
Explanation: Make a bar plot to see plurality with avg_wt linearly scaled and num_babies logarithmically scaled.
End of explanation
plot_distinct_values(
valid_columns_set, column_name="gestation_weeks", logy=True)
Explanation: Make a bar plot to see gestation_weeks with avg_wt linearly scaled and num_babies logarithmically scaled.
End of explanation |
12,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The objective of this notebook is to show how to read and plot data from a mooring (time series).
Step1: Data reading
The data file is located in the datafiles directory.
Step2: As the platform is fixed, we will work on time series.<br/>
We will read the time and the sea water temperature variables, as well as their respective units.
Step3: Basic plot
For a time series, we simply use the plot function of matplotlib.<br/>
Also, we set the font size to 16
Step4: The units set for the time is maybe not the easiest to read.<br/>
However the netCDF4 module offers easy solutions to properly convert the time.
Converting time units
NetCDF4 provides the function num2date to convert the time vector into dates.<br/>
http
Step5: The dates contains datetime objects.
We also extract the platform name from the file
Step6: Finally, to avoid to have the overlap of the date ticklabels, we use the autofmt_xdate function.<br/>
Everything is in place to create the improved plot. | Python Code:
%matplotlib inline
import netCDF4
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import colors
from mpl_toolkits.basemap import Basemap
Explanation: The objective of this notebook is to show how to read and plot data from a mooring (time series).
End of explanation
datadir = './datafiles/'
datafile = 'GL_TS_MO_62164.nc'
Explanation: Data reading
The data file is located in the datafiles directory.
End of explanation
with netCDF4.Dataset(datadir + datafile) as nc:
time0 = nc.variables['TIME'][:]
time0_units = nc.variables['TIME'].units
temperature = nc.variables['TEMP'][:]
temperature_units = nc.variables['TEMP'].units
print ('Temperature units = %s' %temperature_units)
Explanation: As the platform is fixed, we will work on time series.<br/>
We will read the time and the sea water temperature variables, as well as their respective units.
End of explanation
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(time0, temperature, 'k-')
plt.xlabel(time0_units)
plt.ylabel(temperature_units)
plt.show()
Explanation: Basic plot
For a time series, we simply use the plot function of matplotlib.<br/>
Also, we set the font size to 16:
End of explanation
from netCDF4 import num2date
dates = num2date(time0, units=time0_units)
print dates[:5]
Explanation: The units set for the time is maybe not the easiest to read.<br/>
However the netCDF4 module offers easy solutions to properly convert the time.
Converting time units
NetCDF4 provides the function num2date to convert the time vector into dates.<br/>
http://unidata.github.io/netcdf4-python/#section7
End of explanation
with netCDF4.Dataset(datadir + datafile) as nc:
platform_name = nc.platform_name
Explanation: The dates contains datetime objects.
We also extract the platform name from the file:
End of explanation
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(dates, temperature, 'k-')
plt.ylabel(temperature_units)
fig.autofmt_xdate()
plt.title('Temperature at ' + platform_name)
plt.show()
Explanation: Finally, to avoid to have the overlap of the date ticklabels, we use the autofmt_xdate function.<br/>
Everything is in place to create the improved plot.
End of explanation |
12,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 12 - Introduction to Deep Learning
by Alejandro Correa Bahnsen
version 0.1, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License
Based on the slides and presentation by Alec Radford github
For this class you must install theano
pip install theano
Motivation
How do we program a computer to recognize a picture of a
handwritten digit as a 0-9?
What if we have 60,000 of these images and their label?
Step2: Naive model
For each image, find the “most similar” image and guess
that as the label
Step3: Let's try another example
Step4: Logistic Regression
Logistic regression is a probabilistic, linear classifier. It is parametrized
by a weight matrix $W$ and a bias vector $b$ Classification is
done by projecting data points onto a set of hyperplanes, the distance to
which is used to determine a class membership probability.
Mathematically, this can be written as
Step5: ```
Theano is a Python library that lets you to define, optimize, and evaluate mathematical expressions, especially ones with multi-dimensional arrays (numpy.ndarray). Using Theano it is possible to attain speeds rivaling hand-crafted C implementations for problems involving large amounts of data. It can also surpass C on a CPU by many orders of magnitude by taking advantage of recent GPUs.
Theano combines aspects of a computer algebra system (CAS) with aspects of an optimizing compiler. It can also generate customized C code for many mathematical operations. This combination of CAS with optimizing compilation is particularly useful for tasks in which complicated mathematical expressions are evaluated repeatedly and evaluation speed is critical. For situations where many different expressions are each evaluated once Theano can minimize the amount of compilation/analysis overhead, but still provide symbolic features such as automatic differentiation.
```
Step6: initialize model
Step7: One iteration
Step8: Now for 100 epochs
Step9: Checking the results
Step10: Simple Neural Net
Add a hidden layer with a sigmoid activation function
Step11: Complex Neural Net
Two hidden layers with dropout
Step12: Understanding rectifier units
Step13: RMSprop
RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in
Lecture 6e of his Coursera Class
RMSprop and Adadelta have both been developed independently around the same time stemming from the need to resolve Adagrad's radically diminishing learning rates. RMSprop in fact is identical to the first update vector of Adadelta that we derived above
Step14: Convolutional Neural Network
In machine learning, a convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network in which the connectivity pattern between its neurons is inspired by the organization of the animal visual cortex, whose individual neurons are arranged in such a way that they respond to overlapping regions tiling the visual field. Convolutional networks were inspired by biological processes and are variations of multilayer perceptrons designed to use minimal amounts of preprocessing. (Wikipedia)
Motivation
Convolutional Neural Networks (CNN) are biologically-inspired variants of MLPs.
From Hubel and Wiesel's early work on the cat's visual cortex, we
know the visual cortex contains a complex arrangement of cells. These cells are
sensitive to small sub-regions of the visual field, called a receptive
field. The sub-regions are tiled to cover the entire visual field. These
cells act as local filters over the input space and are well-suited to exploit
the strong spatially local correlation present in natural images.
Additionally, two basic cell types have been identified
Step15: Modify dropout function
Step16: reshape into conv 4tensor (b, c, 0, 1) format | Python Code:
import numpy as np
from load import mnist
X_train, X_test, y_train2, y_test2 = mnist(onehot=True)
y_train = np.argmax(y_train2, axis=1)
y_test = np.argmax(y_test2, axis=1)
X_train[1].reshape((28, 28)).round(2)[:, 4:9].tolist()
from pylab import imshow, show, cm
import matplotlib.pylab as plt
%matplotlib inline
def view_image(image, label="", predicted='', size=4):
    """View a single image."""
plt.figure(figsize = (size, size))
plt.imshow(image.reshape((28, 28)), cmap=cm.gray, )
plt.tick_params(axis='x',which='both', bottom='off',top='off', labelbottom='off')
plt.tick_params(axis='y',which='both', left='off',top='off', labelleft='off')
show()
if predicted == '':
print("Label: %s" % label)
else:
print('Label: ', str(label), 'Predicted: ', str(predicted))
view_image(X_train[1], y_train[1])
view_image(X_train[40000], y_train[40000])
Explanation: 12 - Introduction to Deep Learning
by Alejandro Correa Bahnsen
version 0.1, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License
Based on the slides and presentation by Alec Radford github
For this class you must install theano
pip install theano
Motivation
How do we program a computer to recognize a picture of a
handwritten digit as a 0-9?
What if we have 60,000 of these images and their label?
End of explanation
def similarity(image, images):
similarities = []
image = image.reshape((28, 28))
images = images.reshape((-1, 28, 28))
for i in range(images.shape[0]):
distance = np.sqrt(np.sum(image - images[i]) ** 2)
sim = 1 / distance
similarities.append(sim)
return similarities
np.random.seed(52)
small_train = np.random.choice(X_train.shape[0], 100)
view_image(X_test[0])
similarities = similarity(X_test[0], X_train[small_train])
view_image(X_train[small_train[np.argmax(similarities)]])
Explanation: Naive model
For each image, find the “most similar” image and guess
that as the label
End of explanation
view_image(X_test[200])
similarities = similarity(X_test[200], X_train[small_train])
view_image(X_train[small_train[np.argmax(similarities)]])
Explanation: Let's try another example
End of explanation
import theano
from theano import tensor as T
import numpy as np
import datetime as dt
theano.config.floatX = 'float32'
Explanation: Logistic Regression
Logistic regression is a probabilistic, linear classifier. It is parametrized
by a weight matrix $W$ and a bias vector $b$ Classification is
done by projecting data points onto a set of hyperplanes, the distance to
which is used to determine a class membership probability.
Mathematically, this can be written as:
$$
P(Y=i\vert x, W,b) = softmax_i(W x + b)
$$
$$
P(Y=i|x, W,b) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}
$$
The output of the model or prediction is then done by taking the argmax of
the vector whose i'th element is $P(Y=i|x)$.
$$
y_{pred} = argmax_i P(Y=i|x,W,b)
$$
End of explanation
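A quick plain-numpy check of the softmax formula above (independent of Theano), just to confirm that each row of class probabilities sums to one:
```python
import numpy as np
scores = np.array([[2.0, 1.0, 0.1]])
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
print(probs, probs.sum())  # probabilities sum to 1
```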
def floatX(X):
# return np.asarray(X, dtype='float32')
return np.asarray(X, dtype=theano.config.floatX)
def init_weights(shape):
return theano.shared(floatX(np.random.randn(*shape) * 0.01))
def model(X, w):
return T.nnet.softmax(T.dot(X, w))
X = T.fmatrix()
Y = T.fmatrix()
w = init_weights((784, 10))
w.get_value()
Explanation: ```
Theano is a Python library that lets you to define, optimize, and evaluate mathematical expressions, especially ones with multi-dimensional arrays (numpy.ndarray). Using Theano it is possible to attain speeds rivaling hand-crafted C implementations for problems involving large amounts of data. It can also surpass C on a CPU by many orders of magnitude by taking advantage of recent GPUs.
Theano combines aspects of a computer algebra system (CAS) with aspects of an optimizing compiler. It can also generate customized C code for many mathematical operations. This combination of CAS with optimizing compilation is particularly useful for tasks in which complicated mathematical expressions are evaluated repeatedly and evaluation speed is critical. For situations where many different expressions are each evaluated once Theano can minimize the amount of compilation/analysis overhead, but still provide symbolic features such as automatic differentiation.
```
End of explanation
py_x = model(X, w)
y_pred = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
gradient = T.grad(cost=cost, wrt=w)
update = [[w, w - gradient * 0.05]]
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)
Explanation: initialize model
End of explanation
for start, end in zip(range(0, X_train.shape[0], 128), range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors = [(np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test)))]
errors
Explanation: One iteration
End of explanation
t0 = dt.datetime.now()
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
Explanation: Now for 100 epochs
End of explanation
y_pred = predict(X_test)
np.random.seed(2)
small_test = np.random.choice(X_test.shape[0], 10)
for i in small_test:
view_image(X_test[i], label=y_test[i], predicted=y_pred[i], size=1)
Explanation: Checking the results
End of explanation
def sgd(cost, params, lr=0.05):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
updates.append([p, p - g * lr])
return updates
def model(X, w_h, w_o):
h = T.nnet.sigmoid(T.dot(X, w_h))
pyx = T.nnet.softmax(T.dot(h, w_o))
return pyx
w_h = init_weights((784, 625))
w_o = init_weights((625, 10))
py_x = model(X, w_h, w_o)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
params = [w_h, w_o]
updates = sgd(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
Explanation: Simple Neural Net
Add a hidden layer with a sigmoid activation function
End of explanation
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
srng = RandomStreams()
def rectify(X):
return T.maximum(X, 0.)
Explanation: Complex Neural Net
Two hidden layers with dropout
End of explanation
def RMSprop(cost, params, lr=0.001, rho=0.9, epsilon=1e-6):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
acc = theano.shared(p.get_value() * 0.)
acc_new = rho * acc + (1 - rho) * g ** 2
gradient_scaling = T.sqrt(acc_new + epsilon)
g = g / gradient_scaling
updates.append((acc, acc_new))
updates.append((p, p - lr * g))
return updates
Explanation: Understanding rectifier units
End of explanation
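The rectifier defined above simply clips negative activations to zero; the same operation in plain numpy, as a sketch:
```python
import numpy as np
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(np.maximum(x, 0.))  # -> [0. 0. 0. 0.5 2.]
```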
def dropout(X, p=0.):
if p > 0:
retain_prob = 1 - p
X *= srng.binomial(X.shape, p=retain_prob, dtype=theano.config.floatX)
X /= retain_prob
return X
def model(X, w_h, w_h2, w_o, p_drop_input, p_drop_hidden):
X = dropout(X, p_drop_input)
h = rectify(T.dot(X, w_h))
h = dropout(h, p_drop_hidden)
h2 = rectify(T.dot(h, w_h2))
h2 = dropout(h2, p_drop_hidden)
py_x = softmax(T.dot(h2, w_o))
return h, h2, py_x
def softmax(X):
e_x = T.exp(X - X.max(axis=1).dimshuffle(0, 'x'))
return e_x / e_x.sum(axis=1).dimshuffle(0, 'x')
w_h = init_weights((784, 625))
w_h2 = init_weights((625, 625))
w_o = init_weights((625, 10))
noise_h, noise_h2, noise_py_x = model(X, w_h, w_h2, w_o, 0.2, 0.5)
h, h2, py_x = model(X, w_h, w_h2, w_o, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
params = [w_h, w_h2, w_o]
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
Explanation: RMSprop
RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in
Lecture 6e of his Coursera Class
RMSprop and Adadelta have both been developed independently around the same time stemming from the need to resolve Adagrad's radically diminishing learning rates. RMSprop in fact is identical to the first update vector of Adadelta that we derived above:
$$ E[g^2]_t = 0.9 E[g^2]_{t-1} + 0.1 g^2_t $$
$$ \theta_{t+1} = \theta_{t} - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}} g_{t} $$
RMSprop as well divides the learning rate by an exponentially decaying average of squared gradients. Hinton suggests $\gamma$ to be set to 0.9, while a good default value for the learning rate $\eta$ is 0.001.
End of explanation
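To make the update rule concrete, here is one RMSprop-style step on a single scalar parameter in plain numpy (the gradient value is made up):
```python
import numpy as np
lr, rho, eps = 0.001, 0.9, 1e-6
p, acc, g = 0.5, 0.0, 0.2             # parameter, running mean of g^2, current gradient
acc = rho * acc + (1 - rho) * g ** 2  # E[g^2]_t
p = p - lr * g / np.sqrt(acc + eps)   # parameter update
print(acc, p)
```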
# from theano.tensor.nnet.conv import conv2d
from theano.tensor.nnet import conv2d
from theano.tensor.signal.downsample import max_pool_2d
Explanation: Convolutional Neural Network
In machine learning, a convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network in which the connectivity pattern between its neurons is inspired by the organization of the animal visual cortex, whose individual neurons are arranged in such a way that they respond to overlapping regions tiling the visual field. Convolutional networks were inspired by biological processes and are variations of multilayer perceptrons designed to use minimal amounts of preprocessing. (Wikipedia)
Motivation
Convolutional Neural Networks (CNN) are biologically-inspired variants of MLPs.
From Hubel and Wiesel's early work on the cat's visual cortex, we
know the visual cortex contains a complex arrangement of cells. These cells are
sensitive to small sub-regions of the visual field, called a receptive
field. The sub-regions are tiled to cover the entire visual field. These
cells act as local filters over the input space and are well-suited to exploit
the strong spatially local correlation present in natural images.
Additionally, two basic cell types have been identified: Simple cells respond
maximally to specific edge-like patterns within their receptive field. Complex
cells have larger receptive fields and are locally invariant to the exact
position of the pattern.
The animal visual cortex being the most powerful visual processing system in
existence, it seems natural to emulate its behavior. Hence, many
neurally-inspired models can be found in the literature.
Sparse Connectivity
CNNs exploit spatially-local correlation by enforcing a local connectivity
pattern between neurons of adjacent layers. In other words, the inputs of
hidden units in layer m are from a subset of units in layer m-1, units
that have spatially contiguous receptive fields. We can illustrate this
graphically as follows:
Imagine that layer m-1 is the input retina. In the above figure, units in
layer m have receptive fields of width 3 in the input retina and are thus
only connected to 3 adjacent neurons in the retina layer. Units in layer
m+1 have a similar connectivity with the layer below. We say that their
receptive field with respect to the layer below is also 3, but their receptive
field with respect to the input is larger (5). Each unit is unresponsive to
variations outside of its receptive field with respect to the retina. The
architecture thus ensures that the learnt "filters" produce the strongest
response to a spatially local input pattern.
However, as shown above, stacking many such layers leads to (non-linear)
"filters" that become increasingly "global" (i.e. responsive to a larger region
of pixel space). For example, the unit in hidden layer m+1 can encode a
non-linear feature of width 5 (in terms of pixel space).
Shared Weights
In addition, in CNNs, each filter $h_i$ is replicated across the entire
visual field. These replicated units share the same parameterization (weight
vector and bias) and form a feature map.
In the above figure, we show 3 hidden units belonging to the same feature map.
Weights of the same color are shared---constrained to be identical. Gradient
descent can still be used to learn such shared parameters, with only a small
change to the original algorithm. The gradient of a shared weight is simply the
sum of the gradients of the parameters being shared.
Replicating units in this way allows for features to be detected regardless
of their position in the visual field. Additionally, weight sharing increases
learning efficiency by greatly reducing the number of free parameters being
learnt. The constraints on the model enable CNNs to achieve better
generalization on vision problems.
Details and Notation
A feature map is obtained by repeated application of a function across
sub-regions of the entire image, in other words, by convolution of the
input image with a linear filter, adding a bias term and then applying a
non-linear function. If we denote the k-th feature map at a given layer as
$h^k$, whose filters are determined by the weights $W^k$ and bias
$b_k$, then the feature map $h^k$ is obtained as follows (for
$tanh$ non-linearities):
$$
h^k_{ij} = \tanh ( (W^k * x)_{ij} + b_k ).
$$
Note
Recall the following definition of convolution for a 1D signal.
$$ o[n] = f[n]*g[n] = \sum_{u=-\infty}^{\infty} f[u] g[n-u] = \sum_{u=-\infty}^{\infty} f[n-u] g[u].
$$
This can be extended to 2D as follows:
$$o[m,n] = f[m,n]*g[m,n] = \sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} f[u,v] g[m-u,n-v].
$$
To form a richer representation of the data, each hidden layer is composed of
multiple feature maps, ${h^{(k)}, k=0..K}$. The weights $W$ of
a hidden layer can be represented in a 4D tensor containing elements for every
combination of destination feature map, source feature map, source vertical
position, and source horizontal position. The biases $b$ can be
represented as a vector containing one element for every destination feature
map. We illustrate this graphically as follows:
Figure 1: example of a convolutional layer
The figure shows two layers of a CNN. Layer m-1 contains four feature maps.
Hidden layer m contains two feature maps ($h^0$ and $h^1$).
Pixels (neuron outputs) in $h^0$ and $h^1$ (outlined as blue and
red squares) are computed from pixels of layer (m-1) which fall within their
2x2 receptive field in the layer below (shown as colored rectangles). Notice
how the receptive field spans all four input feature maps. The weights
$W^0$ and $W^1$ of $h^0$ and $h^1$ are thus 3D weight
tensors. The leading dimension indexes the input feature maps, while the other
two refer to the pixel coordinates.
Putting it all together, $W^{kl}_{ij}$ denotes the weight connecting
each pixel of the k-th feature map at layer m, with the pixel at coordinates
(i,j) of the l-th feature map of layer (m-1).
The Convolution Operator
ConvOp is the main workhorse for implementing a convolutional layer in Theano.
ConvOp is used by theano.tensor.signal.conv2d, which takes two symbolic inputs:
a 4D tensor corresponding to a mini-batch of input images. The shape of the
tensor is as follows: [mini-batch size, number of input feature maps, image
height, image width].
a 4D tensor corresponding to the weight matrix $W$. The shape of the
tensor is: [number of feature maps at layer m, number of feature maps at
layer m-1, filter height, filter width]
MaxPooling
Another important concept of CNNs is max-pooling, which is a form of
non-linear down-sampling. Max-pooling partitions the input image into
a set of non-overlapping rectangles and, for each such sub-region, outputs the
maximum value.
Max-pooling is useful in vision for two reasons:
* By eliminating non-maximal values, it reduces computation for upper layers.
It provides a form of translation invariance. Imagine
cascading a max-pooling layer with a convolutional layer. There are 8
directions in which one can translate the input image by a single pixel.
If max-pooling is done over a 2x2 region, 3 out of these 8 possible
configurations will produce exactly the same output at the convolutional
layer. For max-pooling over a 3x3 window, this jumps to 5/8.
Since it provides additional robustness to position, max-pooling is a
"smart" way of reducing the dimensionality of intermediate representations.
Max-pooling is done in Theano by way of
theano.tensor.signal.downsample.max_pool_2d. This function takes as input
an N dimensional tensor (where N >= 2) and a downscaling factor and performs
max-pooling over the 2 trailing dimensions of the tensor.
The Full Model: CovNet
Sparse, convolutional layers and max-pooling are at the heart of the LeNet
family of models. While the exact details of the model will vary greatly,
the figure below shows a graphical depiction of a LeNet model.
The lower-layers are composed to alternating convolution and max-pooling
layers. The upper-layers however are fully-connected and correspond to a
traditional MLP (hidden layer + logistic regression). The input to the
first fully-connected layer is the set of all features maps at the layer
below.
From an implementation point of view, this means lower-layers operate on 4D
tensors. These are then flattened to a 2D matrix of rasterized feature maps,
to be compatible with our previous MLP implementation.
End of explanation
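A quick sanity check of the spatial sizes produced by the conv/pool stack defined below, assuming the older max_pool_2d default that keeps the partial border; this is why the fully-connected weight matrix is sized 128 * 3 * 3:
```python
import math
size = 28
size = math.ceil((size + 3 - 1) / 2)  # 'full' 3x3 conv: 28 -> 30, 2x2 pool -> 15
size = math.ceil((size - 3 + 1) / 2)  # 'valid' 3x3 conv: 15 -> 13, pool -> 7
size = math.ceil((size - 3 + 1) / 2)  # 'valid' 3x3 conv: 7 -> 5, pool -> 3
print(size)  # 3, matching w4 = init_weights((128 * 3 * 3, 625))
```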
def model(X, w, w2, w3, w4, w_o, p_drop_conv, p_drop_hidden):
l1a = rectify(conv2d(X, w, border_mode='full'))
l1 = max_pool_2d(l1a, (2, 2))
l1 = dropout(l1, p_drop_conv)
l2a = rectify(conv2d(l1, w2))
l2 = max_pool_2d(l2a, (2, 2))
l2 = dropout(l2, p_drop_conv)
l3a = rectify(conv2d(l2, w3))
l3b = max_pool_2d(l3a, (2, 2))
# convert from 4tensor to normal matrix
l3 = T.flatten(l3b, outdim=2)
l3 = dropout(l3, p_drop_conv)
l4 = rectify(T.dot(l3, w4))
l4 = dropout(l4, p_drop_hidden)
pyx = softmax(T.dot(l4, w_o))
return l1, l2, l3, l4, pyx
Explanation: Modify dropout function
End of explanation
X_train2 = X_train.reshape(-1, 1, 28, 28)
X_test2 = X_test.reshape(-1, 1, 28, 28)
# now 4tensor for conv instead of matrix
X = T.ftensor4()
Y = T.fmatrix()
w = init_weights((32, 1, 3, 3))
w2 = init_weights((64, 32, 3, 3))
w3 = init_weights((128, 64, 3, 3))
w4 = init_weights((128 * 3 * 3, 625))
w_o = init_weights((625, 10))
noise_l1, noise_l2, noise_l3, noise_l4, noise_py_x = model(X, w, w2, w3, w4, w_o, 0.2, 0.5)
l1, l2, l3, l4, py_x = model(X, w, w2, w3, w4, w_o, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
params = [w, w2, w3, w4, w_o]
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
t1 = dt.datetime.now()
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train2[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train2)),
np.mean(y_test != predict(X_test2))))
print(i, errors[-1])
print('Current iter time: ', (dt.datetime.now()-t1).seconds / 60.)
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
Explanation: reshape into conv 4tensor (b, c, 0, 1) format
End of explanation |
12,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First TranSiesta example.
This example will only create the structures for input into TranSiesta. I.e. sisl's capabilities of creating geometries with different species is a core functionality which is handy for creating geometries for Siesta/TranSiesta.
This example will teach you one of the most important aspect of performing a successfull DFT+NEGF calculation. Namely that the electrodes should only couple to its nearest neighbouring cell.
Step1: The above two code blocks writes two different electrodes that we will test in TranSiesta. In this example we will have the transport direction along the 1st lattice vector (0th index in Python). Note how TranSiesta does not limit your choice of orientation. Any direction may be used as a semi-infinite direction, just as in TBtrans. | Python Code:
import sisl  # geometry helpers used throughout this example
graphene = sisl.geom.graphene(1.44, orthogonal=True)
graphene.write('STRUCT_ELEC_SMALL.fdf')
graphene.write('STRUCT_ELEC_SMALL.xyz')
elec = graphene.tile(2, axis=0)
elec.write('STRUCT_ELEC.fdf')
elec.write('STRUCT_ELEC.xyz')
Explanation: First TranSiesta example.
This example will only create the structures for input into TranSiesta. I.e. sisl's capabilities of creating geometries with different species is a core functionality which is handy for creating geometries for Siesta/TranSiesta.
This example will teach you one of the most important aspects of performing a successful DFT+NEGF calculation: namely, that the electrodes should only couple to their nearest neighbouring cell.
End of explanation
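As an informal check (assuming the usual sisl Geometry attributes), you can inspect the electrode size along the transport direction before deciding how many repetitions are needed:
```python
# Number of atoms and the first lattice vector (the semi-infinite direction).
print(elec.na)
print(elec.cell[0])
```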
device = elec.tile(3, axis=0)
device.write('STRUCT_DEVICE.fdf')
device.write('STRUCT_DEVICE.xyz')
Explanation: The above two code blocks write two different electrodes that we will test in TranSiesta. In this example we will have the transport direction along the 1st lattice vector (0th index in Python). Note how TranSiesta does not limit your choice of orientation. Any direction may be used as a semi-infinite direction, just as in TBtrans.
End of explanation |
12,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
This CodeLab demonstrates how to build a fused TFLite LSTM model for MNIST recognition using Keras, and how to convert it to TensorFlow Lite.
The CodeLab is very similar to the Keras LSTM CodeLab. However, we're creating fused LSTM ops rather than the unfused versoin.
Also note
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4 | Python Code:
!pip install tf-nightly
Explanation: Overview
This CodeLab demonstrates how to build a fused TFLite LSTM model for MNIST recognition using Keras, and how to convert it to TensorFlow Lite.
The CodeLab is very similar to the Keras LSTM CodeLab. However, we're creating fused LSTM ops rather than the unfused version.
Also note: We're not trying to build the model to be a real-world application, but only to demonstrate how to use TensorFlow Lite. You can build a much better model using CNN models. For a more canonical LSTM codelab, please see here.
Step 0: Prerequisites
It's recommended to try this feature with the newest TensorFlow nightly pip build.
End of explanation
import numpy as np
import tensorflow as tf
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(28, 28), name='input'),
tf.keras.layers.LSTM(20, time_major=False, return_sequences=True),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
Explanation: Step 1: Build the MNIST LSTM model.
End of explanation
# Load MNIST dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
# Change this to True if you want to test the flow rapidly.
# Train with a small dataset and only 1 epoch. The model will work poorly
# but this provides a fast way to test if the conversion works end to end.
_FAST_TRAINING = False
_EPOCHS = 5
if _FAST_TRAINING:
_EPOCHS = 1
_TRAINING_DATA_COUNT = 1000
x_train = x_train[:_TRAINING_DATA_COUNT]
y_train = y_train[:_TRAINING_DATA_COUNT]
model.fit(x_train, y_train, epochs=_EPOCHS)
model.evaluate(x_test, y_test, verbose=0)
Explanation: Step 2: Train & Evaluate the model.
We will train the model using MNIST data.
End of explanation
run_model = tf.function(lambda x: model(x))
# This is important, let's fix the input size.
BATCH_SIZE = 1
STEPS = 28
INPUT_SIZE = 28
concrete_func = run_model.get_concrete_function(
tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))
# model directory.
MODEL_DIR = "keras_lstm"
model.save(MODEL_DIR, save_format="tf", signatures=concrete_func)
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
tflite_model = converter.convert()
Explanation: Step 3: Convert the Keras model to TensorFlow Lite model.
End of explanation
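If you want to keep the converted model around, the flatbuffer can simply be written to disk; the filename here is only an example.
```python
with open('keras_lstm.tflite', 'wb') as f:
    f.write(tflite_model)
```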
# Run the model with TensorFlow to get expected results.
TEST_CASES = 10
# Run the model with TensorFlow Lite
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
for i in range(TEST_CASES):
expected = model.predict(x_test[i:i+1])
interpreter.set_tensor(input_details[0]["index"], x_test[i:i+1, :, :])
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
# Assert if the result of TFLite model is consistent with the TF model.
np.testing.assert_almost_equal(expected, result)
print("Done. The result of TensorFlow matches the result of TensorFlow Lite.")
# Please note: TfLite fused Lstm kernel is stateful, so we need to reset
# the states.
# Clean up internal states.
interpreter.reset_all_variables()
Explanation: Step 4: Check the converted TensorFlow Lite model.
Now load the TensorFlow Lite model and use the TensorFlow Lite python interpreter to verify the results.
End of explanation |
12,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integrating XML with Python
NLTK, the Python Natural Languge ToolKit package, is designed to work with plain text input, but sometimes your input is in XML. There are two principal paths to reconciliation
Step6: A separate window will open. Select the Models tab on the top, then averaged_perceptron_tagger and punkt, and then press the Download button. Then select the Corpora tab on the top, then wordnet, and then Download. You only have to download these once on each machine you use; the download process will install them for future use.
The annotation code
Here is the entire Python script that creates the output (we describe how the pieces work below). If you have saved the sample input as test.xml in the same directory as the location of this notebook, you can run the transformation in the notebook now, and the output should be displayed below
Step8: How it works
We’ve divided the program into sections below, with explanations after each section.
Shebang and docstring
Step9: A Python program begins with a shebang and a docstring. The shebang makes it easier to run the program from the command line, and the docstring documents what the program does, The shebang must be the very first line in a program. For now, think of the shebang as a magic incantation that should be copied and pasted verbatim; we explain below what it means. The docstring should be a single line framed by triple quotation marks, and it should describe concisely what the program does. When you execute the docstring by itself, as we do above, it echoes itself to the screen; when you run the program, though, it remains silent.
Imports
Step11: We import the ability to create a new XML document, which we’ll use to create our output, from minidom, and we import pulldom to parse the input document. We import nltk because we’ll use it to determine the part of speech and the lemma for each word.
Adding a <word> element to the output tree
Step13: When we tokenize the text into words below, we pass each word and its part of speech into the create_word_element() function. The function creates a new <word> element, adds the part of speech tag as an attribute, and then uses our lemmatize() function to determine the lemma and add that as an attribute, as well. It then creates a text() node, sets its value as the text of the word, and makes the text() node a child of the new <word> element. Finally, we return the <word> element to the calling routine, which inserts it into the output XML tree in the right place.
Converting treebank part of speech identifiers to Wordnet ones
Step14: We create a function called get_wordnet_pos(), which we’ll use later. This function is defined as taking one argument, called treebank_tag, which is a string, and it returns a value that is also a string. The reason we need to do this is that the NLTK part of speech tagger uses one set of part of speech identifiers, but Wordnet, the NLTK component that performs lemmatization, uses a different one. Since we do the part of speech tagging first, we use this function to convert that value to one that Wordnet will understand before we perform lemmatization. There are many treebank part of speech tags but only four Wordnet ones, for nouns, verbs, adjectives, and adverbs, and everything else is treated as a noun. Our function returns the correct value for the four defined parts of speech and defaults to the value for nouns otherwise.
Lemmatizing
Step16: We define a function called lemmatize() that takes two pieces of input, both of which are strings, and returns a string. The parameter text is the word to be lemmatized and the parameter pos is the part of speech in treebank form. We call the NLTK function to identify the lemma with nltk.stem.WordNetLemmatizer().lemmatize() with two arguments. The lemmatizer expects words to be lower case, so we convert the text to lower case with the lower() string method. And it requires a Wordnet part of speech, and not a treebank one, so we use our get_wordnet_pos() function to perform the conversion.
Doing the work
Step17: We refer below to line numbers, and if you’re reading this on line, you won’t those numbers. You can make them appear by running this notebook in a Jupyter session, clicking in the cell above, hitting the Esc key, to switch into command mode, and then typing l (the lowercase letter L), to toggle line numbering.
Our extract() function does all the work, calling on the functions we defined earlier as needed. Here’s how it works (with line numbers)
Step18: nltk.pos_tag()
nltk.pos_tag() takes a list of words (not a sentence) as its input. That means that we need to tokenize the sentence before tagging
Step19: You can look up the part of speech tags at https
Step20: In the example above, the lemmatizer recognizes that “thing” is the lemma for “things”, but it fails to lemmatize “Things” correctly because of the upper case letter.
Step21: In the example above, we supplied part of speech information, and the lemmatizer correctly treats “building” differently as a noun than as a verb. If we don’t specify a part of speech, it assumes everything is a noun
Step22: Input and output
Step23: We could, alternatively, have opened a file handle to read the file from disk, read it, and saved the results with | Python Code:
import nltk
# nltk.download()
Explanation: Integrating XML with Python
NLTK, the Python Natural Languge ToolKit package, is designed to work with plain text input, but sometimes your input is in XML. There are two principal paths to reconciliation: either use an XML environment that supports NLP (natural language processing) or let Python (which supports NLP through NLTK) manage the XML. The first approach, sticking to an XML environment, is illustrated in Week 3 of the Institute in the context of the eXist XML database, which integrates the Stanford Core NLP tools. Here we illustrate the second approach, letting Python manage the XML.
Before you make a mistake
It’s natural to think of parsing (reading, interpreting, and processing) XML with regular expressions, but it’s also Wrong for at least two sets of reasons:
Regular expressions operate over strings, and there are string differences in XML that are not informationally different. For example, the order of attributes on an element, whether the attributes are single- or double-quoted, whether a Unicode character is represented by a raw character or a numerical character reference, and many other details represent string differences that are not informational differences. The same is true of the extent and type of white space in some environments but not others. And the same is true when you have to recognize whether a right angle bracket or a single or double quotation mark is part of content or part of markup. XML-aware processing knows what’s informational and what isn’t, as well as what’s content and what’s markup. You don’t want to reinvent those wheels.
Parsing XML is a recursive operation. For example, if you have two elements of the same type nested inside each other, as in
xml
<emphasis><emphasis>a very emphatic thought</emphasis></emphasis>
parsing has to match up the correctly paired start and end tags. XML-aware processing knows where it is in the tree. That’s another wheel you don’t want to reinvent.
It’s also natural to think of writing XML by constructing a string, such as concatenating angle brackets and text and other bits and pieces. This is a Bad Idea because some decisions are context sensitive, and keeping track of the context is challenging. For example, attribute values can be quoted with single or double quotation marks, but if the value contains a single or double quotation mark, that can influence the choice, and there are situations where you may need to represent the quotation marks in attribute values with &quot; or &apos; character entities instead of as raw characters. A library that knows how to write XML will keep track of that for you.
Wrangling XML in Python
The Python Standard Library provides several tools for parsing and creating XML, and there are also third-party packages. In this tutorial we use two parts of the Standard Library: pulldom for parsing XML input and minidom for constructing XML output. You can read more about these modules by clicking on the preceding links to the Standard Library documentation, and also in the Structured text: XML chapter of the eTutorials.org Python tutorial.
To illustrate how to read and write XML with Python we’ll read in a small input XML document, tag each word as a <word> element, and add part of speech (POS) and lemma (dictionary form) information as @pos and @lemma attributes of the <word> elements. We’ll use pulldom to read, parse, and process the input document, NLTK to determine the part of speech and the lemma, and minidom to create the output.
Input XML
Create the following small XML document in a work directory and save with a filename like test.xml:
xml
<root>
<p speaker="hamlet">Hamlet is a prince of Denmark.</p>
<p speaker='ophelia'>Things end badly for Ophelia.</p>
<p speaker="nobody">Julius Caesar does not appear in this play.</p>
</root>
Desired output XML
The desired output is:
```xml
<?xml version="1.0" ?>
<root>
<p speaker="hamlet">
<word lemma="hamlet" pos="NNP">Hamlet</word>
<word lemma="be" pos="VBZ">is</word>
<word lemma="a" pos="DT">a</word>
<word lemma="prince" pos="NN">prince</word>
<word lemma="of" pos="IN">of</word>
<word lemma="denmark" pos="NNP">Denmark</word>
<word lemma="." pos=".">.</word>
</p>
<p speaker="ophelia">
<word lemma="thing" pos="NNS">Things</word>
<word lemma="end" pos="VBP">end</word>
<word lemma="badly" pos="RB">badly</word>
<word lemma="for" pos="IN">for</word>
<word lemma="ophelia" pos="NNP">Ophelia</word>
<word lemma="." pos=".">.</word>
</p>
<p speaker="nobody">
<word lemma="julius" pos="NNP">Julius</word>
<word lemma="caesar" pos="NNP">Caesar</word>
<word lemma="do" pos="VBZ">does</word>
<word lemma="not" pos="RB">not</word>
<word lemma="appear" pos="VB">appear</word>
<word lemma="in" pos="IN">in</word>
<word lemma="this" pos="DT">this</word>
<word lemma="play" pos="NN">play</word>
<word lemma="." pos=".">.</word>
</p>
</root>
```
The python code
Before you run the code
NLTK is installed by default with Anaconda python, but the word tokenizer isn’t. To install the tokenizer, uncomment the second line below and run (if you’ve already installed the tokenizer, run the cell without uncommenting the second line):
End of explanation
#!/usr/bin/env python
"""Tag words and add POS and lemma information in XML document."""
from xml.dom.minidom import Document, Element
from xml.dom import pulldom
import nltk
def create_word_element(d: Document, text: str, pos: str) -> Element:
    """Create <word> element with POS and lemma attributes."""
word = d.createElement("word")
word.setAttribute("pos", pos)
word.setAttribute("lemma", lemmatize(text, pos))
t = d.createTextNode(text)
word.appendChild(t)
return word
def get_wordnet_pos(treebank_tag: str) -> str:
    """Replace treebank POS tags with wordnet ones; default POS is noun."""
pos_tags = {'J': nltk.corpus.reader.wordnet.ADJ, 'V': nltk.corpus.reader.wordnet.VERB,
'R': nltk.corpus.reader.wordnet.ADV}
return pos_tags.get(treebank_tag[0], nltk.corpus.reader.wordnet.NOUN)
def lemmatize(text: str, pos: str) -> str:
    """Identify lemma for current word."""
return nltk.stem.WordNetLemmatizer().lemmatize(text.lower(), get_wordnet_pos(pos))
def extract(input_xml) -> Document:
    """Process entire input XML document, firing on events."""
# Initialize output as XML document, point to most recent open node
d = Document()
current = d
# Start pulling; it continues automatically
doc = pulldom.parse(input_xml)
for event, node in doc:
if event == pulldom.START_ELEMENT:
current.appendChild(node)
current = node
elif event == pulldom.END_ELEMENT:
current = node.parentNode
elif event == pulldom.CHARACTERS:
# tokenize, pos-tag, create <word> as child of parent
words = nltk.word_tokenize(node.toxml())
tagged_words = nltk.pos_tag(words)
for (text, pos) in tagged_words:
word = create_word_element(d, text, pos)
current.appendChild(word)
return d
with open('test.xml', 'r') as test_in:
results = extract(test_in)
print(results.toprettyxml())
Explanation: A separate window will open. Select the Models tab on the top, then averaged_perceptron_tagger and punkt, and then press the Download button. Then select the Corpora tab on the top, then wordnet, and then Download. You only have to download these once on each machine you use; the download process will install them for future use.
The annotation code
Here is the entire Python script that creates the output (we describe how the pieces work below). If you have saved the sample input as test.xml in the same directory as the location of this notebook, you can run the transformation in the notebook now, and the output should be displayed below:
End of explanation
#!/usr/bin/env python
"""Tag words and add POS and lemma information in XML document."""
Explanation: How it works
We’ve divided the program into sections below, with explanations after each section.
Shebang and docstring
End of explanation
from xml.dom.minidom import Document, Element
from xml.dom import pulldom
import nltk
Explanation: A Python program begins with a shebang and a docstring. The shebang makes it easier to run the program from the command line, and the docstring documents what the program does, The shebang must be the very first line in a program. For now, think of the shebang as a magic incantation that should be copied and pasted verbatim; we explain below what it means. The docstring should be a single line framed by triple quotation marks, and it should describe concisely what the program does. When you execute the docstring by itself, as we do above, it echoes itself to the screen; when you run the program, though, it remains silent.
Imports
End of explanation
def create_word_element(d: Document, text: str, pos: str) -> Element:
    """Create <word> element with POS and lemma attributes."""
word = d.createElement("word")
word.setAttribute("pos", pos)
word.setAttribute("lemma", lemmatize(text, pos))
t = d.createTextNode(text)
word.appendChild(t)
return word
Explanation: We import the ability to create a new XML document, which we’ll use to create our output, from minidom, and we import pulldom to parse the input document. We import nltk because we’ll use it to determine the part of speech and the lemma for each word.
Adding a <word> element to the output tree
End of explanation
def get_wordnet_pos(treebank_tag: str) -> str:
    """Replace treebank POS tags with wordnet ones; default POS is noun."""
pos_tags = {'J': nltk.corpus.reader.wordnet.ADJ, 'V': nltk.corpus.reader.wordnet.VERB,
'R': nltk.corpus.reader.wordnet.ADV}
return pos_tags.get(treebank_tag[0], nltk.corpus.reader.wordnet.NOUN)
Explanation: When we tokenize the text into words below, we pass each word and its part of speech into the create_word_element() function. The function creates a new <word> element, adds the part of speech tag as an attribute, and then uses our lemmatize() function to determine the lemma and add that as an attribute, as well. It then creates a text() node, sets its value as the text of the word, and makes the text() node a child of the new <word> element. Finally, we return the <word> element to the calling routine, which inserts it into the output XML tree in the right place.
Converting treebank part of speech identifiers to Wordnet ones
End of explanation
def lemmatize(text: str, pos: str) -> str:
return nltk.stem.WordNetLemmatizer().lemmatize(text.lower(), get_wordnet_pos(pos))
Explanation: We create a function called get_wordnet_pos(), which we’ll use later. This function is defined as taking one argument, called treebank_tag, which is a string, and it returns a value that is also a string. The reason we need to do this is that the NLTK part of speech tagger uses one set of part of speech identifiers, but Wordnet, the NLTK component that performs lemmatization, uses a different one. Since we do the part of speech tagging first, we use this function to convert that value to one that Wordnet will understand before we perform lemmatization. There are many treebank part of speech tags but only four Wordnet ones, for nouns, verbs, adjectives, and adverbs, and everything else is treated as a noun. Our function returns the correct value for the four defined parts of speech and defaults to the value for nouns otherwise.
Lemmatizing
End of explanation
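A quick spot check of the conversion: a few treebank tags and the Wordnet constants they map to.
```python
for tag in ['NNS', 'VBZ', 'JJ', 'RB', 'DT']:
    print(tag, get_wordnet_pos(tag))
```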
def extract(input_xml) -> Document:
    """Process entire input XML document, firing on events."""
# Initialize output as XML document, point to most recent open node
d = Document()
current = d
# Start pulling; it continues automatically
doc = pulldom.parse(input_xml)
for event, node in doc:
if event == pulldom.START_ELEMENT:
current.appendChild(node)
current = node
elif event == pulldom.END_ELEMENT:
current = node.parentNode
elif event == pulldom.CHARACTERS:
# tokenize, pos-tag, create <word> as child of parent
words = nltk.word_tokenize(node.toxml())
tagged_words = nltk.pos_tag(words)
for (text, pos) in tagged_words:
word = create_word_element(d, text, pos)
current.appendChild(word)
return d
Explanation: We define a function called lemmatize() that takes two pieces of input, both of which are strings, and returns a string. The parameter text is the word to be lemmatized and the parameter pos is the part of speech in treebank form. We call the NLTK function to identify the lemma with nltk.stem.WordNetLemmatizer().lemmatize() with two arguments. The lemmatizer expects words to be lower case, so we convert the text to lower case with the lower() string method. And it requires a Wordnet part of speech, and not a treebank one, so we use our get_wordnet_pos() function to perform the conversion.
Doing the work
End of explanation
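As a tiny end-to-end check of extract() without touching the file system (the one-line XML document here is made up for illustration, and it assumes the NLTK models downloaded earlier):
```python
from io import BytesIO
demo = extract(BytesIO(b"<root><p>Hamlet thinks.</p></root>"))
print(demo.toprettyxml())
```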
sample = "We didn't realize that we could split contractions!"
nltk.word_tokenize(sample)
Explanation: We refer below to line numbers, and if you're reading this online, you won't see those numbers. You can make them appear by running this notebook in a Jupyter session, clicking in the cell above, hitting the Esc key to switch into command mode, and then typing l (the lowercase letter L) to toggle line numbering.
Our extract() function does all the work, calling on the functions we defined earlier as needed. Here’s how it works (with line numbers):
1: extract() is a function that gets called with one argument, which we assign to a parameter we’ve called input_xml.
4: Near the top of the full program we’ve already used from xml.dom.minidom import Document, Element to make the Document class (and the Element class) available to our program. Here we use it to create a new XML document, which we assign as the value of a new variable d. We’ll use this to build our output document.
5: The variable current points to the node that will be the parent of any new elements. The document node is the root of the entire document, so it’s the initial value of the current variable.
7: pulldom is a streaming parser, which means that once we start processing elements in the XML input tree, the parser keeps going until it has visited every node of the tree. We start that process with pulldom.parse(), telling it to parse the document we passed to it as the value of the input_xml parameter.
8: Parsing generates events like the start or end of an element or the presence of character data. There are other possible events, but these are the only ones we need to handle for our transformation. Each event provides a tuple that consists of two values, the name of the event (e.g., START_ELEMENT) and the value (e.g., an object of type node). We test the event type and process different types of events differently.
9–11: When we start a new element, we make it a child of the node identified by our current variable. This ensures that the output tree that we’re building will reproduce the structure of the input tree, and it also ensures that we create new <word> elements in the correct place. When we start an element, it’s the parent of any nodes we encounter until we find the corresponding END_ELEMENT event, so we make it a child of whatever node is current at the moment and then set the current variable to point to the node we just created. This means that, for example, when we encounter the first child of the root element of the input XML, we’ll make that a child of the root element of the output XML that we’re constructing.
12–13: When we encounter an END_ELEMENT event, that element can’t have any more children, so we set the current variable to point to its parent.
14–20: We’ll illustrate how the individual lines work below, but here’s a summary with everything in one place. When we encounter CHARACTERS while parsing, the value of the node is an XML text() node, and not a string. We convert it to a plain text string with the toxml() method, let NLTK break it into words with nltk.word_tokenize(), and assign the pieces to an array called words (line 16). Next, the nltk.pos_tag() function takes an array of words as its input (our words variable) and returns an array of tuples, that is, pairs of strings where the first is the original input word and the second is the part of speech according to treebank notation (17). It assigns this new array as the value of the tagged_words variable. We want to create a new <word> element in the output for each word, so we loop over that list of tuples (18). For each word, we call our create_word_element() function, which we defined earlier, and set the value of the variable word equal to the new <word> element (19). Finally, we make the new word a child of the current element, the one that was its parent in the input (20). There are other types of parse events, but we don’t need to do anything with them in this example, so we don’t write any code to process them.
Remind me about those NLTK functions again
nltk.word_tokenize()
nltk.word_tokenize() splits a text into words. It’s smarter than just splitting on white space; it treats punctuation as a word, and it knows about common English contractions:
End of explanation
sample = "We didn't realize that we could split contractions!"
words = nltk.word_tokenize(sample)
nltk.pos_tag(words)
Explanation: nltk.pos_tag()
nltk.pos_tag() takes a list of words (not a sentence) as its input. That means that we need to tokenize the sentence before tagging:
End of explanation
words = ['thing', 'things', 'Things']
[(word + ": " + nltk.stem.WordNetLemmatizer().lemmatize(word)) for word in words]
Explanation: You can look up the part of speech tags at https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html.
nltk.stem.WordNetLemmatizer().lemmatize()
The Wordnet lemmatizer tries to lemmatize (find the dictionary form) of a word with or without part of speech information, but without the part of speech, it guesses that everything is a noun. Remember that Wordnet knows about only nouns, verbs, adjectives, and adverbs, and that the part of speech tags are different in Wordnet than in the treebank system. Oh, and it assumes lower-case input, so if you give it a capitalized word, it won’t recognize it as an inflected form of something else, and will therefore return it unchanged.
End of explanation
words = [('building','n'), ('building','v')]
[(word + ": " + nltk.stem.WordNetLemmatizer().lemmatize(word, pos)) for (word, pos) in words]
Explanation: In the example above, the lemmatizer recognizes that “thing” is the lemma for “things”, but it fails to lemmatize “Things” correctly because of the upper case letter.
End of explanation
words = ['building']
[(word + ": " + nltk.stem.WordNetLemmatizer().lemmatize(word)) for word in words]
Explanation: In the example above, we supplied part of speech information, and the lemmatizer correctly treats “building” differently as a noun than as a verb. If we don’t specify a part of speech, it assumes everything is a noun:
End of explanation
with open('test.xml', 'r') as test_in:
results = extract(test_in)
print(results.toprettyxml())
Explanation: Input and output
End of explanation
contents = open('test.xml','r').read()
Explanation: We could, alternatively, have opened a file handle to read the file from disk, read it, and saved the results with:
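Note that this one-liner never explicitly closes the file handle. If you prefer automatic clean-up, the same read can reuse the with pattern shown above:
with open('test.xml', 'r') as test_in:
    contents = test_in.read()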
End of explanation |
12,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step11: Vertex constants
Setup up the following constants for Vertex
Step12: AutoML constants
Set constants unique to AutoML datasets and training
Step13: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify
Step14: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own AutoML tabular classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Job Service for batch prediction and custom training.
Step16: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do differently? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config"
Step17: Quick peek at your data
You will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
For training, you also need to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
Step18: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step19: Now save the unique dataset identifier for the Dataset resource instance you created.
Step20: Train the model
Now train an AutoML tabular classification model using your Vertex Dataset resource. To train the model, do the following steps
Step21: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are
Step22: Now save the unique identifier of the training pipeline you created.
Step23: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter
Step24: Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step25: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step26: Model deployment for batch prediction
Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.
For online prediction, you
Step27: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular is only supported for CSV. For CSV file, you make
Step28: Compute instance scaling
You have several choices on scaling the compute instances for handling your batch prediction requests
Step29: Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters
Step30: Now get the unique identifier for the batch prediction job you created.
Step31: Get information on a batch prediction job
Use this helper function get_batch_prediction_job, with the following parameter
Step33: Get Predictions
When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.
Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a CSV format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called predictions*.csv.
Now display (cat) the contents. You will see multiple rows, one for each prediction.
For each prediction
Step34: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML tabular classification model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_classification_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_classification_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create tabular classification models and do batch prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.
Objective
In this tutorial, you create an AutoML tabular classification model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Make a batch prediction.
There is one key difference between using batch prediction and using online prediction:
Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
End of explanation
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML tabular classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Job Service for batch prediction and custom training.
End of explanation
IMPORT_FILE = "gs://cloud-samples-data/tables/iris_1000.csv"
Explanation: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do differently? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}
The format for a Cloud Storage path is:
gs://[bucket_name]/[folder(s)/[file]
BigQuery
metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}
The format for a BigQuery path is:
bq://[collection].[dataset].[table]
Note that the uri field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files.
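For example, if your tabular data were split across several CSV shards (bucket and file names here are purely illustrative), the metadata could look like:
metadata = {"input_config": {"gcs_source": {"uri": [
    "gs://my-bucket/iris/part-1.csv",
    "gs://my-bucket/iris/part-2.csv",
]}}}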
Data preparation
The Vertex Dataset resource for tabular has a couple of requirements for your tabular data.
Must be in a CSV file or a BigQuery table.
CSV
For tabular classification, the CSV file has a few requirements:
The first row must be the heading -- note how this is different from Vision, Video and Language where the requirement is no heading.
All but one column are features.
One column is the label, which you will specify when you subsequently create the training pipeline.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
Explanation: Quick peek at your data
You will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
For training, you also need to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("iris-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates a Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
metadata: The Cloud Storage or BigQuery location of the tabular data.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
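As a small illustration (not part of the tutorial code, which simply calls result() with a timeout), you could check on the operation returned by clients["dataset"].create_dataset() before blocking on it; operation here stands for that returned object:
if not operation.done():
    print("Dataset creation is still running...")
result = operation.result(timeout=180)  # blocks until the operation finishes or the timeout expires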
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML tabular classification model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and ran as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
TRANSFORMATIONS = [
{"auto": {"column_name": "sepal_width"}},
{"auto": {"column_name": "sepal_length"}},
{"auto": {"column_name": "petal_length"}},
{"auto": {"column_name": "petal_width"}},
]
PIPE_NAME = "iris_pipe-" + TIMESTAMP
MODEL_NAME = "iris_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="classification"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
prediction_type: Whether we are doing "classification" or "regression".
target_column: The CSV heading column name for the column we want to predict (i.e., the label).
train_budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.
transformations: Specifies the feature engineering for each feature column.
For transformations, the list must have an entry for each feature column (the target column is not listed). The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to "auto" to tell AutoML to automatically determine it.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 30 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably only have one) we then print all the key names for each metric in the evaluation, and for a small set (logLoss and auPrc) you will print the result.
End of explanation
HEADING = "petal_length,petal_width,sepal_length,sepal_width"
INSTANCE_1 = "1.4,1.3,5.1,2.8"
INSTANCE_2 = "1.5,1.2,4.7,2.4"
Explanation: Model deployment for batch prediction
Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.
For online prediction, you:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Make online prediction requests to the Endpoint resource.
For batch-prediction, you:
Create a batch prediction job.
The job service will provision resources for the batch prediction request.
The results of the batch prediction request are returned to the caller.
The job service will unprovision the resoures for the batch prediction request.
Make a batch prediction request
Now do a batch prediction to your deployed model.
Make test items
You will use synthetic data as the test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.csv"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
f.write(HEADING + "\n")
f.write(str(INSTANCE_1) + "\n")
f.write(str(INSTANCE_2) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular is only supported for CSV. For CSV file, you make:
The first line is the heading with the feature (fields) heading names.
Each remaining line is a separate prediction request with the corresponding feature values.
For example:
"feature_1", "feature_2". ...
value_1, value_2, ...
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your batch prediction requests:
Single Instance: The batch prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.
Auto Scaling: The batch prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
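Purely as an illustration of the three choices (example values only; this tutorial sets both variables to 1):
# single instance:  MIN_NODES, MAX_NODES = 1, 1
# manual scaling:   MIN_NODES, MAX_NODES = 4, 4
# auto scaling:     MIN_NODES, MAX_NODES = 1, 8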
End of explanation
BATCH_MODEL = "iris_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "csv"
OUT_FORMAT = "csv" # [csv]
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None
)
Explanation: Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:
display_name: The human readable name for the prediction job.
model_name: The Vertex fully qualified identifier for the Model resource.
gcs_source_uri: The Cloud Storage path to the input file -- which you created above.
gcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to.
parameters: Additional filtering parameters for serving prediction results.
The helper function calls the job client service's create_batch_prediction_job method, with the following parameters:
parent: The Vertex location root path for Dataset, Model and Pipeline resources.
batch_prediction_job: The specification for the batch prediction job.
Let's now dive into the specification for the batch_prediction_job:
display_name: The human readable name for the prediction batch job.
model: The Vertex fully qualified identifier for the Model resource.
dedicated_resources: The compute resources to provision for the batch prediction job.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
starting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
model_parameters: Additional filtering parameters for serving prediction results. Note, image segmentation models do not support additional parameters.
input_config: The input source and format type for the instances to predict.
instances_format: The format of the batch prediction request file: csv only supported.
gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
output_config: The output destination and format for the predictions.
prediction_format: The format of the batch prediction response file: csv only supported.
gcs_destination: The output destination for the predictions.
This call is an asychronous operation. You will print from the response object a few select fields, including:
name: The Vertex fully qualified identifier assigned to the batch prediction job.
display_name: The human readable name for the prediction batch job.
model: The Vertex fully qualified identifier for the Model resource.
generate_explanations: Whether True/False explanations were provided with the predictions (explainability).
state: The state of the prediction job (pending, running, etc).
Since this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.
End of explanation
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
Explanation: Now get the unique identifier for the batch prediction job you created.
End of explanation
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
Explanation: Get information on a batch prediction job
Use this helper function get_batch_prediction_job, with the following paramter:
job_name: The Vertex fully qualified identifier for the batch prediction job.
The helper function calls the job client service's get_batch_prediction_job method, with the following paramter:
name: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- batch_job_id
The helper function will return the Cloud Storage path to where the predictions are stored -- gcs_destination.
End of explanation
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction*.csv
! gsutil cat $folder/prediction*.csv
break
time.sleep(60)
Explanation: Get Predictions
When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.
Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a CSV format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called predictions*.csv.
Now display (cat) the contents. You will see multiple rows, one for each prediction.
For each prediction:
The first four fields are the values (features) you did the prediction on.
The remaining fields are the confidence values, between 0 and 1, for each prediction.
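If you prefer to inspect the predictions programmatically rather than with gsutil cat, one possible approach (assuming the folder variable set in the code for this step, and that pandas is installed) is:
import pandas as pd
import tensorflow as tf
pred_files = tf.io.gfile.glob(folder + "/prediction*.csv")  # same file pattern as above
frames = [pd.read_csv(tf.io.gfile.GFile(f, "r")) for f in pred_files]
predictions_df = pd.concat(frames)  # one row per prediction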
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
12,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook provides a way to download data files using the <a href="http
Step1: All of the data files for Particle Physics Playground are currently hosted in this Google Drive folder. To download them, you will need the download_drive_file function, which takes the file name (with proper file ending) as an argument, and downloads it as a file of the same name to the 'data' directory included when you cloned Playground.
The download_file function can be used to download files from the web that aren't on Google Drive. This function takes a url address as an argument. Though this functionality is provided, it should be unnecessary for all of the included activities.
<a href = "https
Step2: Download the data here!
The snippet below can be used to download as much or as little of the extra data as you like. It is currently commented now and is set up to download the first two CLEO MC files, but you can edit it to grab whatever data you like.
Have fun!
Step3: <a href = "https
Step4: CMS data for top-quark reconstruction exercise
Step5: BaBar data | Python Code:
import pps_tools as pps
#pps.download_drive_file()
#pps.download_file()
Explanation: This notebook provides a way to download data files using the <a href="http://docs.python-requests.org/en/latest/">Python requests library</a>. You'll need to have this library installed on your system to do any work.
The first thing we do is import some local helper code that allows us to download a data file (or files), given the URL(s).
We also define where those data files can be found.
Make sure you execute the cell below before trying to download any of the CLEO or CMS data!
End of explanation
cleo_MC_files = ['Single_D0B_to_KK_ISR_LARGE.dat',
'Single_D0B_to_Kenu_ISR_LARGE.dat',
'Single_D0B_to_Kpipi0_ISR_LARGE.dat',
'Single_D0B_to_Kstenu_ISR_LARGE.dat',
'Single_D0B_to_phigamma_ISR_LARGE.dat',
'Single_D0B_to_pipi_ISR_LARGE.dat',
'Single_D0_to_KK_ISR_LARGE.dat',
'Single_D0_to_Kenu_ISR_LARGE.dat',
'Single_D0_to_Kpi_LARGE.dat',
'Single_D0_to_Kpipi0_ISR_LARGE.dat',
'Single_D0_to_Kstenu_ISR_LARGE.dat',
'Single_D0_to_phigamma_ISR_LARGE.dat',
'Single_D0_to_pipi_ISR_LARGE.dat',
'Single_Dm_to_Kpipi_ISR_LARGE.dat',
'Single_Dp_to_Kpipi_ISR_LARGE.dat']
cleo_data_files = ['data31_100k_LARGE.dat']
Explanation: All of the data files for Particle Physics Playground are currently hosted in this Google Drive folder. To download them, you will need the download_drive_file function, which takes the file name (with proper file ending) as an argument, and downloads it as a file of the same name to the 'data' directory included when you cloned Playground.
The download_file function can be used to download files from the web that aren't on Google Drive. This function takes a url address as an argument. Though this functionality is provided, it should be unnecessary for all of the included activities.
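For example, to grab a single Monte Carlo file by name (using one of the CLEO file names listed in this notebook), the call would look like:
pps.download_drive_file('Single_D0_to_Kpi_LARGE.dat')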
<a href = "https://en.wikipedia.org/wiki/CLEO_(particle_detector)">CLEO</a> data
Here is a list of Monte Carlo (MC) and data files from CLEO. The MC files are for specific decays of $D$ mesons, both charged and neutral. For any given file, there are always (CHECK THIS!!!!!!) two D mesons produced. One decays according to the measured branching fractions, and the other decays through a very specific process. The specific decay is in the name of the file. For example,
Single_D0_to_Kpi_LARGE.dat
would be simulating the following process:
$$e^+e^- \rightarrow D^0 \bar{D}^0$$
$$D^0 \rightarrow \textrm{standard decays}$$
$$\bar{D}^0 \rightarrow K^- \pi^+$$
where the $D^0$ and $\bar{D}^0$ can be exchanged.
End of explanation
'''
for filename in cleo_MC_files[0:2]:
pps.download_drive_file(filename)
''';
Explanation: Download the data here!
The snippet below can be used to download as much or as little of the extra data as you like. It is currently commented out and is set up to download the first two CLEO MC files, but you can edit it to grab whatever data you like.
Have fun!
End of explanation
cms_data_files = ['dimuons_100k.dat']
'''
pps.download_drive_file(cms_data_files[0])
''';
Explanation: <a href = "https://en.wikipedia.org/wiki/Compact_Muon_Solenoid">CMS</a> data
CMS dimuon data
End of explanation
cms_top_quark_files = ['data.zip',
'ttbar.zip',
'wjets.zip',
'dy.zip',
'ww.zip',
'wz.zip',
'zz.zip',
'single_top.zip',
'qcd.zip']
'''
for filename in cms_top_quark_files[0:2]:
pps.download_drive_file(filename)
''';
Explanation: CMS data for top-quark reconstruction exercise
End of explanation
babar_data_files = ['basicPID_R24-AllEvents-Run1-OnPeak-R24-38.hdf5',
'basicPID_R24-AllEvents-Run1-OnPeak-R24-388.hdf5',
'basicPID_R24-AllEvents-Run1-OnPeak-R24-1133.hdf5',
'basicPID_R24-AllEvents-Run1-OnPeak-R24-1552.hdf5',
'basicPID_R24-AllEvents-Run1-OnPeak-R24-1694.hdf5',
'basicPID_R24-AllEvents-Run1-OnPeak-R24-1920.hdf5',
'basicPID_R24-AllEvents-Run1-OnPeak-R24-2026.hdf5',
'basicPID_R24-AllEvents-Run1-OnPeak-R24-2781.hdf5',
'basicPID_R24-AllEvents-Run1-OnPeak-R24-2835.hdf5']
'''
for filename in babar_data_files[0:2]:
pps.download_drive_file(filename)
''';
Explanation: BaBar data
End of explanation |
12,117 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a pandas series whose values are numpy arrays. For simplicity, say
import pandas as pd
import numpy as np
series = pd.Series([np.array([1,2,3,4]), np.array([5,6,7,8]), np.array([9,10,11,12])], index=['file1', 'file2', 'file3'])
def g(s):
return pd.DataFrame.from_records(s.values,index=s.index)
df = g(series.copy()) |
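For reference, the frame built above should look roughly like this (a sketch of the expected output, derived from the inputs in the snippet):
python
print(df)
#        0   1   2   3
# file1  1   2   3   4
# file2  5   6   7   8
# file3  9  10  11  12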
12,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rebalancing Design Pattern
The Rebalancing Design Pattern provides various approaches for handling datasets that are inherently imbalanced. By this we mean datasets where one label makes up the majority of the dataset, leaving far fewer examples of other labels.
Step2: Downsampling
To demonstrate downsampling, we'll be using this synthetic fraud detection dataset from Kaggle. We've made a version of it available in a public Cloud Storage bucket.
Step3: Weighted classes and output bias
To demonstrate weighted classes, we'll use a different fraud detection dataset in BigQuery. This one has far fewer minority class examples than the one used in the example above.
Step4: We'll take all of the fraud examples from this dataset, and a subset of non-fraud. Then we'll shuffle and combine and look at the number of examples we have for each class.
Step5: Now let's try with weighted classes and add a bias initializer to our output layer. First, calculate the class weights.
Step6: Reframing
Step7: First, let's look at the cluster prediction results for an "average" example from our dataset.
Step8: Here, it's fairly obvious that this datapoint should be put in cluster 1, given the short distance from that cluster.
Step9: Let's compare this with a cluster prediction for an outlier baby weight.
Step10: Here there's a high distance from each cluster, which we can use to conclude that this might be an anomaly. | Python Code:
import itertools
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import xgboost as xgb
from tensorflow import keras
from tensorflow.keras import Sequential
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import MinMaxScaler
from sklearn.utils import shuffle
from google.cloud import bigquery
Explanation: Rebalancing Design Pattern
The Rebalancing Design Pattern provides various approaches for handling datasets that are inherently imbalanced. By this we mean datasets where one label makes up the majority of the dataset, leaving far fewer examples of other labels.
End of explanation
# Download the data and preview
!gsutil cp gs://ml-design-patterns/fraud_data_kaggle.csv .
fraud_data = pd.read_csv('fraud_data_kaggle.csv')
fraud_data.head()
# Drop a few columns we won't use for this demo
fraud_data = fraud_data.drop(columns=['nameOrig', 'nameDest', 'isFlaggedFraud'])
fraud_data = pd.get_dummies(fraud_data)
# Split into separate dataframes
fraud = fraud_data[fraud_data['isFraud'] == 1]
not_fraud = fraud_data[fraud_data['isFraud'] == 0]
# Take a random sample of non-fraud data
# The .005 frac will give us around an 80/20 split of not-fraud/fraud samples
# You can experiment with this value
not_fraud_sample = not_fraud.sample(random_state=2, frac=.005)
# Put the data back together and shuffle
fraud_data = pd.concat([not_fraud_sample,fraud])
fraud_data = shuffle(fraud_data, random_state=2)
# Look at our data balance after downsampling
fraud_data['isFraud'].value_counts()
train_test_split = int(len(fraud_data) * .8)
train_data = fraud_data[:train_test_split]
test_data = fraud_data[train_test_split:]
train_labels = train_data.pop('isFraud')
test_labels = test_data.pop('isFraud')
model = xgb.XGBRegressor(
objective='reg:linear'
)
model.fit(train_data.values, train_labels)
# Get some test predictions
y_pred = model.predict(test_data.values)
# To build a confusion matrix using the scikit utility, we'll need the values as ints
y_pred_formatted = []
for i in y_pred:
y_pred_formatted.append(int(round(i)))
cm = confusion_matrix(test_labels.values, y_pred_formatted)
print(cm)
# This is from the sklearn docs
# https://scikit-learn.org/0.18/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = np.round(cm.astype('float') / cm.sum(axis=1)[:, np.newaxis], 3)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# With downsampling, our model's accuracy on fraud is almost as good as non-fraud examples
# You can compare this by training a model on the full dataset if you'd like (it'll take a long time to train given the size)
classes = ['not fraud', 'fraud']
plot_confusion_matrix(cm, classes, normalize=True)
Explanation: Downsampling
To demonstrate downsampling, we'll be using this synthetic fraud detection dataset from Kaggle. We've made a version of it available in a public Cloud Storage bucket.
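Downsampling the majority class is only one option; a complementary sketch (not part of the original example) is to upsample the fraud class with replacement until the two classes are the same size, using the fraud / not_fraud frames from the code above:
python
fraud_upsampled = fraud.sample(n=len(not_fraud), replace=True, random_state=2)
balanced_data = shuffle(pd.concat([not_fraud, fraud_upsampled]), random_state=2)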
End of explanation
# To access BigQuery, you'll need to authenticate to your Cloud account
from google.colab import auth
auth.authenticate_user()
Explanation: Weighted classes and output bias
To demonstrate weighted classes, we'll use a different fraud detection dataset in BigQuery. This one has far fewer minority class examples than the one used in the example above.
End of explanation
%%bigquery fraud_df --project sara-cloud-ml
SELECT
*
FROM
`bigquery-public-data.ml_datasets.ulb_fraud_detection`
WHERE Class = 1
# This query will take a a minute to run
%%bigquery nonfraud_df --project sara-cloud-ml
SELECT
*
FROM
`bigquery-public-data.ml_datasets.ulb_fraud_detection`
WHERE Class = 0
AND RAND() < 0.05
bq_fraud_data = pd.concat([fraud_df, nonfraud_df])
bq_fraud_data.sort_values(by=['Time'])
# bq_fraud_data = shuffle(bq_fraud_data, random_state=22)
# Scale time and amount values
time_scaler = MinMaxScaler()
amt_scaler = MinMaxScaler()
bq_fraud_data['Time'] = time_scaler.fit_transform(bq_fraud_data['Time'].values.reshape(-1,1))
bq_fraud_data['Amount'] = amt_scaler.fit_transform(bq_fraud_data['Amount'].values.reshape(-1,1))
# See data balance
bq_fraud_data['Class'].value_counts()
train_test_split = int(len(bq_fraud_data) * .8)
train_data = bq_fraud_data[:train_test_split]
test_data = bq_fraud_data[train_test_split:]
train_labels = train_data.pop('Class')
test_labels = test_data.pop('Class')
# Create a tf dataset
train_dataset = tf.data.Dataset.from_tensor_slices((train_data.values, train_labels))
train_dataset = train_dataset.shuffle(len(train_data)).batch(1024)
test_dataset = tf.data.Dataset.from_tensor_slices((test_data.values, test_labels))
test_dataset = test_dataset.shuffle(len(test_data)).batch(1)
Explanation: We'll take all of the fraud examples from this dataset, and a subset of non-fraud. Then we'll shuffle and combine and look at the number of examples we have for each class.
End of explanation
# Get number of examples for each class from the training set
num_minority = train_labels.value_counts()[1]
num_majority = train_labels.value_counts()[0]
minority_class_weight = 1 / (num_minority / len(train_data)) / 2
majority_class_weight = 1 / (num_majority / len(train_data)) / 2
# Pass the weights to Keras in a dict
# The key is the index of each class
keras_class_weights = {0: majority_class_weight, 1: minority_class_weight}
print(keras_class_weights)
# Calculate output bias
output_bias = math.log(num_minority / num_majority)
print(output_bias)
fraud_model = keras.Sequential([
keras.layers.Dense(16, input_shape=(len(train_data.iloc[0]),), activation='relu'),
keras.layers.Dropout(0.25),
keras.layers.Dense(16, activation='relu'),
keras.layers.Dense(1, activation='sigmoid', bias_initializer=tf.keras.initializers.Constant(output_bias))
])
metrics = [
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='roc_auc'),
]
fraud_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=metrics)
fraud_model.fit(train_dataset, validation_data=test_dataset, epochs=10, class_weight=keras_class_weights)
Explanation: Now let's try with weighted classes and add a bias initializer to our output layer. First, calculate the class weights.
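As a sanity check on the manual calculation above, scikit-learn's helper should give essentially the same numbers (a sketch; recent scikit-learn versions require the keyword arguments shown):
python
from sklearn.utils.class_weight import compute_class_weight
weights = compute_class_weight(class_weight='balanced', classes=np.array([0, 1]), y=train_labels.values)
print({0: weights[0], 1: weights[1]})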
End of explanation
# This will take about a minute to run
%%bigquery --project sara-cloud-ml
CREATE OR REPLACE MODEL
`sara-cloud-ml.natality.baby_weight_clusters` OPTIONS(model_type='kmeans',
num_clusters=4) AS
SELECT
weight_pounds,
mother_age,
gestation_weeks
FROM
`bigquery-public-data.samples.natality`
LIMIT 10000
Explanation: Reframing: using cluster distance as a prediction signal
In this approach, train a clustering model and use the distance of new examples from clusters to detect anomalies. We'll train a kmeans model on the natality dataset to demonstrate this.
End of explanation
%%bigquery average_pred --project sara-cloud-ml
SELECT
*
FROM
ML.PREDICT (MODEL `sara-cloud-ml.natality.baby_weight_clusters`,
(
SELECT
7.0 as weight_pounds,
28 as mother_age,
40 as gestation_weeks
)
)
average_pred
Explanation: First, let's look at the cluster prediction results for an "average" example from our dataset.
End of explanation
# Print the resulting cluster distances
average_pred['NEAREST_CENTROIDS_DISTANCE'].iloc[0]
Explanation: Here, it's fairly obvious that this datapoint should be put in cluster 1, given the short distance from that cluster.
End of explanation
%%bigquery outlier_pred --project sara-cloud-ml
SELECT
*
FROM
ML.PREDICT (MODEL `sara-cloud-ml.natality.baby_weight_clusters`,
(
SELECT
3.0 as weight_pounds,
20 as mother_age,
27 as gestation_weeks
)
)
outlier_pred
Explanation: Let's compare this with a cluster prediction for an outlier baby weight.
End of explanation
outlier_pred['NEAREST_CENTROIDS_DISTANCE'].iloc[0]
Explanation: Here there's a high distance from each cluster, which we can use to conclude that this might be an anomaly.
End of explanation |
12,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Big Query Machine Learning (BQML)
Learning Objectives
- Understand that it is possible to build ML models in Big Query
- Understand when this is appropriate
- Experience building a model using BQML
Introduction
BigQuery is more than just a data warehouse, it also has some ML capabilities baked into it.
BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat, but for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML Framework like TensorFlow.
In this notebook, we will build a naive model using BQML. This notebook is intended to inspire usage of BQML, we will not focus on model performance.
Set up environment variables and load necessary libraries
Step1: Create BigQuery dataset
Prior to now we've just been reading an existing BigQuery table; now we're going to create our own, so we need some place to put it. In BigQuery parlance, Dataset means a folder for tables.
We will take advantage of BigQuery's Python Client to create the dataset.
Step2: Create model
To create a model (documentation)
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
Step3: Get training statistics
Because the query uses a CREATE MODEL statement to create a table, you do not see query results. The output is an empty string.
To get the training results we use the ML.TRAINING_INFO function.
Have a look at Step Three and Four of this tutorial to see a similar example.
Step4: 'eval_loss' is reported as mean squared error, so our RMSE is 8.29. Your results may vary.
Predict
To use our model to make predictions, we use ML.PREDICT. Let's use the taxifare_model you trained above to infer the cost of a taxi ride that occurs at 10 | Python Code:
from google import api_core
from google.cloud import bigquery
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
%env PROJECT=$PROJECT
Explanation: Big Query Machine Learning (BQML)
Learning Objectives
- Understand that it is possible to build ML models in Big Query
- Understand when this is appropriate
- Experience building a model using BQML
Introduction
BigQuery is more than just a data warehouse, it also has some ML capabilities baked into it.
BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat, but for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML Framework like TensorFlow.
In this notebook, we will build a naive model using BQML. This notebook is intended to inspire usage of BQML, we will not focus on model performance.
Set up environment variables and load necessary libraries
End of explanation
bq = bigquery.Client(project=PROJECT)
dataset = bigquery.Dataset(bq.dataset("bqml_taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created")
except api_core.exceptions.Conflict:
print("Dataset already exists")
Explanation: Create BigQuery dataset
Prior to now we've just been reading an existing BigQuery table; now we're going to create our own, so we need some place to put it. In BigQuery parlance, Dataset means a folder for tables.
We will take advantage of BigQuery's Python Client to create the dataset.
End of explanation
%%bigquery --project $PROJECT
CREATE or REPLACE MODEL bqml_taxifare.taxifare_model
OPTIONS(model_type = "linear_reg",
input_label_cols = ["label"]) AS
-- query to fetch training data
SELECT
(tolls_amount + fare_amount) AS label,
pickup_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude
FROM
`nyc-tlc.yellow.trips`
WHERE
-- Clean Data
trip_distance > 0
AND passenger_count > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
-- repeatable 1/5000th sample
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1
Explanation: Create model
To create a model (documentation)
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
End of explanation
%%bigquery --project $PROJECT
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `bqml_taxifare.taxifare_model`)
Explanation: Get training statistics
Because the query uses a CREATE MODEL statement to create a table, you do not see query results. The output is an empty string.
To get the training results we use the ML.TRAINING_INFO function.
Have a look at Step Three and Four of this tutorial to see a similar example.
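If you want the standard evaluation metrics rather than the raw training curve, one option (a sketch using the bq client created earlier) is BigQuery ML's ML.EVALUATE function:
python
eval_df = bq.query(
    "SELECT * FROM ML.EVALUATE(MODEL `bqml_taxifare.taxifare_model`)"
).to_dataframe()
print(eval_df)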
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT
predicted_label
FROM
ML.PREDICT(MODEL `bqml_taxifare.taxifare_model`,
(
SELECT
TIMESTAMP "2014-01-03 10:00:00" as pickup_datetime,
-74.0080 as pickup_longitude,
40.7434 as pickup_latitude,
-73.7781 as dropoff_longitude,
40.6413 as dropoff_latitude
))
Explanation: 'eval_loss' is reported as mean squared error, so our RMSE is 8.29. Your results may vary.
Predict
To use our model to make predictions, we use ML.PREDICT. Let's use the taxifare_model you trained above to infer the cost of a taxi ride that occurs at 10:00 am on January 3rd, 2014 going from the Google Office in New York (latitude: 40.7434, longitude: -74.0080) to the JFK airport (latitude: 40.6413, longitude: -73.7781).
Have a look at Step Five of this tutorial to see another example.
End of explanation |
12,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Upper Air Analysis using Declarative Syntax
The MetPy declarative syntax allows for a simplified interface to creating common
meteorological analyses including upper air observation plots.
Step1: Getting the data
In this example, data is originally from the Iowa State Upper-air archive
(https
Step2: Plotting the data
Use the declarative plotting interface to create a CONUS upper-air map for 500 hPa | Python Code:
from datetime import datetime
import pandas as pd
from metpy.cbook import get_test_data
import metpy.plots as mpplots
from metpy.units import units
Explanation: Upper Air Analysis using Declarative Syntax
The MetPy declarative syntax allows for a simplified interface to creating common
meteorological analyses including upper air observation plots.
End of explanation
data = pd.read_csv(get_test_data('UPA_obs.csv', as_file_obj=False))
Explanation: Getting the data
In this example, data is originally from the Iowa State Upper-air archive
(https://mesonet.agron.iastate.edu/archive/raob/) available through a Siphon method.
The data are pre-processed to attach latitude/longitude locations for each RAOB site.
End of explanation
# Plotting the Observations
obs = mpplots.PlotObs()
obs.data = data
obs.time = datetime(1993, 3, 14, 0)
obs.level = 500 * units.hPa
obs.fields = ['temperature', 'dewpoint', 'height']
obs.locations = ['NW', 'SW', 'NE']
obs.formats = [None, None, lambda v: format(v, '.0f')[:3]]
obs.vector_field = ('u_wind', 'v_wind')
obs.reduce_points = 0
# Add map features for the particular panel
panel = mpplots.MapPanel()
panel.layout = (1, 1, 1)
panel.area = (-124, -72, 20, 53)
panel.projection = 'lcc'
panel.layers = ['coastline', 'borders', 'states', 'land', 'ocean']
panel.plots = [obs]
# Collecting panels for complete figure
pc = mpplots.PanelContainer()
pc.size = (15, 10)
pc.panels = [panel]
# Showing the results
pc.show()
Explanation: Plotting the data
Use the declarative plotting interface to create a CONUS upper-air map for 500 hPa
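Once the PanelContainer has been built, it should also be possible to write the map straight to disk rather than only showing it (a sketch; the file name here is arbitrary):
python
pc.save('upper_air_500hPa.png')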
End of explanation |
12,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: What's this TensorFlow business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks
Step2: Example Model
Some useful utilities
. Remember that our image data is initially N x H x W x C, where
Step3: TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
Layers, Activations, Loss functions
Step4: Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.
Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture
Step5: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes)
Step6: You should see the following from the run above
(64, 10)
True
GPU!
Now, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.
Step7: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created provided above).
Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. See the TensorFlow documentation for more information
* Layers, Activations, Loss functions
Step8: Train the model
Below we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization
Step9: Check the accuracy of the model.
Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.
Step10: Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >= 70% accuracy on the validation set of CIFAR-10. You can use the run_model function from above.
Things you should try
Step11: Describe what you did here
In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network
Tell us here
Test Set - Do this only once
Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy. | Python Code:
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
Explanation: What's this TensorFlow business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook)
What is it?
TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropogation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray.
Why?
Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.
We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
How will I learn TensorFlow?
TensorFlow has many excellent tutorials available, including those from Google themselves.
Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
Load Datasets
End of explanation
# clear old variables
tf.reset_default_graph()
# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
def simple_model(X,y):
# define our weights (e.g. init_two_layer_convnet)
# setup variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
W1 = tf.get_variable("W1", shape=[5408, 10])
b1 = tf.get_variable("b1", shape=[10])
# define our graph (e.g. two_layer_convnet)
a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
h1 = tf.nn.relu(a1)
h1_flat = tf.reshape(h1,[-1,5408])
y_out = tf.matmul(h1_flat,W1) + b1
return y_out
y_out = simple_model(X,y)
# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
Explanation: Example Model
Some useful utilities
Remember that our image data is initially N x H x W x C, where:
* N is the number of datapoints
* H is the height of each image in pixels
* W is the width of each image in pixels
* C is the number of channels (usually 3: R, G, B)
This is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the pixels are relative to each other. When we input image data into fully connected affine layers, however, we want each data example to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.
The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in TensorFlow -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Hinge loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
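If the 5408 is puzzling, here is a quick back-of-the-envelope check (my own arithmetic, not part of the original notebook): a 7x7 filter over a 32x32 input with stride 2 and VALID padding gives (32 - 7) // 2 + 1 = 13 outputs per side, and 13 * 13 positions * 32 filters = 5408 values feeding the affine layer; the 10 is simply the number of CIFAR-10 classes.
python
print(((32 - 7) // 2 + 1) ** 2 * 32)  # 5408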
TensorFlow Details
In TensorFlow, much like in our previous notebooks, we'll first specifically initialize our variables, and then our network model.
End of explanation
def run_model(session, predict, loss_val, Xd, yd,
epochs=1, batch_size=64, print_every=100,
training=None, plot_losses=False):
# have tensorflow compute accuracy
correct_prediction = tf.equal(tf.argmax(predict,1), y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# shuffle indicies
train_indicies = np.arange(Xd.shape[0])
np.random.shuffle(train_indicies)
training_now = training is not None
# setting up variables we want to compute (and optimizing)
# if we have a training function, add that to things we compute
variables = [mean_loss,correct_prediction,accuracy]
if training_now:
variables[-1] = training
# counter
iter_cnt = 0
for e in range(epochs):
# keep track of losses and accuracy
correct = 0
losses = []
# make sure we iterate over the dataset once
for i in range(int(math.ceil(Xd.shape[0]/batch_size))):
# generate indicies for the batch
start_idx = (i*batch_size)%Xd.shape[0]
idx = train_indicies[start_idx:start_idx+batch_size]
# create a feed dictionary for this batch
feed_dict = {X: Xd[idx,:],
y: yd[idx],
is_training: training_now }
# get batch size
actual_batch_size = yd[idx].shape[0]
# have tensorflow compute loss and correct predictions
# and (if given) perform a training step
loss, corr, _ = session.run(variables,feed_dict=feed_dict)
# aggregate performance stats
losses.append(loss*actual_batch_size)
correct += np.sum(corr)
# print every now and then
if training_now and (iter_cnt % print_every) == 0:
print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\
.format(iter_cnt,loss,np.sum(corr)/actual_batch_size))
iter_cnt += 1
total_correct = correct/Xd.shape[0]
total_loss = np.sum(losses)/Xd.shape[0]
print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\
.format(total_loss,total_correct,e+1))
if plot_losses:
plt.plot(losses)
plt.grid(True)
plt.title('Epoch {} Loss'.format(e+1))
plt.xlabel('minibatch number')
plt.ylabel('minibatch loss')
plt.show()
return total_loss,total_correct
with tf.Session() as sess:
with tf.device("/gpu:0"): #"/cpu:0" or "/gpu:0"
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
Explanation: TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
BatchNorm: https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization
Training the model on one epoch
While we have defined a graph of operations above, in order to execute TensorFlow Graphs, by feeding them input data and computing the results, we first need to create a tf.Session object. A session encapsulates the control and state of the TensorFlow runtime. For more information, see the TensorFlow Getting started guide.
Optionally we can also specify a device context such as /cpu:0 or /gpu:0. For documentation on this behavior see this TensorFlow guide
You should see a validation loss of around 0.4 to 0.6 and an accuracy of 0.30 to 0.35 below
End of explanation
# clear old variables
tf.reset_default_graph()
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
shape_1 = 32
X = tf.placeholder(tf.float32, [None, shape_1, shape_1, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X,y,is_training):
conv1 = tf.layers.conv2d(
X,
filters=32,
kernel_size=[7, 7],
padding='SAME',
activation=tf.nn.relu)
bn1 = tf.layers.batch_normalization(
conv1,
axis=1,
training=is_training)
pool1 = tf.layers.max_pooling2d(
bn1,
pool_size=[2, 2],
strides=2)
flattened = tf.layers.flatten(pool1)
d1 = tf.layers.dense(flattened, 1024, activation=tf.nn.relu)
y_out = tf.layers.dense(d1, 10)
return y_out
y_out = complex_model(X,y,is_training)
Explanation: Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.
Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture:
7x7 Convolutional Layer with 32 filters and stride of 1
ReLU Activation Layer
Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
2x2 Max Pooling layer with a stride of 2
Affine layer with 1024 output units
ReLU Activation Layer
Affine layer from 1024 input units to 10 outputs
End of explanation
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, shape_1, shape_1,3)
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
print(ans.shape)
print(np.array_equal(ans.shape, np.array([64, 10])))
Explanation: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
End of explanation
try:
with tf.Session() as sess:
with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
except tf.errors.InvalidArgumentError:
print("no gpu found, please use Google Cloud if you want GPU acceleration")
# rebuild the graph
# trying to start a GPU throws an exception
# and also trashes the original graph
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = complex_model(X,y,is_training)
Explanation: You should see the following from the run above
(64, 10)
True
GPU!
Now, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.
End of explanation
# Inputs
# y_out: is what your model computes
# y: is your TensorFlow variable with label information
# Outputs
# mean_loss: a TensorFlow variable (scalar) with numerical loss
# optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
# define our loss
total_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.RMSPropOptimizer(1e-3) # select optimizer and set learning rate
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
Explanation: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created provided above).
Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. See the TensorFlow documentation for more information
* Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
* Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
End of explanation
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)
Explanation: Train the model
Below we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization
End of explanation
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
Explanation: Check the accuracy of the model.
Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.
End of explanation
# Feel free to play with this cell
import tensorflow.contrib.slim as slim
def vgg_like_model(X,y,is_training):
net = slim.repeat(X, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net= tf.layers.batch_normalization(net, axis=1, training=is_training)
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net= tf.layers.batch_normalization(net, axis=1, training=is_training)
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net= tf.layers.batch_normalization(net, axis=1, training=is_training)
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = tf.layers.flatten(net)
net = slim.fully_connected(net, 512, scope='fc7')
net= tf.layers.batch_normalization(net, training=is_training)
net = slim.fully_connected(net, 10, activation_fn=None, scope='fc8')
return net
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = vgg_like_model(X,y,is_training)
total_loss = tf.losses.sparse_softmax_cross_entropy(labels=y,logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(1e-3) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,4,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
Explanation: Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >= 70% accuracy on the validation set of CIFAR-10. You can use the run_model function from above.
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Pooling vs Strided Convolution: Do you use max pooling or just stride convolutions?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
Use TensorFlow Scope: Use TensorFlow scope and/or tf.layers to make it easier to write deeper networks. See this tutorial for how to use tf.layers.
Use Learning Rate Decay: As the notes point out, decaying the learning rate might help the model converge. Feel free to decay every epoch, when loss doesn't change over an entire epoch, or any other heuristic you find appropriate. See the Tensorflow documentation for learning rate decay.
Global Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image picture (1, 1 , Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (See Table 1 for their architecture).
Regularization: Add l2 weight regularization, or perhaps use Dropout as in the TensorFlow MNIST tutorial
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
You should use the validation set for hyperparameter search, and we'll save the test set for evaluating your architecture on the best parameters as selected by the validation set.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
Model ensembles
Data augmentation
New Architectures
ResNets where the input from the previous layer is added to the output.
DenseNets where inputs into previous layers are concatenated together.
This blog has an in-depth overview
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at >= 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training and validation set accuracies for your final trained network.
Have fun and happy training!
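As one concrete example of the learning-rate-decay suggestion above, a minimal TF1-style sketch (the hyperparameter values are arbitrary and reuse the mean_loss defined earlier) could look like:
python
global_step = tf.Variable(0, trainable=False)
decayed_lr = tf.train.exponential_decay(1e-3, global_step,
                                        decay_steps=1000, decay_rate=0.9,
                                        staircase=True)
optimizer = tf.train.AdamOptimizer(decayed_lr)
train_step = optimizer.minimize(mean_loss, global_step=global_step)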
End of explanation
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
Explanation: Describe what you did here
In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network
Tell us here
Test Set - Do this only once
Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
End of explanation |
12,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fast GP implementations
Step5: Benchmarking our implementation
Let's do some timing tests and compare them to what we get with two handy GP packages
Step6: <div style="background-color
Step7: <div style="background-color
Step8: <div style="background-color
Step9: <div style="background-color | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["figure.figsize"] = 12, 4
rcParams["font.size"] = 16
rcParams["text.usetex"] = False
rcParams["font.family"] = ["sans-serif"]
rcParams["font.sans-serif"] = ["cmss10"]
rcParams["axes.unicode_minus"] = False
# https://github.com/matplotlib/matplotlib/issues/12039
try:
old_get_unicode_index
except NameError:
print('Patching matplotlib.mathtext.get_unicode_index')
import matplotlib.mathtext as mathtext
old_get_unicode_index = mathtext.get_unicode_index
mathtext.get_unicode_index = lambda symbol, math=True:\
ord('-') if symbol == '-' else old_get_unicode_index(symbol, math)
Explanation: Fast GP implementations
End of explanation
import numpy as np
from scipy.linalg import cho_factor
def ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0):
"""
Return the ``N x M`` exponential squared
covariance matrix between time vectors `t1`
and `t2`. The kernel has amplitude `A` and
lengthscale `l`.
"""
if t2 is None:
t2 = t1
T2, T1 = np.meshgrid(t2, t1)
return A ** 2 * np.exp(-0.5 * (T1 - T2) ** 2 / l ** 2)
def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
"""
Return the log of the GP likelihood of the
data `y(t)` given uncertainty `sigma` and
an Exponential Squared Kernel with amplitude `A`
and length scale `l`.
"""
# The covariance and its determinant
npts = len(t)
kernel = ExpSquaredKernel
K = kernel(t, A=A, l=l) + sigma ** 2 * np.eye(npts)
# The marginal log likelihood
log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))
log_like -= 0.5 * np.linalg.slogdet(K)[1]
log_like -= 0.5 * npts * np.log(2 * np.pi)
return log_like
def draw_from_gaussian(mu, S, ndraws=1, eps=1e-12):
"""
Generate samples from a multivariate gaussian
specified by covariance ``S`` and mean ``mu``.
(We derived these equations in Day 1, Notebook 01, Exercise 7.)
"""
npts = S.shape[0]
L, _ = cho_factor(S + eps * np.eye(npts), lower=True)
L = np.tril(L)
u = np.random.randn(npts, ndraws)
x = np.dot(L, u) + mu[:, None]
return x.T
def compute_gp(t_train, y_train, t_test, sigma=0, A=1.0, l=1.0):
"""
Compute the mean vector and covariance matrix of a GP
at times `t_test` given training points `y_train(t_train)`.
The training points have uncertainty `sigma` and the
kernel is assumed to be an Exponential Squared Kernel
with amplitude `A` and lengthscale `l`.
"""
# Compute the required matrices
kernel = ExpSquaredKernel
# (use the hyperparameters passed to this function rather than hard-coded values)
Stt = kernel(t_train, A=A, l=l)
Stt += sigma ** 2 * np.eye(Stt.shape[0])
Spp = kernel(t_test, A=A, l=l)
Spt = kernel(t_test, t_train, A=A, l=l)
# Compute the mean and covariance of the GP
mu = np.dot(Spt, np.linalg.solve(Stt, y_train))
S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T))
return mu, S
%%time
np.random.seed(3)
t = np.linspace(0, 10, 10000)
sigma = np.ones_like(t) * 0.05
gp_mu, gp_S = compute_gp([], [], t, A=1.0, l=1.0)
y = draw_from_gaussian(gp_mu, gp_S)[0] + sigma * np.random.randn(len(t))
%%time
ln_gp_likelihood(t, y, sigma)
Explanation: Benchmarking our implementation
Let's do some timing tests and compare them to what we get with two handy GP packages: george and celerite. We'll learn how to use both along the way.
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 1a</h1>
</div>
Let's time how long our custom implementation of a GP takes for a rather long dataset. Create a time array of 10,000 points between 0 and 10 and time how long it takes to sample the prior of the GP for the default kernel parameters (unit amplitude and timescale). Add a bit of noise to the sample and then time how long it takes to evaluate the log likelihood for the dataset. Make sure to store the value of the log likelihood for later.
End of explanation
import george
%%time
kernel = george.kernels.ExpSquaredKernel(1.0)
gp = george.GP(kernel)
gp.compute(t, sigma)
%%time
print(gp.log_likelihood(y))
%%time
gp.sample()
Explanation: <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 1b</h1>
</div>
Let's time how long it takes to do the same operations using the george package (pip install george).
The kernel we'll use is
python
kernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2)
where amp = 1 and tau = 1 in this case.
To instantiate a GP using george, simply run
python
gp = george.GP(kernel)
The george package pre-computes a lot of matrices that are re-used in different operations, so before anything else, ask it to compute the GP model for your timeseries:
python
gp.compute(t, sigma)
Note that we've only given it the time array and the uncertainties, so as long as those remain the same, you don't have to re-compute anything. This will save you a lot of time in the long run!
Finally, the log likelihood is given by gp.log_likelihood(y) and a sample can be drawn by calling gp.sample().
How do the speeds compare? Did you get the same value of the likelihood (assuming you computed it for the same sample in both cases)?
End of explanation
%%time
gp = george.GP(kernel, solver=george.HODLRSolver)
gp.compute(t, sigma)
%%time
gp.log_likelihood(y)
Explanation: <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 1c</h1>
</div>
george offers a fancy GP solver called the HODLR solver, which makes some approximations that dramatically speed up the matrix algebra. Instantiate the GP object again by passing the keyword solver=george.HODLRSolver and re-compute the log likelihood. How long did that take?
(I wasn't able to draw samples using the HODLR solver; unfortunately this may not be implemented.)
End of explanation
import celerite
from celerite import terms
%%time
kernel = terms.Matern32Term(np.log(1), np.log(1))
gp = celerite.GP(kernel)
gp.compute(t, sigma)
%%time
gp.log_likelihood(y)
%%time
gp.sample()
Explanation: <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 2</h1>
</div>
The george package is super useful for GP modeling, and I recommend you read over the docs and examples. It implements several different kernels that come in handy in different situations, and it has support for multi-dimensional GPs. But if all you care about are GPs in one dimension (in this case, we're only doing GPs in the time domain, so we're good), then celerite is what it's all about:
bash
pip install celerite
Check out the docs here, as well as several tutorials. There is also a paper that discusses the math behind celerite. The basic idea is that for certain families of kernels, there exist extremely efficient methods of factorizing the covariance matrices. Whereas GP fitting typically scales with the number of datapoints $N$ as $N^3$, celerite is able to do everything in order $N$ (!!!) This is a huge advantage, especially for datasets with tens or hundreds of thousands of data points. Using george or any homebuilt GP model for datasets larger than about 10,000 points is simply intractable, but with celerite you can do it in a breeze.
Repeat the timing tests, but this time using celerite. Note that the Exponential Squared Kernel is not available in celerite, because it doesn't have the special form needed to make its factorization fast. Instead, use the Matern 3/2 kernel, which is qualitatively similar, and which can be approximated quite well in terms of the celerite basis functions:
python
kernel = celerite.terms.Matern32Term(np.log(1), np.log(1))
Note that celerite accepts the log of the amplitude and the log of the timescale. Other than this, you should be able to compute the likelihood and draw a sample with the same syntax as george.
How much faster did it run?
End of explanation
import matplotlib.pyplot as plt
from celerite.modeling import Model
import os
# Define the model
class MeanModel(Model):
parameter_names = ("depth", "t0", "dur")
def get_value(self, t):
return -self.depth * np.exp(-0.5 * (t - self.t0) ** 2 / (0.2 * self.dur) ** 2)
mean_model = MeanModel(depth=0.5, t0=0.05, dur=0.7)
mean_model.parameter_bounds = [(0, 1.0), (-0.1, 0.4), (0.1, 1.0)]
true_params = mean_model.get_parameter_vector()
# Simuate the data
np.random.seed(71)
x = np.sort(np.random.uniform(-1, 1, 70))
yerr = np.random.uniform(0.075, 0.1, len(x))
K = 0.2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 10.5)
K[np.diag_indices(len(x))] += yerr ** 2
y = np.random.multivariate_normal(mean_model.get_value(x), K)
y -= np.nanmedian(y)
# Plot the data
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
t = np.linspace(-1, 1, 1000)
plt.plot(t, mean_model.get_value(t))
plt.ylabel(r"$y$")
plt.xlabel(r"$t$")
plt.xlim(-1, 1)
plt.gca().yaxis.set_major_locator(plt.MaxNLocator(5))
plt.title("simulated data");
# Save it
X = np.hstack((x.reshape(-1, 1), y.reshape(-1, 1), yerr.reshape(-1, 1)))
if not (os.path.exists("data")):
os.mkdir("data")
np.savetxt("data/sample_transit.txt", X)
import matplotlib.pyplot as plt
t, y, yerr = np.loadtxt("data/sample_transit.txt", unpack=True)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlabel("time")
plt.ylabel("relative flux");
from celerite.modeling import Model
from scipy.optimize import minimize
# Define the transit model as a celerite `Model`
class MeanModel(Model):
parameter_names = ("depth", "t0", "dur")
def get_value(self, t):
return -self.depth * np.exp(-0.5 * (t - self.t0) ** 2 / (0.2 * self.dur) ** 2)
# Instantiate it with some guesses (which are actually the true values in this case!)
mean_model = MeanModel(depth=0.5, t0=0.05, dur=0.7)
mean_model.parameter_bounds = [(0, 1.0), (-0.1, 0.4), (0.1, 1.0)]
true_params = mean_model.get_parameter_vector()
# Set up the GP model
kernel = terms.RealTerm(log_a=np.log(np.var(y)), log_c=0)
gp = celerite.GP(kernel, mean=mean_model, fit_mean=True)
gp.compute(x, yerr)
print("Initial log-likelihood: {0}".format(gp.log_likelihood(y)))
# Define a cost function
def neg_log_like(params, y, gp):
gp.set_parameter_vector(params)
return -gp.log_likelihood(y)
def grad_neg_log_like(params, y, gp):
gp.set_parameter_vector(params)
return -gp.grad_log_likelihood(y)[1]
# Fit for the maximum likelihood parameters
initial_params = gp.get_parameter_vector()
bounds = gp.get_parameter_bounds()
soln = minimize(neg_log_like, initial_params,
method="L-BFGS-B", bounds=bounds, args=(y, gp))
gp.set_parameter_vector(soln.x)
print("Final log-likelihood: {0}".format(-soln.fun))
# Make the maximum likelihood prediction
t = np.linspace(-1, 1, 500)
mu, var = gp.predict(y, t, return_var=True)
std = np.sqrt(var)
# Plot the data
color = "#ff7f0e"
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(t, mu, color=color)
plt.fill_between(t, mu+std, mu-std, color=color, alpha=0.3, edgecolor="none")
plt.ylabel(r"$y$")
plt.xlabel(r"$t$")
plt.xlim(-1, 1)
plt.gca().yaxis.set_major_locator(plt.MaxNLocator(5))
plt.title("maximum likelihood prediction");
def log_probability(params):
gp.set_parameter_vector(params)
lp = gp.log_prior()
if not np.isfinite(lp):
return -np.inf
try:
return gp.log_likelihood(y) + lp
except celerite.solver.LinAlgError:
return -np.inf
import emcee
initial = np.array(soln.x)
ndim, nwalkers = len(initial), 32
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability)
print("Running burn-in...")
p0 = initial + 1e-8 * np.random.randn(nwalkers, ndim)
p0, lp, _ = sampler.run_mcmc(p0, 1000)
print("Running production...")
sampler.reset()
sampler.run_mcmc(p0, 2000);
# Plot the data.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
# Plot 24 posterior samples.
samples = sampler.flatchain
for s in samples[np.random.randint(len(samples), size=24)]:
gp.set_parameter_vector(s)
mu = gp.predict(y, t, return_cov=False)
plt.plot(t, mu, color=color, alpha=0.3)
plt.ylabel(r"$y$")
plt.xlabel(r"$t$")
plt.xlim(-1, 1)
plt.gca().yaxis.set_major_locator(plt.MaxNLocator(5))
plt.title("posterior predictions");
import corner
names = gp.get_parameter_names()
cols = mean_model.get_parameter_names()
inds = np.array([names.index("mean:"+k) for k in cols])
corner.corner(sampler.flatchain[:, inds], truths=true_params,
labels=[r"depth", r"$t_0$", r"dur"]);
Explanation: <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 3</h1>
</div>
Let's use celerite for a real application: fitting an exoplanet transit model in the presence of correlated noise.
Below is a (fictitious) light curve for a star with a transiting planet. There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\sigma$. Your task is to verify this claim.
Assume you have no prior information on the planet other than the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters.
Fit the transit with a simple inverted Gaussian with three free parameters:
python
def transit_shape(depth, t0, dur):
return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)
Read the celerite docs to figure out how to solve this problem efficiently.
HINT: I borrowed heavily from this tutorial, so you might want to take a look at it...
End of explanation |
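One simple way to address the 3 $\sigma$ question is to ask where $t_0 = 0$ falls in the posterior samples drawn above. The check below is only a sketch and assumes the sampler, names, and inds variables from the previous cells (with inds[1] pointing at the t0 column):
# Rough significance check for the claim that a transit at t = 0 is ruled out.
t0_samples = sampler.flatchain[:, inds[1]]
t0_mean, t0_std = t0_samples.mean(), t0_samples.std()
print("t0 = {0:.3f} +/- {1:.3f}".format(t0_mean, t0_std))
print("t0 = 0 lies {0:.1f} sigma from the posterior mean".format(abs(t0_mean) / t0_std))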
12,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MTA Subway Stations dataset cleaning
In this notebook we will clean the Subway Stations dataset made available by MTA.
Let's start by opening and examining it.
Step1: Let's extract the latitude and longitude from the dataset. For that we will use add_coord_columns() which is defined in coordinates.py. Notice that the coordinates are reversed as in (longitude, latitude).
Step2: Now let's clean the DataFrame.
Step3: Let's quickly plot the stations' coordinates to get a feel for their geographical location
Step4: The interactive map is available here.
Now let's just save it as a pickle binary file for later use in the recommender notebook. | Python Code:
import pandas as pd
stations = pd.read_csv('data/DOITT_SUBWAY_STATION_01_13SEPT2010.csv')
stations.head(4)
Explanation: MTA Subway Stations dataset cleaning
In this notebook we will clean the Subway Stations dataset made available by MTA.
Let's start by opening and examining it.
End of explanation
import coordinates as coord
coord.add_coord_columns(stations, 'the_geom', sep=' ', _reversed=True)
stations.loc[:, ('latitude', 'longitude')].head()
Explanation: Let's extract the latitude and longitude from the dataset. For that we will use add_coord_columns() which is defined in coordinates.py. Notice that the coordinates are reversed as in (longitude, latitude).
End of explanation
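The coordinates.py module is not included in this notebook. Purely as an illustration, a minimal version of such a helper might look like the sketch below, assuming the_geom holds WKT strings such as 'POINT (-73.99 40.73)'; the regular expression and column handling here are guesses, not the actual implementation:
# Hypothetical sketch of a coordinate-parsing helper (not the real coordinates.py).
def add_coord_columns_sketch(df, geom_col, sep=' ', _reversed=False):
    # Pull out the "lon lat" (or "lat lon") text between the parentheses.
    pairs = df[geom_col].str.extract(r'\(([^)]+)\)', expand=False).str.split(sep)
    first = pairs.str[0].astype(float)
    second = pairs.str[1].astype(float)
    if _reversed:
        # Values are stored as (longitude, latitude).
        df['longitude'], df['latitude'] = first, second
    else:
        df['latitude'], df['longitude'] = first, second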
stations.rename(columns={'NAME': 'station', 'LINE': 'lines', 'NOTES': 'notes'}, inplace=True)
relevant_cols = ['station', 'latitude', 'longitude', 'lines', 'notes']
stations_cleaned = stations.loc[:, relevant_cols]
stations_cleaned.sort_values(by='station', inplace=True)
stations_cleaned.head()
Explanation: Now let's clean the DataFrame.
End of explanation
!pip install folium
import folium
stations_map = folium.Map([40.729, -73.9], zoom_start=11, tiles='CartoDB positron', width='60%')
for i, station in stations_cleaned.iterrows():
marker = folium.CircleMarker([station['latitude'], station['longitude']],
popup=station['station'], color='FireBrick',
fill_color='FireBrick', radius=2)
marker.add_to(stations_map)
stations_map.save('maps/all_entrances.html')
stations_map
Explanation: Let's quickly plot the stations' coordinates to get a feel for their geographical location:
End of explanation
stations_cleaned.to_pickle('pickle/stations_locations.p')
Explanation: The interactive map is available here.
Now let's just save it as a pickle binary file for later use in the recommender notebook.
End of explanation |
12,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 5.2 - Using your own images
In the next part of the lab we will download another set of images from the web and format them for use with a Convolutional Neural Network (CNN). In this example we will use cat and dog images from a recent competition on Kaggle but you will be able to follow the same process to import and format your own sets of images and use them to solve your own image classification problems if you wish.
Let's begin by importing some of the libraries we will be using in this lab
Step1: Go to https
Step2: Now we will look through each folder and generate a data set of properly formatted image data matched with the proper category label.
Step3: The process of loading all the image data and putting them into the data array will take some time, so be patient and wait for the cell to finish running before continuing with the rest of the notebook.
If you get an error saying the kernel has crashed, you are probably running out of RAM. The entire data array with all image information needs to be stored dynamically in your RAM while the process is running, so depending on your computer's available RAM, using too many images or too high a resolution can cause the RAM to fill up completely before the process has finished running, which will unfortunately cause Python to crash. If you run into this issue, try setting a lower target resolution for the images or loading fewer images from the folder.
Once the data is loaded, we will shuffle the whole dataset to ensure random distribution of both categories.
Step4: Next we will make two new blank numpy arrays for both the feature (X) and target (y) data, and fill them with data from the data array. It might seem redundant to first load the data into a Python array and then transfer them to the numpy arrays we actually want. However, Python arrays have a more flexible data structure which allows us to fill the data set without first knowing how many images we have, and lets us keep the feature and target data together for each sample. This makes it easier to shuffle the entire data set in one move, and makes the process more flexible for other sets of images.
Step5: Let's make sure the data set has been properly imported and formatted by visualizing one of the images in the X feature dataset and printing the corresponding category from the y target dataset.
Step6: Now we will split both the X and y data sets by an arbitrary factor to create separate training and test sets. As before we will use the first 70% of the data for training, and the remaining 30% of the data for testing.
Step7: Finally, we will use the pickle library to save these datasets out to a local file.
The pickle library is extremely useful for saving the state of variables in your Python programs for later reuse. The library is able to take variables of any data type and output them to efficiently compressed local binary files. When you need the data again you can use the pickle library to reload the variables from the generated file. This is especially useful for storing sets of data that you may want to reuse several times, but take a long time to produce. This way you won't need to run the process in this notebook each time you want to use the images to train a model.
Warning | Python Code:
%matplotlib inline
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
import numpy as np
from scipy import misc
import os
import random
import pickle
Explanation: Lab 5.2 - Using your own images
In the next part of the lab we will download another set of images from the web and format them for use with a Convolutional Neural Network (CNN). In this example we will use cat and dog images from a recent competition on Kaggle but you will be able to follow the same process to import and format your own sets of images and use them to solve your own image classification problems if you wish.
Let's begin by importing some of the libraries we will be using in this lab:
End of explanation
imageFolder = "-catsdogs"
folders = os.listdir(imageFolder)
num_categories = len(folders)
print folders
Explanation: Go to https://www.kaggle.com/c/dogs-vs-cats/data and download the "train" dataset only to your computer. You will have to register for a Kaggle account before you can download the data. Kaggle is an online repository for Machine Learning (ML) and Artificial Intelligence (AI) competitions and is a great resource for getting data to test your learning algorithms, and to keep up with the state-of-the-art in the ML and AI fields.
Once the train.zip file has been downloaded, uncompress it to a folder on your computer. The folder contains 25,000 images named according to whether they are a 'cat' or 'dog'. To make sure that these images work with the code below, create a new folder in the week-5 folder in your local repository (the same folder that contains this notebook file) called -catsdogs. Notice the dash (-) before the name; this is important so that Github does not sync the images to your account (which is not necessary and would take a really long time). Within this folder, create two new folders called 0 and 1. Your folder structure should look like this:
.
├── dmc
| ├── notebooks
| | └── week-5
| | | └── -catsdogs
| | | | └── 0
| | | | └── 1
Finally, move all the cat images into the 0 folder, and all dog images into the 1 folder. From now on, we will consider the category 0 to represent cat and the category 1 to represent dog.
Next, we will use the os library to find the folders inside the main -catsdogs folder. This will make the code extensible to image recognition problems with any number of categories. In this case we only have two categories (cats and dogs) but you can extend it to more categories simply by adding more folders with images and labeling the folders sequentially starting with 0.
End of explanation
# specify desired image properties
# in this case we want black and white square images 64x64 pixels in size
image_dim = 1 # black and white
image_size = 64
# create an empty array to store the image data
data = []
# look inside each folder which represents the categories of our data
for folder in folders:
# find the files within each folder
fileNames = os.listdir("/".join([imageFolder, folder]))
# for each file, load and process each image
# in this case we limit the number of images used per cateogry to 10,000
# to prevent overloading our RAM memory
for fileName in fileNames[:10000]:
# read in the image data into a numpy array
img = misc.imread("/".join([imageFolder, folder, fileName]))
# if the image contains more than one color channel,
# take only the first channel (in effect, convert it to black and white)
if image_dim == 1 and len(img.shape) > 2:
img = img[:,:,0] # convert to black and white
# resize to target resolution if necessary
if img.shape[0] != image_size or img.shape[1] != image_size:
img = misc.imresize(img, (image_size, image_size), interp='nearest')
# normalize data to have mean 0 and standard deviation 1
# then rescale it to roughly the range 0-1
img = (img - img.mean()) / img.std() / 4 + 0.5
# add the image data and the associated category
# (which is stored in the folder variable) to the data set
# for this to work you need to make sure your folders
# are named sequentially starting with 0
data.append([img, folder])
print "Load data complete"
Explanation: Now we will look through each folder and generate a data set of properly formatted image data matched with the proper category label.
End of explanation
random.shuffle(data)
Explanation: The process of loading all the image data and putting them into the data array will take some time, so be patient and wait for the cell to finish running before continuing with the rest of the notebook.
If you get an error saying the kernel has crashed, you are probably running out of RAM. The entire data array with all image information needs to be stored dynamically in your RAM while the process is running, so depending on your computer's available RAM, using too many images or too high a resolution can cause the RAM to fill up completely before the process has finished running, which will unfortunately cause Python to crash. If you run into this issue, try setting a lower target resolution for the images or loading fewer images from the folder.
Once the data is loaded, we will shuffle the whole dataset to ensure random distribution of both categories.
End of explanation
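Before loading, it can help to estimate how large the final feature array will be so you know whether it fits in memory. This is only a back-of-the-envelope sketch based on the settings above (float32 means 4 bytes per pixel; the raw images held temporarily in the data list will add to this):
# Rough RAM estimate for the X array: images x height x width x 4 bytes
num_images = 2 * 10000 # two categories, up to 10,000 images each
bytes_needed = num_images * image_size * image_size * 4
print "approximate size of X: %0.2f GB" % (bytes_needed / 1e9)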
X = np.ndarray((len(data), image_size, image_size), dtype=np.float32)
y = np.ndarray((len(data), 1), dtype=np.int32)
for i, d in enumerate(data):
X[i] = d[0]
y[i] = d[1]
Explanation: Next we will make two new blank numpy arrays for both the feature (X) and target (y) data, and fill them with data from the data array. It might seem redundant to first load the data into a Python array and then transfer them to the numpy arrays we actually want. However, Python arrays have a more flexible data structure which allows us to fill the data set without first knowing how many images we have, and lets us keep the feature and target data together for each sample. This makes it easier to shuffle the entire data set in one move, and makes the process more flexible for other sets of images.
End of explanation
img_index = 2
img = X[img_index]
print "image dimensions:", img.shape
print "target category:", (['cat', 'dog'][y[img_index][0]])
imshow(img, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 1, interpolation='nearest')
plt.axis('off')
plt.show()
Explanation: Let's make sure the data set has been properly imported and formatted by visualizing one of the images in the X feature dataset and printing the corresponding category from the y target dataset.
End of explanation
trainingSplit = int(.7 * X.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
Explanation: Now we will split both the X and y data sets by an arbitrary factor to create separate training and test sets. As before we will use the first 70% of the data for training, and the remaining 30% of the data for testing.
End of explanation
pickle_file = imageFolder + '.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'X_train': X_train,
'y_train': y_train,
'X_test': X_test,
'y_test': y_test,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print 'Unable to save data to', pickle_file, ':', e
raise
statinfo = os.stat(pickle_file)
print 'Saved data to', pickle_file
print 'Compressed pickle size:', statinfo.st_size
Explanation: Finally, we will use the pickle library to save these datasets out to a local file.
The pickle library is extremely useful for saving the state of variables in your Python programs for later reuse. The library is able to take variables of any data type and output them to efficiently compressed local binary files. When you need the data again you can use the pickle library to reload the variables from the generated file. This is especially useful for storing sets of data that you may want to reuse several times, but take a long time to produce. This way you won't need to run the process in this notebook each time you want to use the images to train a model.
Warning: the saved dataset with 10,000 images per category will be over 300mb, so make sure you have enough space on your hard drive before running the following cell:
End of explanation |
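For reference, reloading the arrays later is just the reverse operation. The sketch below assumes the same pickle_file path used above:
# Reload the saved datasets from the pickle file.
with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
X_train = save['X_train']
y_train = save['y_train']
X_test = save['X_test']
y_test = save['y_test']
del save # free the temporary dictionary
print 'Training set:', X_train.shape, y_train.shape
print 'Test set:', X_test.shape, y_test.shape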
12,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
kneed -- knee detection in Python
For the purposes of the walkthrough, import DataGenerator to create simulated datasets.
In practice, the KneeLocator class will be used to identify the knee point.
Step1: The knee is located by passing x and y values to knee_locator.
S is the sensitivity parameter
curve='concave'
Step2: There are plotting functions to visualize the knee point on the raw data and the normalized data.
Average Knee for NoisyGaussian from 50 random iterations
Step3: Test all type of functions
Step4: Polynomial line fit
An example of a "bumpy" line where the traditional interp1d spline fitting method does not provide the best estimate for the point of maximum curvature.
We demonstrate that setting the parameter interp_method='polynomial' will choose the right point by smoothing the line. | Python Code:
%matplotlib inline
from kneed.data_generator import DataGenerator as dg
from kneed.knee_locator import KneeLocator
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
x = [3.07, 3.38, 3.55, 3.68, 3.78, 3.81, 3.85, 3.88, 3.9, 3.93]
y = [0.0, 0.3, 0.47, 0.6, 0.69, 0.78, 0.845, 0.903, 0.95, 1.0]
kl = KneeLocator(x, y, S=1.0, curve='convex', direction='increasing', interp_method='interp1d')
kl.x_normalized
np.diff(kl.x_normalized).mean()
np.diff(x).mean()
from scipy.signal import argrelextrema
argrelextrema(kl.y_difference, np.greater)
argrelextrema(kl.y_difference, np.less)
kl.y_difference_maxima
plt.plot(kl.x_normalized, kl.y_normalized);
plt.plot(kl.x_difference, kl.y_difference);
np.random.seed(23) # only for the walkthrough
x,y = dg.noisy_gaussian(N=1000)
x[:5],y[:5]
Explanation: kneed -- knee detection in Python
For the purposes of the walkthrough, import DataGenerator to create simulated datasets.
In practice, the KneeLocator class will be used to identify the knee point.
End of explanation
kneedle = KneeLocator(x, y, S=1.0, curve='concave', direction='increasing', interp_method='polynomial')
kneedle.plot_knee_normalized()
kneedle.plot_knee()
kneedle.knee
Explanation: The knee is located by passing x and y values to knee_locator.
S is the sensitivity parameter
curve='concave'
End of explanation
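Since S is only mentioned in passing above, here is a quick sketch (reusing the same x and y) of how changing the sensitivity can move the detected knee; larger values of S are more conservative and tend to declare the knee later, if at all:
# Sweep the sensitivity parameter and report the detected knee for each value.
for s in [1.0, 5.0, 10.0]:
    kl_s = KneeLocator(x, y, S=s, curve='concave', direction='increasing',
                       interp_method='polynomial')
    print("S = {0}: knee at {1}".format(s, kl_s.knee))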
knees = []
for i in range(50):
x,y = dg.noisy_gaussian(N=1000)
kneedle = KneeLocator(x, y, direction='increasing', curve='concave', interp_method='polynomial')
knees.append(kneedle.knee)
np.mean(knees)
Explanation: There are plotting functions to visualize the knee point on the raw data and the normalized data.
Average Knee for NoisyGaussian from 50 random iterations
End of explanation
x = np.arange(0,10)
y_convex_inc = np.array([1,2,3,4,5,10,15,20,40,100])
y_convex_dec = y_convex_inc[::-1]
y_concave_dec = 100 - y_convex_inc
y_concave_inc = 100 - y_convex_dec
kn = KneeLocator(x, y_convex_inc, curve='convex')
knee_yconvinc = kn.knee
kn = KneeLocator(x, y_convex_dec, curve='convex', direction='decreasing')
knee_yconvdec = kn.knee
kn = KneeLocator(x, y_concave_inc, curve='concave')
knee_yconcinc = kn.knee
kn = KneeLocator(x, y_concave_dec, curve='concave', direction='decreasing')
knee_yconcdec = kn.knee
f, axes = plt.subplots(2, 2, figsize=(10,10));
yconvinc = axes[0][0]
yconvdec = axes[0][1]
yconcinc = axes[1][0]
yconcdec = axes[1][1]
sns.lineplot(x, y_convex_inc, ax=axes[0][0])
yconvinc.vlines(x=knee_yconvinc, ymin=0, ymax=100, linestyle='--')
yconvinc.set_title("curve='convex', direction='increasing'")
sns.lineplot(x, y_convex_dec, ax=axes[0][1])
yconvdec.vlines(x=knee_yconvdec, ymin=0, ymax=100, linestyle='--')
yconvdec.set_title("curve='convex', direction='decreasing'")
sns.lineplot(x, y_concave_inc, ax=axes[1][0])
yconcinc.vlines(x=knee_yconcinc, ymin=0, ymax=100, linestyle='--')
yconcinc.set_title("curve='concave', direction='increasing'")
sns.lineplot(x, y_concave_dec, ax=axes[1][1])
yconcdec.vlines(x=knee_yconcdec, ymin=0, ymax=100, linestyle='--')
yconcdec.set_title("curve='concave', direction='decreasing'");
Explanation: Test all types of functions
End of explanation
x = list(range(90))
y = [
7304.99, 6978.98, 6666.61, 6463.2, 6326.53, 6048.79, 6032.79, 5762.01, 5742.77,
5398.22, 5256.84, 5226.98, 5001.72, 4941.98, 4854.24, 4734.61, 4558.75, 4491.1,
4411.61, 4333.01, 4234.63, 4139.1, 4056.8, 4022.49, 3867.96, 3808.27, 3745.27,
3692.34, 3645.55, 3618.28, 3574.26, 3504.31, 3452.44, 3401.2, 3382.37, 3340.67,
3301.08, 3247.59, 3190.27, 3179.99, 3154.24, 3089.54, 3045.62, 2988.99, 2993.61,
2941.35, 2875.6, 2866.33, 2834.12, 2785.15, 2759.65, 2763.2, 2720.14, 2660.14,
2690.22, 2635.71, 2632.92, 2574.63, 2555.97, 2545.72, 2513.38, 2491.57, 2496.05,
2466.45, 2442.72, 2420.53, 2381.54, 2388.09, 2340.61, 2335.03, 2318.93, 2319.05,
2308.23, 2262.23, 2235.78, 2259.27, 2221.05, 2202.69, 2184.29, 2170.07, 2160.05,
2127.68, 2134.73, 2101.96, 2101.44, 2066.4, 2074.25, 2063.68, 2048.12, 2031.87
]
kneedle = KneeLocator(x, y, S=1.0, curve='convex', direction='decreasing')
kneedle.plot_knee_normalized()
kneedle = KneeLocator(x, y, S=1.0, curve='convex', direction='decreasing', interp_method='polynomial')
kneedle.plot_knee_normalized()
Explanation: Polynomial line fit
An example of a "bumpy" line where the traditional interp1d spline fitting method does not provide the best estimate for the point of maximum curvature.
We demonstrate that setting the parameter interp_method='polynomial' will choose the right point by smoothing the line.
End of explanation |
12,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TF-Slim Walkthrough
This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks.
Table of contents
<a href="#Install">Installation and setup</a><br>
<a href='#MLP'>Creating your first neural network with TF-Slim</a><br>
<a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br>
<a href='#CNN'>Training a convolutional neural network (CNN)</a><br>
<a href='#Pretained'>Using pre-trained models</a><br>
Installation and setup
<a id='Install'></a>
As of 8/28/16, the latest stable release of TF is r0.10, which does not contain the latest version of slim.
To obtain the latest version of TF-Slim, please install the most recent nightly build of TF
as explained here.
To use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim before running this notebook, so that these files are in your python path.
To check that these two steps worked, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory.
Step2: Creating your first neural network with TF-Slim
<a id='MLP'></a>
Below we give some code to create a simple multilayer perceptron (MLP) which can be used
for regression problems. The model has 2 hidden layers.
The output is a single node.
When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. When a node which corresponds to a layer with adjustable parameters (eg., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. (We will discuss how to train the parameters later.)
We use variable scope to put all the nodes under a common name,
so that the graph has some hierarchical structure.
This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related
variables.
The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.)
We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time,
we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being
constructed for training or testing, since the computational graph will be different in the two cases
(although the variables, storing the model parameters, will be shared, since they have the same name/scope).
Step3: Let's create the model and examine its structure.
We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified.
Step4: Let's create some 1d regression data .
We will train and test the model on some noisy observations of a nonlinear function.
Step5: Let's fit the model to the data
The user has to specify the loss function and the optimizer, and slim does the rest.
In particular, the slim.learning.train function does the following
Step6: Training with multiple loss functions.
Sometimes we have multiple objectives we want to simultaneously optimize.
In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example,
but we show how to compute it.)
Step7: Let's load the saved model and use it for prediction.
Step8: Let's compute various evaluation metrics on the test set.
In TF-Slim terminology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set.
Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries.
After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is useful for large datasets.) Finally, we print the final value of each metric.
Step9: Reading Data with TF-Slim
<a id='ReadingTFSlimDatasets'></a>
Reading data with TF-Slim has two main components
Step10: Display some of the data.
Step11: Convolutional neural nets (CNNs).
<a id='CNN'></a>
In this section, we show how to train an image classifier using a simple CNN.
Define the model.
Below we define a simple CNN. Note that the output layer is a linear function - we will apply the softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing).
Step12: Apply the model to some randomly generated images.
Step14: Train the model on the Flowers dataset.
Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in
learning.py. First, we'll create a function, load_batch, that loads batches of data from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results.
Step15: Evaluate some metrics.
As we discussed above, we can compute various metrics besides the loss.
Below we show how to compute prediction accuracy of the trained model, as well as top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in tensorboard.)
Step16: Using pre-trained models
<a id='Pretrained'></a>
Neural nets work best when they have many parameters, making them very flexible function approximators.
However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here.
You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes.
Take into account that VGG and ResNet final layers have only 1000 outputs rather than 1001. The ImageNet dataset provided has an empty background class which can be used to fine-tune the model to other tasks. VGG and ResNet models provided here don't use that class. We provide two examples of using pretrained models
Step17: Apply Pre-trained Inception V1 model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this.
Step18: Download the VGG-16 checkpoint
Step19: Apply Pre-trained VGG-16 model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this. Pay attention to the difference caused by 1000 classes instead of 1001.
Step21: Fine-tune the model on a different set of labels.
We will fine tune the inception model on the Flowers dataset.
Step22: Apply fine tuned model to some images. | Python Code:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import math
import numpy as np
import tensorflow as tf
import time
from datasets import dataset_utils
# Main slim library
slim = tf.contrib.slim
Explanation: TF-Slim Walkthrough
This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks.
Table of contents
<a href="#Install">Installation and setup</a><br>
<a href='#MLP'>Creating your first neural network with TF-Slim</a><br>
<a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br>
<a href='#CNN'>Training a convolutional neural network (CNN)</a><br>
<a href='#Pretained'>Using pre-trained models</a><br>
Installation and setup
<a id='Install'></a>
As of 8/28/16, the latest stable release of TF is r0.10, which does not contain the latest version of slim.
To obtain the latest version of TF-Slim, please install the most recent nightly build of TF
as explained here.
To use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim before running this notebook, so that these files are in your python path.
To check that these two steps worked, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory.
End of explanation
def regression_model(inputs, is_training=True, scope="deep_regression"):
Creates the regression model.
Args:
inputs: A node that yields a `Tensor` of size [batch_size, dimensions].
is_training: Whether or not we're currently training the model.
scope: An optional variable_op scope for the model.
Returns:
predictions: 1-D `Tensor` of shape [batch_size] of responses.
end_points: A dict of end points representing the hidden layers.
with tf.variable_scope(scope, 'deep_regression', [inputs]):
end_points = {}
# Set the default weight _regularizer and acvitation for each fully_connected layer.
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(0.01)):
# Creates a fully connected layer from the inputs with 32 hidden units.
net = slim.fully_connected(inputs, 32, scope='fc1')
end_points['fc1'] = net
# Adds a dropout layer to prevent over-fitting.
net = slim.dropout(net, 0.8, is_training=is_training)
# Adds another fully connected layer with 16 hidden units.
net = slim.fully_connected(net, 16, scope='fc2')
end_points['fc2'] = net
# Creates a fully-connected layer with a single hidden unit. Note that the
# layer is made linear by setting activation_fn=None.
predictions = slim.fully_connected(net, 1, activation_fn=None, scope='prediction')
end_points['out'] = predictions
return predictions, end_points
Explanation: Creating your first neural network with TF-Slim
<a id='MLP'></a>
Below we give some code to create a simple multilayer perceptron (MLP) which can be used
for regression problems. The model has 2 hidden layers.
The output is a single node.
When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. When a node which corresponds to a layer with adjustable parameters (eg., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. (We will discuss how to train the parameters later.)
We use variable scope to put all the nodes under a common name,
so that the graph has some hierarchical structure.
This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related
variables.
The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.)
We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time,
we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being
constructed for training or testing, since the computational graph will be different in the two cases
(although the variables, storing the model parameters, will be shared, since they have the same name/scope).
End of explanation
with tf.Graph().as_default():
# Dummy placeholders for arbitrary number of 1d inputs and outputs
inputs = tf.placeholder(tf.float32, shape=(None, 1))
outputs = tf.placeholder(tf.float32, shape=(None, 1))
# Build model
predictions, end_points = regression_model(inputs)
# Print name and shape of each tensor.
print "Layers"
for k, v in end_points.items():
print 'name = {}, shape = {}'.format(v.name, v.get_shape())
# Print name and shape of parameter nodes (values not yet initialized)
print "\n"
print "Parameters"
for v in slim.get_model_variables():
print 'name = {}, shape = {}'.format(v.name, v.get_shape())
Explanation: Let's create the model and examine its structure.
We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified.
End of explanation
def produce_batch(batch_size, noise=0.3):
xs = np.random.random(size=[batch_size, 1]) * 10
ys = np.sin(xs) + 5 + np.random.normal(size=[batch_size, 1], scale=noise)
return [xs.astype(np.float32), ys.astype(np.float32)]
x_train, y_train = produce_batch(200)
x_test, y_test = produce_batch(200)
plt.scatter(x_train, y_train)
Explanation: Let's create some 1d regression data.
We will train and test the model on some noisy observations of a nonlinear function.
End of explanation
def convert_data_to_tensors(x, y):
inputs = tf.constant(x)
inputs.set_shape([None, 1])
outputs = tf.constant(y)
outputs.set_shape([None, 1])
return inputs, outputs
# The following snippet trains the regression model using a mean_squared_error loss.
ckpt_dir = '/tmp/regression_model/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
inputs, targets = convert_data_to_tensors(x_train, y_train)
# Make the model.
predictions, nodes = regression_model(inputs, is_training=True)
# Add the loss function to the graph.
loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions)
# The total loss is the uers's loss plus any regularization losses.
total_loss = slim.losses.get_total_loss()
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.005)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training inside a session.
final_loss = slim.learning.train(
train_op,
logdir=ckpt_dir,
number_of_steps=5000,
save_summaries_secs=5,
log_every_n_steps=500)
print("Finished training. Last batch loss:", final_loss)
print("Checkpoint saved in %s" % ckpt_dir)
Explanation: Let's fit the model to the data
The user has to specify the loss function and the optimizer, and slim does the rest.
In particular, the slim.learning.train function does the following:
For each iteration, evaluate the train_op, which updates the parameters using the optimizer applied to the current minibatch. Also, update the global_step.
Occasionally store the model checkpoint in the specified directory. This is useful in case your machine crashes - then you can simply restart from the specified checkpoint.
End of explanation
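As a quick sanity check that checkpoints were actually written to the log directory, you can ask TensorFlow for the most recent one (a small sketch, assuming the training run above completed):
# Print the path of the newest checkpoint produced by slim.learning.train.
print(tf.train.latest_checkpoint(ckpt_dir))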
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_train, y_train)
predictions, end_points = regression_model(inputs, is_training=True)
# Add multiple loss nodes.
mean_squared_error_loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions)
absolute_difference_loss = slim.losses.absolute_difference(predictions, targets)
# The following two ways to compute the total loss are equivalent
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = mean_squared_error_loss + absolute_difference_loss + regularization_loss
# Regularization Loss is included in the total loss by default.
# This is good for training, but not for testing.
total_loss2 = slim.losses.get_total_loss(add_regularization_losses=True)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op) # Will initialize the parameters with random weights.
total_loss1, total_loss2 = sess.run([total_loss1, total_loss2])
print('Total Loss1: %f' % total_loss1)
print('Total Loss2: %f' % total_loss2)
print('Regularization Losses:')
for loss in slim.losses.get_regularization_losses():
print(loss)
print('Loss Functions:')
for loss in slim.losses.get_losses():
print(loss)
Explanation: Training with multiple loss functions.
Sometimes we have multiple objectives we want to simultaneously optimize.
In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example,
but we show how to compute it.)
End of explanation
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
# Create the model structure. (Parameters will be loaded below.)
predictions, end_points = regression_model(inputs, is_training=False)
# Make a session which restores the old parameters from a checkpoint.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
inputs, predictions, targets = sess.run([inputs, predictions, targets])
plt.scatter(inputs, targets, c='r');
plt.scatter(inputs, predictions, c='b');
plt.title('red=true, blue=predicted')
Explanation: Let's load the saved model and use it for prediction.
End of explanation
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
predictions, end_points = regression_model(inputs, is_training=False)
# Specify metrics to evaluate:
names_to_value_nodes, names_to_update_nodes = slim.metrics.aggregate_metric_map({
'Mean Squared Error': slim.metrics.streaming_mean_squared_error(predictions, targets),
'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(predictions, targets)
})
# Make a session which restores the old graph parameters, and then run eval.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
metric_values = slim.evaluation.evaluation(
sess,
num_evals=1, # Single pass over data
eval_op=names_to_update_nodes.values(),
final_op=names_to_value_nodes.values())
names_to_values = dict(zip(names_to_value_nodes.keys(), metric_values))
for key, value in names_to_values.items():
print('%s: %f' % (key, value))
Explanation: Let's compute various evaluation metrics on the test set.
In TF-Slim terminology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set.
Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries.
After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is useful for large datasets.) Finally, we print the final value of each metric.
End of explanation
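To make the value_op/update_op distinction more concrete, here is a tiny standalone sketch with dummy constant tensors (the numbers are arbitrary and only for illustration):
# Minimal illustration of a streaming metric: update_op accumulates, value_op reads.
with tf.Graph().as_default():
    preds = tf.constant([[1.0], [2.0], [3.0]])
    targs = tf.constant([[1.5], [2.0], [2.5]])
    value_op, update_op = slim.metrics.streaming_mean_squared_error(preds, targs)
    with tf.Session() as sess:
        sess.run(tf.local_variables_initializer())  # newer name for tf.initialize_local_variables()
        sess.run(update_op)                         # accumulate statistics for this "batch"
        print('MSE so far: %f' % sess.run(value_op))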
import tensorflow as tf
from datasets import dataset_utils
url = "http://download.tensorflow.org/data/flowers.tar.gz"
flowers_data_dir = '/tmp/flowers'
if not tf.gfile.Exists(flowers_data_dir):
tf.gfile.MakeDirs(flowers_data_dir)
dataset_utils.download_and_uncompress_tarball(url, flowers_data_dir)
Explanation: Reading Data with TF-Slim
<a id='ReadingTFSlimDatasets'></a>
Reading data with TF-Slim has two main components: A
Dataset and a
DatasetDataProvider. The former is a descriptor of a dataset, while the latter performs the actions necessary for actually reading the data. Let's look at each one in detail:
Dataset
A TF-Slim
Dataset
contains descriptive information about a dataset necessary for reading it, such as the list of data files and how to decode them. It also contains metadata including class labels, the size of the train/test splits and descriptions of the tensors that the dataset provides. For example, some datasets contain images with labels. Others augment this data with bounding box annotations, etc. The Dataset object allows us to write generic code using the same API, regardless of the data content and encoding type.
TF-Slim's Dataset works especially well when the data is stored as a (possibly sharded)
TFRecords file, where each record contains a tf.train.Example protocol buffer.
TF-Slim uses a consistent convention for naming the keys and values inside each Example record.
DatasetDataProvider
A
DatasetDataProvider is a class which actually reads the data from a dataset. It is highly configurable to read the data in various ways that may make a big impact on the efficiency of your training process. For example, it can be single or multi-threaded. If your data is sharded across many files, it can read the files serially, or read from every file simultaneously.
Demo: The Flowers Dataset
For convenience, we've included scripts to convert several common image datasets into TFRecord format and have provided
the Dataset descriptor files necessary for reading them. We demonstrate how easy it is to use these datasets via the Flowers dataset below.
Download the Flowers Dataset
<a id='DownloadFlowers'></a>
We've made available a tarball of the Flowers dataset which has already been converted to TFRecord format.
End of explanation
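If you are curious about the key-naming convention mentioned above, you can peek at the features stored in a single record. This is only a sketch; it assumes the conversion produced shards matching flowers_train_*.tfrecord inside flowers_data_dir:
import glob
# Print the feature keys stored in the first record of one training shard.
shard = glob.glob(flowers_data_dir + '/flowers_train_*.tfrecord')[0]
record = next(tf.python_io.tf_record_iterator(shard))
example = tf.train.Example.FromString(record)
print(sorted(example.features.feature.keys()))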
from datasets import flowers
import tensorflow as tf
slim = tf.contrib.slim
with tf.Graph().as_default():
dataset = flowers.get_split('train', flowers_data_dir)
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32, common_queue_min=1)
image, label = data_provider.get(['image', 'label'])
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
for i in xrange(4):
np_image, np_label = sess.run([image, label])
height, width, _ = np_image.shape
class_name = name = dataset.labels_to_names[np_label]
plt.figure()
plt.imshow(np_image)
plt.title('%s, %d x %d' % (name, height, width))
plt.axis('off')
plt.show()
Explanation: Display some of the data.
End of explanation
def my_cnn(images, num_classes, is_training): # is_training is not used...
with slim.arg_scope([slim.max_pool2d], kernel_size=[3, 3], stride=2):
net = slim.conv2d(images, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.conv2d(net, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.flatten(net)
net = slim.fully_connected(net, 192)
net = slim.fully_connected(net, num_classes, activation_fn=None)
return net
Explanation: Convolutional neural nets (CNNs).
<a id='CNN'></a>
In this section, we show how to train an image classifier using a simple CNN.
Define the model.
Below we define a simple CNN. Note that the output layer is linear function - we will apply softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing).
End of explanation
import tensorflow as tf
with tf.Graph().as_default():
# The model can handle any input size because the first layer is convolutional.
# The size of the model is determined when image_node is first passed into the my_cnn function.
# Once the variables are initialized, the size of all the weight matrices is fixed.
# Because of the fully connected layers, this means that all subsequent images must have the same
# input size as the first image.
batch_size, height, width, channels = 3, 28, 28, 3
images = tf.random_uniform([batch_size, height, width, channels], maxval=1)
# Create the model.
num_classes = 10
logits = my_cnn(images, num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
# Initialize all the variables (including parameters) randomly.
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
# Run the init_op, evaluate the model outputs and print the results:
sess.run(init_op)
probabilities = sess.run(probabilities)
print('Probabilities Shape:')
print(probabilities.shape) # batch_size x num_classes
print('\nProbabilities:')
print(probabilities)
print('\nSumming across all classes (Should equal 1):')
print(np.sum(probabilities, 1)) # Each row sums to 1
Explanation: Apply the model to some randomly generated images.
End of explanation
from preprocessing import inception_preprocessing
import tensorflow as tf
slim = tf.contrib.slim
def load_batch(dataset, batch_size=32, height=299, width=299, is_training=False):
Loads a single batch of data.
Args:
dataset: The dataset to load.
batch_size: The number of images in the batch.
height: The size of each image after preprocessing.
width: The size of each image after preprocessing.
is_training: Whether or not we're currently training or evaluating.
Returns:
images: A Tensor of size [batch_size, height, width, 3], image samples that have been preprocessed.
images_raw: A Tensor of size [batch_size, height, width, 3], image samples that can be used for visualization.
labels: A Tensor of size [batch_size], whose values range between 0 and dataset.num_classes.
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32,
common_queue_min=8)
image_raw, label = data_provider.get(['image', 'label'])
# Preprocess image for usage by Inception.
image = inception_preprocessing.preprocess_image(image_raw, height, width, is_training=is_training)
# Preprocess the image for display purposes.
image_raw = tf.expand_dims(image_raw, 0)
image_raw = tf.image.resize_images(image_raw, [height, width])
image_raw = tf.squeeze(image_raw)
# Batch it up.
images, images_raw, labels = tf.train.batch(
[image, image_raw, label],
batch_size=batch_size,
num_threads=1,
capacity=2 * batch_size)
return images, images_raw, labels
from datasets import flowers
# This might take a few minutes.
train_dir = '/tmp/tfslim_model/'
print('Will save model to %s' % train_dir)
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
# Create the model:
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.summary.scalar('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
number_of_steps=1, # For speed, we just do 1 epoch
save_summaries_secs=1)
print('Finished training. Final batch loss %d' % final_loss)
Explanation: Train the model on the Flowers dataset.
Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in
learning.py. First, we'll create a function, load_batch, that loads batches of data from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results.
End of explanation
from datasets import flowers
# This might take a few minutes.
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.DEBUG)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=False)
predictions = tf.argmax(logits, 1)
# Define the metrics:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
'eval/Recall@5': slim.metrics.streaming_recall_at_k(logits, labels, 5),
})
print('Running evaluation Loop...')
checkpoint_path = tf.train.latest_checkpoint(train_dir)
metric_values = slim.evaluation.evaluate_once(
master='',
checkpoint_path=checkpoint_path,
logdir=train_dir,
eval_op=names_to_updates.values(),
final_op=names_to_values.values())
names_to_values = dict(zip(names_to_values.keys(), metric_values))
for name in names_to_values:
print('%s: %f' % (name, names_to_values[name]))
Explanation: Evaluate some metrics.
As we discussed above, we can compute various metrics besides the loss.
Below we show how to compute prediction accuracy of the trained model, as well as top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in tensorboard.)
End of explanation
from datasets import dataset_utils
url = "http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz"
checkpoints_dir = '/tmp/checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
Explanation: Using pre-trained models
<a id='Pretrained'></a>
Neural nets work best when they have many parameters, making them very flexible function approximators.
However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here.
You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes.
Take into account that VGG and ResNet final layers have only 1000 outputs rather than 1001. The ImageNet dataset provided has an empty background class which can be used to fine-tune the model to other tasks. VGG and ResNet models provided here don't use that class. We provide two examples of using pretrained models: Inception V1 and VGG-16 models to highlight this difference.
Download the Inception V1 checkpoint
End of explanation
import numpy as np
import os
import tensorflow as tf
import urllib2
from datasets import imagenet
from nets import inception
from preprocessing import inception_preprocessing
slim = tf.contrib.slim
image_size = inception.inception_v1.default_image_size
with tf.Graph().as_default():
url = 'https://upload.wikimedia.org/wikipedia/commons/7/70/EnglishCockerSpaniel_simon.jpg'
image_string = urllib2.urlopen(url).read()
image = tf.image.decode_jpeg(image_string, channels=3)
processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False)
probabilities = tf.nn.softmax(logits)
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
slim.get_model_variables('InceptionV1'))
with tf.Session() as sess:
init_fn(sess)
np_image, probabilities = sess.run([image, probabilities])
probabilities = probabilities[0, 0:]
sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]
plt.figure()
plt.imshow(np_image.astype(np.uint8))
plt.axis('off')
plt.show()
names = imagenet.create_readable_names_for_imagenet_labels()
for i in range(5):
index = sorted_inds[i]
print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index]))
Explanation: Apply Pre-trained Inception V1 model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this.
End of explanation
from datasets import dataset_utils
import tensorflow as tf
url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz"
checkpoints_dir = '/tmp/checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
Explanation: Download the VGG-16 checkpoint
End of explanation
import numpy as np
import os
import tensorflow as tf
import urllib2
from datasets import imagenet
from nets import vgg
from preprocessing import vgg_preprocessing
slim = tf.contrib.slim
image_size = vgg.vgg_16.default_image_size
with tf.Graph().as_default():
url = 'https://upload.wikimedia.org/wikipedia/commons/d/d9/First_Student_IC_school_bus_202076.jpg'
image_string = urllib2.urlopen(url).read()
image = tf.image.decode_jpeg(image_string, channels=3)
processed_image = vgg_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(vgg.vgg_arg_scope()):
# 1000 classes instead of 1001.
logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False)
probabilities = tf.nn.softmax(logits)
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'vgg_16.ckpt'),
slim.get_model_variables('vgg_16'))
with tf.Session() as sess:
init_fn(sess)
np_image, probabilities = sess.run([image, probabilities])
probabilities = probabilities[0, 0:]
sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]
plt.figure()
plt.imshow(np_image.astype(np.uint8))
plt.axis('off')
plt.show()
names = imagenet.create_readable_names_for_imagenet_labels()
for i in range(5):
index = sorted_inds[i]
# Shift the index of a class name by one.
print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index+1]))
Explanation: Apply Pre-trained VGG-16 model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this. Pay attention to the difference caused by 1000 classes instead of 1001.
End of explanation
# Note that this may take several minutes.
import os
from datasets import flowers
from nets import inception
from preprocessing import inception_preprocessing
slim = tf.contrib.slim
image_size = inception.inception_v1.default_image_size
def get_init_fn():
"""Returns a function run by the chief worker to warm-start the training."""
checkpoint_exclude_scopes=["InceptionV1/Logits", "InceptionV1/AuxLogits"]
exclusions = [scope.strip() for scope in checkpoint_exclude_scopes]
variables_to_restore = []
for var in slim.get_model_variables():
excluded = False
for exclusion in exclusions:
if var.op.name.startswith(exclusion):
excluded = True
break
if not excluded:
variables_to_restore.append(var)
return slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
variables_to_restore)
train_dir = '/tmp/inception_finetuned/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.summary.scalar('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
init_fn=get_init_fn(),
number_of_steps=2)
print('Finished training. Last batch loss %f' % final_loss)
Explanation: Fine-tune the model on a different set of labels.
We will fine tune the inception model on the Flowers dataset.
End of explanation
import numpy as np
import tensorflow as tf
from datasets import flowers
from nets import inception
slim = tf.contrib.slim
image_size = inception.inception_v1.default_image_size
batch_size = 3
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, images_raw, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
checkpoint_path = tf.train.latest_checkpoint(train_dir)
init_fn = slim.assign_from_checkpoint_fn(
checkpoint_path,
slim.get_variables_to_restore())
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
sess.run(tf.initialize_local_variables())
init_fn(sess)
np_probabilities, np_images_raw, np_labels = sess.run([probabilities, images_raw, labels])
for i in xrange(batch_size):
image = np_images_raw[i, :, :, :]
true_label = np_labels[i]
predicted_label = np.argmax(np_probabilities[i, :])
predicted_name = dataset.labels_to_names[predicted_label]
true_name = dataset.labels_to_names[true_label]
plt.figure()
plt.imshow(image.astype(np.uint8))
plt.title('Ground Truth: [%s], Prediction [%s]' % (true_name, predicted_name))
plt.axis('off')
plt.show()
Explanation: Apply fine tuned model to some images.
End of explanation |
12,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-1', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land ice model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
12,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute iterative reweighted TF-MxNE with multiscale time-frequency dictionary
The iterative reweighted TF-MxNE solver is a distributed inverse method
based on the TF-MxNE solver, which promotes focal (sparse) sources
Step1: Load somatosensory MEG data
Step2: Run iterative reweighted multidict TF-MxNE solver
Step3: Generate stc from dipoles
Step4: Show the evoked response and the residual for gradiometers | Python Code:
# Author: Mathurin Massias <[email protected]>
# Yousra Bekhti <[email protected]>
# Daniel Strohmeier <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import mne
from mne.datasets import somato
from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles
from mne.viz import plot_sparse_source_estimates
print(__doc__)
Explanation: Compute iterative reweighted TF-MxNE with multiscale time-frequency dictionary
The iterative reweighted TF-MxNE solver is a distributed inverse method
based on the TF-MxNE solver, which promotes focal (sparse) sources
:footcite:StrohmeierEtAl2015. The benefit of this approach is that:
it is spatio-temporal without assuming stationarity (sources properties
can vary over time),
activations are localized in space, time and frequency in one step,
the solver uses non-convex penalties in the TF domain, which results in a
solution less biased towards zero than when simple TF-MxNE is used,
using a multiscale dictionary allows to capture short transient
activations along with slower brain waves :footcite:BekhtiEtAl2016.
End of explanation
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
fwd_fname = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
'sub-{}_task-{}-fwd.fif'.format(subject, task))
condition = 'Unknown'
# Read evoked
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
reject = dict(grad=4000e-13, eog=350e-6)
picks = mne.pick_types(raw.info, meg=True, eog=True)
event_id, tmin, tmax = 1, -1., 3.
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
reject=reject, preload=True)
evoked = epochs.filter(1, None).average()
evoked = evoked.pick_types(meg=True)
evoked.crop(tmin=0.008, tmax=0.2)
# Compute noise covariance matrix
cov = mne.compute_covariance(epochs, rank='info', tmax=0.)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
Explanation: Load somatosensory MEG data
End of explanation
alpha, l1_ratio = 20, 0.05
loose, depth = 1, 0.95
# Use a multiscale time-frequency dictionary
wsize, tstep = [4, 16], [2, 4]
n_tfmxne_iter = 10
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio,
n_tfmxne_iter=n_tfmxne_iter, loose=loose,
depth=depth, tol=1e-3,
wsize=wsize, tstep=tstep, return_as_dipoles=True,
return_residual=True)
# Crop to remove edges
for dip in dipoles:
dip.crop(tmin=-0.05, tmax=0.3)
evoked.crop(tmin=-0.05, tmax=0.3)
residual.crop(tmin=-0.05, tmax=0.3)
Explanation: Run iterative reweighted multidict TF-MxNE solver
End of explanation
stc = make_stc_from_dipoles(dipoles, forward['src'])
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1, fig_name="irTF-MxNE (cond %s)"
% condition)
Explanation: Generate stc from dipoles
End of explanation
ylim = dict(grad=[-300, 300])
evoked.pick_types(meg='grad')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
proj=True)
residual.pick_types(meg='grad')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
proj=True)
Explanation: Show the evoked response and the residual for gradiometers
End of explanation |
12,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This tutorial is based on example in Jake Vanderplas' PyCon 2015 tutorial.
What is Machine Learning
Machine Learning is a subfield of computer science that utilizes statistics and mathemathical optimization to learn generalizable patterns from data. In machine learning, we develop models with adjustable parameters, so we can learn the parameters that best fit the data. In this tutorial, I will cover five commonly used algorithms in supervised learning.
It's a simple idea that has led to some impressive results.
The goal of this talk is for you to understand what these algorithms are learning from the data.
Many machine learning tutorials just show classifier's performance with no in-depth explanation of how the algorithm works.
Photo Credits @ Randy Olsen
Imagine you were taught how to build a birdhouse, but never taught how to use a hammer.
Step1: But I don't want scare people with spooky, scary math
Iris Dataset
The Iris dataset is a small dataset of 150 Iris flowers. For each data point we have the following features
Step2: For this example, we will only use the Sepal Length and Sepal Width feature, so we can plot the data in a 2D grid.
Step3: K-Nearest Neighbors
Step4: First, we'll create a random sample of 25 data points
Step5: Let's look at a random sample of the Iris dataset. Suppose I were to ask you to classify the green dot as either red, yellow, or blue. What would you say?
Step6: Your intuition is that the green point will have a label similar to the points surrounding it. This is the idea behind the K-Nearest Neighbors algorithm.
Start with test data point.
Calculate distance between test point and all data points in the training set.
Use the labels of the k closest points to assign a label to the test point.
Step7: We created a K-NN classifier that uses 13 neighbors to decide each test point. Let's see how it would label the space.
Step8: K-NN is easy to understand and implement, but because it requires us to test each new data point against each point in the training set, it is slow and scales poorly to larger datasets.
Logistic Regression
Step9: Logistic Regression is the classification cousin of linear regression. Our boundaries are made up of lines.
In logistic regression, we predict the likelihood of a given label y based on our features.
Step10: Compared to KNN, we can create more concise decision boundaries for the data points.
Step11: Logistic regression is quick and simple, but sometimes does not perform well due to the strict partitioning of the decision boundary.
Support Vector Machines
Step12: Well, we can put a line down the middle that separates them.
Step13: But now if we add another red ball, the boundary does not hold
Step14: Could this problem have been avoided? Is there a way to choose the line in the initial problem to prevent this?
The answer is yes! If we choose a line that has a large margin, we can confidently label new data points.
Step15: Ta-Da!
Step16: But let's say we have an arrangement of balls that no line can separate. What can we do?
Step17: One of the big breakthroughs in machine learning is the kernel function. Using the kernel function, we can map the input data into a new feature space.
Step18: After mapping our points to a higher dimensional space, we can map this line back down to our original space.
Step19: Support Vector Machine
Step20: Let's create a new dataset
Step21: If we want to separate the two cluster of points with lines, there are several lines we can choose from. How would we determine which line is the best line.
Step22: Well... we can look at how big of margin each line has. The larger the margin, the more "confident" we are that the data is better separated.
Step24: Now, let's create a Support Vector Machine
Step25: We can plot the decision boundary of the SVM along with its margin.
Step26: Now, you may be wondering why this algorithm is called a Support Vector Machine. If you look back at the plot, you will notice there are three data points on the margin.
Step27: These points are called the Support vectors. The support vectors are used by the algorithm to determine the margin boundary. Now, let's look at a more interactive demo.
Step28: Let's look at a different dataset. In this dataset, it is obvious that the data points are not linearly separable.
Step29: We can use a kernel function to map the data to a higher dimension.
This is a commonly used kernel called the Radial Basis Function kernel
Step30: Let's see what the data looks like in this higher dimensional space
Step31: Now, we can map this boundary back to our original space
Step32: SVMs are a mathematically beautiful algorithm that are guaranteed to find a wide margin decision boundary, but they do not perform well on large datasets due to scaling issues.
Decision Trees
Step33: Decision Trees are commonly used by business analyst because they are easy to understand.
Photo Credits
Step34: Like the previous algorithms, the decision tree can partition up the space of points.
Step35: Notice how the boundaries of the decision tree are more form-fitting compared to the decision boundaries of our previous algorithms. One issue with decision trees is that they overfit to the data.
Step36: The Decision Tree is easy to interpret, but if there are too many features, the tree may become too large very quickly.
Random Forests
Step37: One Decision Tree is good, but what if we created a bunch of them and pooled their results together!
Step38: The idea of combining multiple models together is a good idea, but as we can see from above, it has some overfitting issues.
Ensemble Method
In video games, you pick party members to cover your weaknesses.
In machine learning, we want to pick classifiers that cover each other's "weaknesses".
That's the idea behind a random forest. Using a meta-heuristic known as the ensemble method, we can average the results of a group of over-fitted decision tree to create a random forest that has more generalizable results. | Python Code:
YouTubeVideo("IFACrIx5SZ0", start = 85, end = 95)
Explanation: This tutorial is based on example in Jake Vanderplas' PyCon 2015 tutorial.
What is Machine Learning
Machine Learning is a subfield of computer science that utilizes statistics and mathematical optimization to learn generalizable patterns from data. In machine learning, we develop models with adjustable parameters, so we can learn the parameters that best fit the data. In this tutorial, I will cover five commonly used algorithms in supervised learning.
It's a simple idea that has led to some impressive results.
The goal of this talk is for you to understand what these algorithms are learning from the data.
Many machine learning tutorials just show classifier's performance with no in-depth explanation of how the algorithm works.
Photo Credits @ Randy Olsen
Imagine you were taught how to build a birdhouse, but never taught how to use a hammer.
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
Explanation: But I don't want to scare people with spooky, scary math
Iris Dataset
The Iris dataset is a small dataset of 150 Iris flowers. For each data point we have the following features:
1.Sepal Length (cm)
2.Sepal Width (cm)
3.Petal Length (cm)
4.Petal Width (cm)
We want to predict whether the Iris is of type:
1.Setosa
2.Versicolor
3.Virginica
Photo Credits @ Kaggle
End of explanation
x_ind = 0
y_ind = 1
X = iris.data[:,(x_ind, y_ind)]
labels = iris.target
print X.shape
print labels.shape
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.scatter(X[:, x_ind], X[:, y_ind],
c=labels, cmap=plt.cm.get_cmap('RdYlBu', 3))
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.clim(-0.5, 2.5)
plt.xlabel(iris.feature_names[x_ind])
plt.ylabel(iris.feature_names[y_ind]);
Explanation: For this example, we will only use the Sepal Length and Sepal Width feature, so we can plot the data in a 2D grid.
End of explanation
from sklearn import neighbors
Explanation: K-Nearest Neighbors
End of explanation
sample_size = 25
sample_size_zeroInd = sample_size - 1
rand_sample = np.random.randint(0,150, (sample_size,1))
x_sample = X[rand_sample].reshape((sample_size,2))
label_sample = labels[rand_sample]
Explanation: First, we'll create a random sample of 25 data points
End of explanation
plt.scatter(x_sample[:sample_size_zeroInd, 0], x_sample[:sample_size_zeroInd, 1],
c=label_sample[:sample_size_zeroInd], cmap=plt.cm.get_cmap('RdYlBu', 3), s = 40)
plt.colorbar(ticks=[0, 1, 2], format = formatter)
plt.clim(-0.5, 2.5)
plt.xlabel(iris.feature_names[x_ind])
plt.ylabel(iris.feature_names[y_ind]);
plt.scatter(x_sample[sample_size_zeroInd, x_ind], x_sample[sample_size_zeroInd, y_ind], s = 100, c ='g', alpha = 0.5)
Explanation: Let's look at a random sample of the Iris dataset. Suppose I were to ask you to classify the green dot as either red, yellow, or blue. What would you say?
End of explanation
n_neighbors = 13
h = 0.02
clf = neighbors.KNeighborsClassifier(n_neighbors)
clf.fit(X, labels)
Explanation: Your intuition is that the green point will have a label similar to the points surrounding it. This is the idea behind the K-Nearest Neighbors algorithm.
Start with a test data point.
Calculate the distance between the test point and all data points in the training set.
Use the labels of the k closest points to assign a label to the test point.
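To make this concrete, here is a minimal from-scratch sketch of that rule (illustrative only; the tutorial itself uses scikit-learn's optimized implementation, and the function and variable names below are made up):
```
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=13):
    distances = np.linalg.norm(X_train - x_test, axis=1)   # step 2: distance to every training point
    nearest = np.argsort(distances)[:k]                    # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]  # step 3: majority label among them
```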
End of explanation
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.get_cmap('RdYlBu', 3), alpha = 0.2)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap=plt.cm.get_cmap('RdYlBu', 3), s = 40)
plt.colorbar(ticks=[0, 1, 2], format = formatter)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xlabel(iris.feature_names[x_ind])
plt.ylabel(iris.feature_names[y_ind]);
Explanation: We created a K-NN classifier that uses 13 neighbors to decide each test point. Let's see how it would label the space.
End of explanation
from sklearn.linear_model import LogisticRegression
Explanation: K-NN is easy to understand and implement, but because it requires us to test each new data point against each point in the training set, it is slow and scales poorly to larger datasets.
Logistic Regression
End of explanation
clf = LogisticRegression(C=1e5)
clf.fit(X, labels)
Explanation: Logistic Regression is the classification cousin of linear regression. Our boundaries are made up of lines.
In logistic regression, we predict the likelihood of a given label y based on our features.
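As a tiny illustration of what that means: the probability logistic regression assigns to a class is a squashed (sigmoid) linear function of the features. The weights, bias and input below are made-up numbers, purely for illustration:
```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.2, -0.7])          # one weight per feature
b = 0.3                            # bias term
x = np.array([5.1, 3.5])           # sepal length, sepal width of one flower

print(sigmoid(np.dot(w, x) + b))   # P(class = 1 | x) in a binary problem
```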
End of explanation
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.get_cmap('RdYlBu', 3), alpha = 0.2)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=labels, edgecolors='k', cmap=plt.cm.get_cmap('RdYlBu', 3), s = 40)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
Explanation: Compared to KNN, we can create more concise decision boundaries for the data points.
End of explanation
Image(url = 'http://i.imgur.com/zDBbD.png')
Explanation: Logistic regression is quick and simple, but sometimes does not perform well due to the strict partitioning of the decision boundary.
Support Vector Machines: Motivation
Let's say we have red balls and blue balls on a table, and we want to separate them.
End of explanation
Image(url ='http://i.imgur.com/aLZlG.png')
Explanation: Well, we can put a line down the middle that separates them.
End of explanation
Image(url = 'http://i.imgur.com/kxWgh.png')
Explanation: But now if we add another red ball, the boundary does not hold
End of explanation
Image(url ='http://i.imgur.com/ePy4V.png')
Explanation: Could this problem have been avoided? Is there a way to choose the line in the initial problem to prevent this?
The answer is yes! If we choose a line that has a large margin, we can confidently label new data points.
End of explanation
Image(url ='http://i.imgur.com/BWYYZ.png')
Explanation: Ta-Da!
End of explanation
Image(url = 'http://i.imgur.com/R9967.png')
Explanation: But let's say we have an arrangement of balls that no line can separate. What can we do?
End of explanation
Image(url = 'http://i.imgur.com/WuxyO.png')
Explanation: One of the big breakthroughs in machine learning is the kernel function. Using the kernel function, we can map the input data into a new feature space.
End of explanation
Image(url = 'http://i.imgur.com/gWdPX.png')
Explanation: After mapping our points to a higher dimensional space, we can map this line back down to our original space.
End of explanation
from sklearn.datasets.samples_generator import make_blobs
Explanation: Support Vector Machine: Example
End of explanation
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring');
Explanation: Let's create a new dataset
End of explanation
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
Explanation: If we want to separate the two cluster of points with lines, there are several lines we can choose from. How would we determine which line is the best line.
End of explanation
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
Explanation: Well... we can look at how big of a margin each line has. The larger the margin, the more "confident" we are that the data is better separated.
End of explanation
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
clf.fit(X, y)
def plot_svc_decision_function(clf, ax=None):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([xi, yj])
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
Explanation: Now, let's create a Support Vector Machine
End of explanation
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
Explanation: We can plot the decision boundary of the SVM along with its margin.
End of explanation
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
Explanation: Now, you may be wondering why this algorithm is called a Support Vector Machine. If you look back at the plot, you will notice there are three data points on the margin.
End of explanation
from IPython.html.widgets import interact
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = SVC(kernel='linear')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200], kernel='linear');
Explanation: These points are called the Support vectors. The support vectors are used by the algorithm to determine the margin boundary. Now, let's look at a more interactive demo.
End of explanation
from sklearn.datasets.samples_generator import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
Explanation: Let's look at a different dataset. In this dataset, it is obvious that the data points are not linearly separable.
End of explanation
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
Explanation: We can use a kernel function to map the data to a higher dimension.
This is a commonly used kernel called the Radial Basis Function kernel
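For reference, the r computed above is a radial basis feature centred at the origin, exp(-(x1^2 + x2^2)). The general RBF kernel compares two points rather than a point with the origin; a small sketch, with an arbitrary illustrative gamma:
```
import numpy as np

# General form of the RBF kernel between two points x and z:
#     k(x, z) = exp(-gamma * ||x - z||^2)
def rbf_kernel(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(z)) ** 2))

print(rbf_kernel([0.0, 0.0], [1.0, 1.0]))   # similarity decays with distance
```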
End of explanation
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azip=(-180, 180));
Explanation: Let's see what the data looks like in this higher dimensional space
End of explanation
clf = SVC(kernel='rbf')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
Explanation: Now, we can map this boundary back to our original space
End of explanation
from sklearn.tree import DecisionTreeClassifier
Explanation: SVMs are mathematically beautiful and guaranteed to find a wide-margin decision boundary, but they do not perform well on large datasets due to scaling issues.
Decision Trees
End of explanation
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
Explanation: Decision Trees are commonly used by business analysts because they are easy to understand.
Photo Credits: Databricks
Let's create a more complex dataset
End of explanation
def visualize_tree(estimator, X, y, boundaries=True,
xlim=None, ylim=None):
estimator.fit(X, y)
if xlim is None:
xlim = (X[:, 0].min() - 0.1, X[:, 0].max() + 0.1)
if ylim is None:
ylim = (X[:, 1].min() - 0.1, X[:, 1].max() + 0.1)
x_min, x_max = xlim
y_min, y_max = ylim
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
Z = estimator.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, alpha=0.2, cmap='rainbow')
plt.clim(y.min(), y.max())
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow')
plt.axis('off')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.clim(y.min(), y.max())
# Plot the decision boundaries
def plot_boundaries(i, xlim, ylim):
if i < 0:
return
tree = estimator.tree_
if tree.feature[i] == 0:
plt.plot([tree.threshold[i], tree.threshold[i]], ylim, '-k')
plot_boundaries(tree.children_left[i],
[xlim[0], tree.threshold[i]], ylim)
plot_boundaries(tree.children_right[i],
[tree.threshold[i], xlim[1]], ylim)
elif tree.feature[i] == 1:
plt.plot(xlim, [tree.threshold[i], tree.threshold[i]], '-k')
plot_boundaries(tree.children_left[i], xlim,
[ylim[0], tree.threshold[i]])
plot_boundaries(tree.children_right[i], xlim,
[tree.threshold[i], ylim[1]])
if boundaries:
plot_boundaries(0, plt.xlim(), plt.ylim())
Explanation: Like the previous algorithms, the decision tree can partition up the space of points.
End of explanation
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
Explanation: Notice how the boundaries of the decision tree are more form-fitting compared to the decision boundaries of our previous algorithms. One issue with decision trees is that they overfit to the data.
End of explanation
from sklearn.ensemble import RandomForestClassifier
Explanation: The Decision Tree is easy to interpret, but if there are too many features, the tree may become too large very quickly.
Random Forests
End of explanation
def fit_randomized_tree(random_state=0):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
clf = DecisionTreeClassifier(max_depth=15)
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
xlim=(X[:, 0].min(), X[:, 0].max()),
ylim=(X[:, 1].min(), X[:, 1].max()))
from IPython.html.widgets import interact
interact(fit_randomized_tree, random_state=[0, 100]);
Explanation: One Decision Tree is good, but what if we created a bunch of them and pooled their results together!
End of explanation
clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
Explanation: Combining multiple models is a good idea, but as we can see from above, this naive approach still has some overfitting issues.
Ensemble Method
In video games, you pick party members to cover your weaknesses.
In machine learning, we want to pick classifiers that cover each other's "weaknesses".
That's the idea behind a random forest. Using a meta-heuristic known as the ensemble method, we can average the results of a group of over-fitted decision trees to create a random forest that has more generalizable results.
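To make the ensemble idea concrete, here is a rough sketch of "bagging by hand": train several trees on bootstrap samples of the data and take a majority vote. RandomForestClassifier does this internally and adds random feature selection on top. The helper below is illustrative, not part of the tutorial:
```
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagged_predict(X_train, y_train, X_test, n_trees=10, random_state=0):
    rng = np.random.RandomState(random_state)
    votes = []
    for _ in range(n_trees):
        idx = rng.randint(0, len(X_train), len(X_train))   # bootstrap sample (with replacement)
        tree = DecisionTreeClassifier(max_depth=15)
        tree.fit(X_train[idx], y_train[idx])
        votes.append(tree.predict(X_test))
    votes = np.array(votes)                                # shape: (n_trees, n_test_points)
    # majority vote for each test point
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```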
End of explanation |
12,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experience
Based on annoted ground truth, we tried to learn a model to classify domains specific words.
We use as input a combinaison of 4 datasets
Step1: Considering the nature of the date (really close points for semanticly close concept - ie puppy, dog), Knn is not relevant.
As you can see we, there is a lot trained model (198), therefore,<br>
we need to find a method to select the best combinaison - ie
Step2: We observe several things here
Step3: Study errors
Here is the detail of classification error for combined animal, plant and vehicle
Step4: Be aware there is no cross validation here, so we are overfitting
Yet, we see collisions seems to be due to several meaning for one concept | Python Code:
summaryDf = pd.DataFrame([extractSummaryLine(l) for l in open('../../data/learnedModel/domain/summary.txt').readlines()],
columns=['domain', 'strict', 'clf', 'feature', 'post', 'precision', 'recall', 'f1'])
summaryDf = summaryDf[summaryDf['clf'] != 'KNeighborsClassifier'].sort_values('f1', ascending=False)
print len(summaryDf)
summaryDf[:5]
Explanation: Experience
Based on annoted ground truth, we tried to learn a model to classify domains specific words.
We use as input a combinaison of 4 datasets:
* animal
* vehicle
* plant
* other - a random sample from the whole vocabulary
To do so we will explore the carthesian product of:
* domains: a combinaison of N previously presented domains
* strict: try to compose missing concept
* randomForest / knn: knn allow us to check if there is anything consistent to learn, randomForest is a basic model as a first approach to learn the function
* feature: one of the feature presented in the guided tour
* postFeature: any extra processing to apply to the feature extraction (like normalise)
We use a 10 K-Fold cross validation.
Once you downloaded the files, you can use this script reproduce the experience at home:
python experiment/trainAll_domainClf.py > ../data/learnedModel/domain/log.txt
Results
Here is the summary of the results we gathered,
You can find details reports in logs.
End of explanation
summaryDf['f1'] = summaryDf['f1'].astype(float)
summaryDf[['feature', 'post', 'f1']].groupby(['feature', 'post']).describe().unstack(level=-1)
Explanation: Considering the nature of the date (really close points for semanticly close concept - ie puppy, dog), Knn is not relevant.
As you can see we, there is a lot trained model (198), therefore,<br>
we need to find a method to select the best combinaison - ie: robust to the number and variety of domains
To do so, we'll select the best average model depending of the dataset combinaison
End of explanation
summaryDf[summaryDf['domain'] == 'animal-plant-vehicle-other'][:1]
summaryDf[(summaryDf['feature'] == 'angular') & (summaryDf['post'] == 'noPost') & (summaryDf['strict'] == '')]
Explanation: We observe several things here:
* The f1-score decrease as we add variety of domains (from ~95% for 2 to ~80% for 4)
* In average, the results are satisfying for the basic model.
* The feature selected angular, polar, carthesian have a litte impact on the average score.
* Adding possibility to compose concept (strict) improve very slightly the score
If we had to select one model, we could choose angular feature with no post processing. which is the best in the edge case (4 domains)
End of explanation
!python ../../toolbox/script/detailConceptClfError.py ../../data/voc/npy/wikiEn-skipgram.npy ../../data/learnedModel/domain/animal-plant-vehicle__RandomForestClassifier_angular_noPost.dill ../../data/domain/luu_animal.txt animal ../../data/domain/luu_plant.txt plant ../../data/domain/luu_vehicle.txt vehicle
Explanation: Study errors
Here is the detail of classification error for combined animal, plant and vehicle:
End of explanation
!python ../../toolbox/script/detailConceptClfError.py ../../data/voc/npy/wikiEn-skipgram.npy ../../data/learnedModel/domain/animal-plant-vehicle-other__RandomForestClassifier_angular_noPost.dill ../../data/domain/luu_animal.txt animal ../../data/domain/luu_plant.txt plant ../../data/domain/luu_vehicle.txt vehicle ../../data/domain/all_1400.txt other
Explanation: Be aware there is no cross validation here, so we are overfitting
Yet, we see collisions seems to be due to several meaning for one concept:<br>
rocket for example, is both a plant and a vehicle, which make it an unsolvable case for this model.
Let's compare with the same domains but adding other:
End of explanation |
12,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inference plots - Histogram plots
This example builds on adaptive covariance MCMC, and shows you how to plot the MCMC chain histograms, also known as the marginal posterior distributions.
Other inference plots
Step1: Histograms
The plots below show the histograms of the chains generated by three independent runs of the adaptive MCMC routine (all from the same starting point). All three chains require an initial period before they converge to the same parameter values. The initial period is usually discarded as 'burn-in'.
Step2: Simple histograms
Step3: Histograms with KDE | Python Code:
import pints
import pints.toy as toy
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500] # growth rate, carrying capacity
times = np.linspace(0, 1000, 100)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 50
values = org_values + np.random.normal(0, noise, org_values.shape)
real_parameters = np.array(real_parameters + [noise])
# Get properties of the noise sample
noise_sample_mean = np.mean(values - org_values)
noise_sample_std = np.std(values - org_values)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, noise*0.1],
[0.02, 600, noise*100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Perform sampling using MCMC, with three chains
xs = [
real_parameters * 1.1,
real_parameters * 1.15,
real_parameters * 0.9,
]
mcmc = pints.MCMCController(log_posterior, 3, xs)
mcmc.set_max_iterations(6000)
mcmc.set_log_to_screen(False)
Explanation: Inference plots - Histogram plots
This example builds on adaptive covariance MCMC, and shows you how to plot the MCMC chain histograms, also known as the marginal posterior distributions.
Other inference plots:
* Predicted time series
* Autocorrelation
* Pairwise scatterplots
Setting up an MCMC routine
See the adaptive covariance MCMC example for details.
End of explanation
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard 'burn-in'
chains = chains[:, 3000:, :]
Explanation: Histograms
The plots below show the histograms of the chains generated by three independent runs of the adaptive MCMC routine (all from the same starting point). All three chains require an initial period before they converge to the same parameter values. The initial period is usually discarded as 'burn-in'.
End of explanation
import pints.plot
pints.plot.histogram(chains, parameter_names=['growth rate', 'carrying capacity', 'noise'])
plt.show()
Explanation: Simple histograms
End of explanation
pints.plot.histogram([chains[0]], kde=True, parameter_names=['growth rate', 'carrying capacity', 'noise'])
plt.show()
Explanation: Histograms with KDE
End of explanation |
12,132 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How to plot a scatter plot using Python
| Python Code::
import matplotlib.pyplot as plt
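# x and y are assumed to be equal-length sequences of point coordinates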
plt.scatter(x, y)
plt.show()
|
12,133 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Example of generating samples from the two circles problem
| Python Code::
from sklearn.datasets import make_circles
from matplotlib import pyplot
from numpy import where
X, y = make_circles(n_samples=1000, noise=0.1, random_state=1)
for i in range(2):
samples_ix = where(y == i)
pyplot.scatter(X[samples_ix, 0], X[samples_ix, 1])
pyplot.show()
|
12,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()
for idx,row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {v:k for k,v in enumerate(vocab)}
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
    word_vector = np.zeros(len(vocab))
    for w in text.split(' '):
        # dictionary lookup instead of scanning the vocab list for every word
        idx = word2idx.get(w, None)
        if idx is not None:
            word_vector[idx] += 1
    return word_vector
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
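A quick, optional sanity check on the split, using only the arrays defined above:
# trainX/testX hold the word-count vectors, trainY/testY the one-hot labels
print(trainX.shape, trainY.shape)
print(testX.shape, testY.shape)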
# Network building
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()
    # One possible architecture; the exercise leaves the layer sizes up to you
    net = tflearn.input_data([None, 10000])                      # one input unit per vocabulary word
    net = tflearn.fully_connected(net, 200, activation='ReLU')   # hidden layer
    net = tflearn.fully_connected(net, 25, activation='ReLU')    # hidden layer
    net = tflearn.fully_connected(net, 2, activation='softmax')  # output: positive/negative
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
    model = tflearn.DNN(net)
    return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
12,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If you are running this notebook on google collab, uncomment and execute the cell below. Otherwise you can jump down to the other import statements.
Step1: Multi-Dimensional Integration with MCMC
By Megan Bedell (Flatiron Institute)
10 September 2019
Problem 1
Step2: Problem 1a
Plot the data. Let's take a look at what we're working with!
Step3: Problem 1b
Write the sinusoid function that we want to fit and get ready to run MCMC with helper functions.
First let's write a "get_model_predictions" function - this will resemble yesterday's same-named function, but instead of returning a line it should return a sinusoid. I suggest using the following free parameters, although there are a few alternative options that you may use instead
Step4: Write a lnprior function with flat priors on all parameters - again, this will be similar to yesterday's function, but with different values.
Hint
Step5: The following functions can be reused as-is from the previous day's Metropolis-Hastings exercise, so just copy-and-paste or import them
Step6: Problem 1c
Run the MCMC.
Let's start with initialization values.
To save some time, I will assert that if we made a Lomb-Scargle periodogram of the RVs, there would be a peak near period = 3.53 days, so start with that guess and let's figure out what the best values might be for the other parameters.
(If you finish early and are up for a bonus problem, you can double-check my assertion using astropy timeseries!)
Step7: Now run the MCMC for 5000 steps. I'll give you (the diagonal of a) covariance matrix to start with. As you saw yesterday afternoon, this cov parameter sets the step sizes that the M-H algorithm will use when it proposes new values.
Step8: Do a pairs plot for the first two parameters. Does the behavior of this chain seem efficient?
Step9: This chain looks super inefficient for a couple of reasons
Step10: Plot the data points and your best-fit model. Does the fit look reasonable? (You may need to zoom into a small time range to tell.)
Step11: Another way to see if we're on the right track is to plot the data phased to the orbital period that we found. Do that and optionally overplot the phased model as well.
Step12: Now re-run the MCMC using these parameters as the initial values and make another pairs plot. Again, I'm going to give you some step size parameters to start with. Because we're now initializing the chain close to the likelihood maximum, we don't want it to move too far away, so I've lowered the values of cov.
Step13: The chain is now staying relatively stationary, which is good! However, it's still spending a long time at each point.
Problem 1e
Now let's tackle another issues
Step14: Writing an autocorrelation function for this purpose actually gets a bit tricky, so we'll use the built-in functionality of emcee.
For the documentation on these functions, check the emcee user guide.
For a more in-depth look at how this is calculated and why it's tricky, check out this tutorial.
Step15: This is worrying - it means we have achieved very few actual independent draws from the posterior in our chain.
Problem 1f
Change the step size of the MCMC. What does this do to the auto-correlation length? Does this seem better or worse, and why?
Step16: Much better!!
Problem 1g
Using the step sizes and starting conditions that you deem best, run your MCMC for at least 500x the auto-correlation length to get a large number of independent samples. Plot the posterior distribution of radial velocity semi-amplitude K. This parameter is arguably the most important output of an RV fit, because it is a measurement of the mass of the planet.
Step17: From these results, what can we say about the true value of K? What is the probability that K > 84 m/s? 85 m/s? 90 m/s? Are these numbers a reliable estimator of the true probability, in your opinion?
Step18: Note
Step19: Problem 2a
Again, let's start by plotting the data. Make plots of the time series and the time series phased to a period of 111.4 days.
Step20: This planet's orbit should look pretty different from a sine wave!
Problem 2b
Remake the get_model_predictions and lnprior functions to fit a Keplerian.
Since this is a bit in the weeds of astronomy for the purposes of this workshop, I've gone ahead and written a solver for Kepler's equation and a get_model_predictions function that will deliver RVs for you. Read over the docstring and use the information given there to write a lnprior function for theta.
Step21: Problem 2c
Play around with the starting parameters until you're convinced that you have a reasonable fit.
Step22: Problem 2d
Run the MCMC for 1000 steps and plot a trace of the eccentricity parameter. How efficiently is it running?
Optional challenge
Step23: Problem 2e
Make a corner plot of the results. Which parameters seem most correlated? Which are most and least well-constrained by the data?
Step24: It's hard to tell since we have very few independent samples, but $e$ and $\omega$ are definitely both highly correlated with many parameters and with each other!
Problem 2f
Ford (2005) suggests mitigating this issue by reparameterizing the orbital parameters $e$ and $\omega$ as $e cos\omega$ and $e sin\omega$. Modify the get_model_predictions and lnprior functions accordingly and rerun the MCMC. Does performance improve?
Note | Python Code:
#!pip install emcee==3.0rc2
#!pip install corner
import numpy as np
import pandas as pd
from scipy.optimize import minimize, newton
import emcee
import corner
import matplotlib.pyplot as plt
np.random.seed(42)
Explanation: If you are running this notebook on google collab, uncomment and execute the cell below. Otherwise you can jump down to the other import statements.
End of explanation
datafile = 'https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0108/0108859/data/UID_0108859_RVC_001.tbl'
data = pd.read_fwf(datafile, header=0, names=['t', 'rv', 'rv_err'], skiprows=22)
data['t'] -= data['t'][0]
Explanation: Multi-Dimensional Integration with MCMC
By Megan Bedell (Flatiron Institute)
10 September 2019
Problem 1: Fitting a Sinusoid to Data
In this example, we will download a time series of radial velocities for the star HD209458. This star hosts a Hot Jupiter exoplanet. In fact, this planet was the first to be seen in transit and was discovered 20 years ago yesterday!
Because the eccentricity is low for this planet, we can fit its orbit in the radial velocities with a relatively simple model: a sinusoid.
Below is a snippet of code that will download the time-series data from NASA Exoplanet Archive:
End of explanation
fig, ax = plt.subplots()
ax.errorbar(data['t'], data['rv'], data['rv_err'], fmt='o', ms=4)
ax.set_xlabel('Time (days)')
ax.set_ylabel(r'RV (m s$^{-1}$)');
Explanation: Problem 1a
Plot the data. Let's take a look at what we're working with!
End of explanation
def get_model_predictions(theta, t):
'''
Calculate RV predictions for parameters theta and timestamps t.
'''
period, amplitude, t0, rv0 = theta
model_preds = amplitude * np.sin(2. * np.pi / period * (t - t0)) + rv0
return model_preds
Explanation: Problem 1b
Write the sinusoid function that we want to fit and get ready to run MCMC with helper functions.
First let's write a "get_model_predictions" function - this will resemble yesterday's same-named function, but instead of returning a line it should return a sinusoid. I suggest using the following free parameters, although there are a few alternative options that you may use instead:
theta = [period, # period of the sinusoid
amplitude, # semi-amplitude of the sinusoid
t0, # reference x at which sine phase = 0
rv0] # constant offset in y
End of explanation
def lnprior(theta):
period, amplitude, t0, rv0 = theta
if 0 < period <= 1e4 and 0 <= amplitude <= 1e3: # physical priors
lnp = np.log(1e-4) + np.log(1e-3)
else:
return -np.inf
if np.abs(t0) <= 1e3 and np.abs(rv0) <= 1e3: # generous flat priors
lnp += 2 * np.log(1/2e3)
else:
return -np.inf
return lnp
Explanation: Write a lnprior function with flat priors on all parameters - again, this will be similar to yesterday's function, but with different values.
Hint: some of the bounds on these parameters will be physically motivated (e.g. orbital period cannot be negative). For others, you'll need to guess something reasonable but generous - e.g., a Hot Jupiter planet probably does not have an orbital period above a year or so.
End of explanation
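A quick way to convince yourself the prior behaves as intended, using the function defined above:
# Finite log-prior inside the bounds, -inf outside (e.g. a negative period)
print(lnprior([3.5, 80., 0., 0.]))
print(lnprior([-1., 80., 0., 0.]))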
def lnlikelihood(theta, y, x, y_unc):
model_preds = get_model_predictions(theta, x)
lnl = -np.sum((y-model_preds)**2/(2*y_unc**2))
return lnl
def lnposterior(theta, y, x, y_unc):
lnp = lnprior(theta)
if not np.isfinite(lnp):
return -np.inf
lnl = lnlikelihood(theta, y, x, y_unc)
lnpost = lnl + lnp
return lnpost
def hastings_ratio(theta_1, theta_0, y, x, y_unc):
lnpost1 = lnposterior(theta_1, y, x, y_unc)
lnpost0 = lnposterior(theta_0, y, x, y_unc)
h_ratio = np.exp(lnpost1 - lnpost0)
return h_ratio
def propose_jump(theta, cov):
if np.shape(theta) == np.shape(cov):
cov = np.diag(np.array(cov)**2)
proposed_position = np.random.multivariate_normal(theta, cov)
return proposed_position
def mh_mcmc(theta_0, cov, nsteps, y, x, y_unc):
positions = np.zeros((nsteps+1, len(theta_0)))
lnpost_at_pos = -np.inf*np.ones(nsteps+1)
acceptance_ratio = np.zeros_like(lnpost_at_pos)
accepted = 0
positions[0] = theta_0
lnpost_at_pos[0] = lnposterior(theta_0, y, x, y_unc)
for step_num in np.arange(1, nsteps+1):
proposal = propose_jump(positions[step_num-1], cov)
H = hastings_ratio(proposal, positions[step_num-1], y, x, y_unc)
R = np.random.uniform()
if H > R:
accepted += 1
positions[step_num] = proposal
lnpost_at_pos[step_num] = lnposterior(proposal, y, x, y_unc)
acceptance_ratio[step_num] = float(accepted)/step_num
else:
positions[step_num] = positions[step_num-1]
lnpost_at_pos[step_num] = lnpost_at_pos[step_num-1]
acceptance_ratio[step_num] = float(accepted)/step_num
return (positions, lnpost_at_pos, acceptance_ratio)
Explanation: The following functions can be reused as-is from the previous day's Metropolis-Hastings exercise, so just copy-and-paste or import them:
lnlikelihood, lnposterior, hastings_ratio, propose_jump, mh_mcmc
End of explanation
theta_0 = [3.53, 80, 0, 0] # [period, amplitude, t0, rv0] starting guesses
Explanation: Problem 1c
Run the MCMC.
Let's start with initialization values.
To save some time, I will assert that if we made a Lomb-Scargle periodogram of the RVs, there would be a peak near period = 3.53 days, so start with that guess and let's figure out what the best values might be for the other parameters.
(If you finish early and are up for a bonus problem, you can double-check my assertion using astropy timeseries!)
End of explanation
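If you do want to try the bonus check, a minimal sketch with astropy's Lomb-Scargle implementation might look like this (it assumes astropy is installed and is not needed for the rest of the problem):
# Locate the highest periodogram peak and convert it to a period in days
from astropy.timeseries import LombScargle
frequency, power = LombScargle(data['t'], data['rv'], data['rv_err']).autopower()
print('Highest peak at period = {0:.3f} days'.format(1. / frequency[np.argmax(power)]))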
cov = [0.01, 1, 0.05, 0.01]
pos, lnpost, acc = mh_mcmc(theta_0, cov, 5000,
data['rv'], data['t'], data['rv_err'])
Explanation: Now run the MCMC for 5000 steps. I'll give you (the diagonal of a) covariance matrix to start with. As you saw yesterday afternoon, this cov parameter sets the step sizes that the M-H algorithm will use when it proposes new values.
End of explanation
fig, ax = plt.subplots()
ax.plot(pos[:,0], pos[:,1], 'o-', alpha=0.3)
ax.plot(theta_0[0], theta_0[1], '*', ms=30,
mfc='Crimson', mec='0.8', mew=2,
alpha=0.7)
ax.set_xlabel('Period', fontsize=14)
ax.set_ylabel(r'K (m s$^{-1}$)', fontsize=14)
fig.tight_layout()
Explanation: Do a pairs plot for the first two parameters. Does the behavior of this chain seem efficient?
End of explanation
def nll(*par):
'''
The negative ln(likelihood).
'''
return -1. * lnlikelihood(*par)
res = minimize(nll, theta_0,
args=(data['rv'], data['t'], data['rv_err']),
method='Powell')
print('Optimizer finished with message "{0}" and \n\
best-fit parameters {1}'.format(res['message'], res['x']))
Explanation: This chain looks super inefficient for a couple of reasons: one, it's wandering away from the starting point, which implies a poor initialization and would require us to drop samples from the beginning (burn-in); and two, the acceptance fraction is low and it spends a long time at each point.
Problem 1d
There were a couple of issues with the previous MCMC run. Let's start with this one: we started the chains running at a place that was not very close to the best-fit solution.
Find a better set of initialization values by optimizing before we run the MCMC.
We'll use scipy.optimize.minimize to get best-fit parameters. Remember that the lnlikelihood function needs to be maximized not minimized, so we'll need a new function that works the same way, but negative.
End of explanation
plt.errorbar(data['t'], data['rv'], data['rv_err'],
fmt='o', ms=4)
xs = np.linspace(-0.1, 6, 1000)
plt.plot(xs, get_model_predictions(res['x'], xs), c='DarkOrange')
plt.xlim([-0.1,6])
plt.xlabel('Time (days)')
plt.ylabel(r'RV (m s$^{-1}$)');
Explanation: Plot the data points and your best-fit model. Does the fit look reasonable? (You may need to zoom into a small time range to tell.)
End of explanation
period, amplitude, t0, rv0 = res['x']
fig, ax = plt.subplots()
phased_t = (data['t'] - t0) % period
ax.errorbar(phased_t / period, data['rv'], data['rv_err'],
fmt='o', ms=4)
phase_xs = np.linspace(0, period, 100)
ax.plot(phase_xs / period, get_model_predictions(res['x'], phase_xs + t0),
c='DarkOrange')
ax.set_xlabel('Phase')
ax.set_ylabel(r'RV (m s$^{-1}$)');
Explanation: Another way to see if we're on the right track is to plot the data phased to the orbital period that we found. Do that and optionally overplot the phased model as well.
End of explanation
theta_bestfit = res['x']
cov = [0.001, 0.1, 0.01, 0.1]
pos, lnpost, acc = mh_mcmc(theta_bestfit, cov, 5000,
data['rv'], data['t'], data['rv_err'])
fig, ax = plt.subplots()
ax.plot(pos[:,0], pos[:,1], 'o-', alpha=0.3)
ax.plot(theta_bestfit[0], theta_bestfit[1], '*', ms=30,
mfc='Crimson', mec='0.8', mew=2,
alpha=0.7)
ax.set_xlabel('Period', fontsize=14)
ax.set_ylabel(r'K (m s$^{-1}$)', fontsize=14)
fig.tight_layout()
Explanation: Now re-run the MCMC using these parameters as the initial values and make another pairs plot. Again, I'm going to give you some step size parameters to start with. Because we're now initializing the chain close to the likelihood maximum, we don't want it to move too far away, so I've lowered the values of cov.
End of explanation
plt.plot(pos[:,0]);
Explanation: The chain is now staying relatively stationary, which is good! However, it's still spending a long time at each point.
Problem 1e
Now let's tackle another issues: chain efficiency. Calculate the auto-correlation length of your chain.
First, let's just plot the sequence of orbital period values in the chain in a trace plot. From eyeballing this sequence, about how many steps do you think are needed to reach a sample that is independent from the previous one(s)?
End of explanation
acf = emcee.autocorr.function_1d(pos[:,0])
plt.plot(acf)
plt.xlabel('Lag')
plt.ylabel('Normalized ACF');
act = emcee.autocorr.integrated_time(pos[:,0], quiet=True)
print('The integrated autocorrelation time is estimated as: {0}'.format(act))
Explanation: Writing an autocorrelation function for this purpose actually gets a bit tricky, so we'll use the built-in functionality of emcee.
For the documentation on these functions, check the emcee user guide.
For a more in-depth look at how this is calculated and why it's tricky, check out this tutorial.
End of explanation
cov = [0.0001, 0.1, 0.01, 0.1]
pos, lnpost, acc = mh_mcmc(theta_bestfit, cov, 5000,
data['rv'], data['t'], data['rv_err'])
plt.plot(pos[:,0]);
acf = emcee.autocorr.function_1d(pos[:,0])
plt.plot(acf)
plt.xlabel('Lag')
plt.ylabel('Normalized ACF');
act = emcee.autocorr.integrated_time(pos[:,0])
print('The integrated autocorrelation time is estimated as: {0}'.format(act))
Explanation: This is worrying - it means we have achieved very few actual independent draws from the posterior in our chain.
Problem 1f
Change the step size of the MCMC. What does this do to the auto-correlation length? Does this seem better or worse, and why?
End of explanation
pos, lnpost, acc = mh_mcmc(theta_bestfit, cov, 20000,
data['rv'], data['t'], data['rv_err'])
plt.hist(pos[:,1])
plt.xlabel(r'K (m s$^{-1}$)');
Explanation: Much better!!
Problem 1g
Using the step sizes and starting conditions that you deem best, run your MCMC for at least 500x the auto-correlation length to get a large number of independent samples. Plot the posterior distribution of radial velocity semi-amplitude K. This parameter is arguably the most important output of an RV fit, because it is a measurement of the mass of the planet.
End of explanation
N_tot = len(pos[:,1])
print('The probability that K > 84 m/s is: {0:.2f}'.format(np.sum(pos[:,1] > 84.)/N_tot))
print('The probability that K > 85 m/s is: {0:.2f}'.format(np.sum(pos[:,1] > 85.)/N_tot))
print('The probability that K > 90 m/s is: {0:.2f}'.format(np.sum(pos[:,1] > 90.)/N_tot))
Explanation: From these results, what can we say about the true value of K? What is the probability that K > 84 m/s? 85 m/s? 90 m/s? Are these numbers a reliable estimator of the true probability, in your opinion?
End of explanation
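A compact way to report K is with posterior percentiles, computed directly from the samples of the run above:
# Median and 16th/84th percentiles of the K samples
k_lo, k_med, k_hi = np.percentile(pos[:, 1], [16, 50, 84])
print('K = {0:.1f} +{1:.1f} / -{2:.1f} m/s'.format(k_med, k_hi - k_med, k_med - k_lo))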
datafile = 'https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0045/0045982/data/UID_0045982_RVC_006.tbl'
data = pd.read_fwf(datafile, header=0, names=['t', 'rv', 'rv_err'], skiprows=21)
data['t'] -= data['t'][0]
Explanation: Note: we have not actually sampled parameter space around K > 90 m/s, so take this estimate with a grain of salt -- we can certainly conclude that the probability of K > 90 is low, but we'd need to actually calculate posterior values around K = 90 before we'd have a reliable estimate of the PDF there.
Challenge Problem 1h
Try some different values of cov[0] (the step size for the orbital period). Make a plot of the acceptance fraction as a function of step size. Does this make sense?
Challenge Problem 1i
For different values of cov[0], plot the correlation length. Does this make sense?
Problem 2: Fitting a Keplerian to Data
In the previous example, the orbit we were fitting had negligible eccentricity, so we were able to fit it with a sinusoid. In this example, we'll look at the high-eccentricity planet HD 80606b and fit a full Keplerian model to its RV data. This requires introducing some new free parameters to the model, which as we will see are not always straightforward to sample!
End of explanation
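For Challenge Problem 1h, one possible approach is sketched below. Because data now holds the HD 80606 time series, the sketch re-loads the Problem 1 (HD 209458) data into a separate variable first; the grid of step sizes is an illustrative choice, not a prescribed one.
# Re-load the Problem 1 RVs so the challenge can still be run at this point
data1 = pd.read_fwf('https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0108/0108859/data/UID_0108859_RVC_001.tbl',
                    header=0, names=['t', 'rv', 'rv_err'], skiprows=22)
data1['t'] -= data1['t'][0]
step_sizes = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]   # illustrative values of cov[0]
acc_fracs = []
for s in step_sizes:
    _, _, acc_s = mh_mcmc(theta_bestfit, [s, 0.1, 0.01, 0.1], 2000,
                          data1['rv'], data1['t'], data1['rv_err'])
    acc_fracs.append(acc_s[-1])
plt.semilogx(step_sizes, acc_fracs, 'o-')
plt.xlabel('cov[0] (step size in period)')
plt.ylabel('Final acceptance fraction');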
fig, ax = plt.subplots()
ax.errorbar(data['t'], data['rv'], data['rv_err'], fmt='o', ms=4)
ax.set_xlabel('Time (days)')
ax.set_ylabel(r'RV (m s$^{-1}$)');
plt.errorbar(data['t'] % 111.4, data['rv'], data['rv_err'], fmt='o', ms=4)
plt.xlabel('Phase (days)')
plt.ylabel(r'RV (m s$^{-1}$)');
Explanation: Problem 2a
Again, let's start by plotting the data. Make plots of the time series and the time series phased to a period of 111.4 days.
End of explanation
def calc_ea(ma, ecc):
# Kepler solver - calculates eccentric anomaly
tolerance = 1e-3
ea = np.copy(ma)
while True:
diff = ea - ecc * np.sin(ea) - ma
ea -= diff / (1. - ecc * np.cos(ea))
        if np.all(np.abs(diff) <= tolerance):
break
return ea
def get_model_predictions(theta, t):
'''
Calculate Keplerian orbital RVs
Input
-----
theta : list
A list of values for the following parameters:
Orbital period,
RV semi-amplitude,
eccentricity (between 0-1),
omega (argument of periastron; an angle in radians
denoting the orbital phase where the planet
passes closest to the host star)
Tp (time of periastron; reference timestamp for the above)
RV0 (constant RV offset)
t : list or array
Timestamps at which to calculate the RV
Returns
-------
rvs : list or array
Predicted RVs at the input times.
'''
P, K, ecc, omega, tp, rv0 = theta
ma = 2. * np.pi / P * (t - tp) # mean anomaly
ea = calc_ea(ma, ecc) # eccentric anomaly
f = 2.0 * np.arctan2(np.sqrt(1+ecc)*np.sin(ea/2.0),
np.sqrt(1-ecc)*np.cos(ea/2.0)) # true anomaly
rvs = - K * (np.cos(omega + f) + ecc*np.cos(omega))
return rvs + rv0
def lnprior(theta):
period, amplitude, ecc, omega, tp, rv0 = theta
if 0 < period <= 1e5 and 0 <= amplitude <= 1e4: # physical priors
lnp = np.log(1e-5) + np.log(1e-4)
else:
return -np.inf
if np.abs(tp) <= 1e4 and np.abs(rv0) <= 1e4: # generous flat priors
lnp += 2 * np.log(1/2e4)
else:
return -np.inf
if 0 <= ecc < 1 and 0 < omega < 2*np.pi: # more physical priors
lnp += np.log(1) + np.log(1/(2*np.pi))
else:
return -np.inf
return lnp
Explanation: This planet's orbit should look pretty different from a sine wave!
Problem 2b
Remake the get_model_predictions and lnprior functions to fit a Keplerian.
Since this is a bit in the weeds of astronomy for the purposes of this workshop, I've gone ahead and written a solver for Kepler's equation and a get_model_predictions function that will deliver RVs for you. Read over the docstring and use the information given there to write a lnprior function for theta.
End of explanation
theta_0 = [111.4, 480, 0.95, 2.0, 89, -200] # P, K, ecc, omega, tp, rv0
plt.errorbar(data['t'], data['rv'], data['rv_err'],
fmt='o', ms=4)
xs = np.linspace(900, 1050, 1000)
plt.plot(xs, get_model_predictions(theta_0, xs), c='DarkOrange')
plt.xlim([900,1050])
plt.xlabel('Time (days)')
plt.ylabel(r'RV (m s$^{-1}$)');
Explanation: Problem 2c
Play around with the starting parameters until you're convinced that you have a reasonable fit.
End of explanation
cov = [0.1, 100, 0.01, 0.1, 0.1, 100]
pos, lnpost, acc = mh_mcmc(theta_0, cov, 10000,
data['rv'], data['t'], data['rv_err'])
plt.plot(pos[:,2])
plt.ylabel('Eccentricity')
plt.xlabel('Step');
Explanation: Problem 2d
Run the MCMC for 1000 steps and plot a trace of the eccentricity parameter. How efficiently is it running?
Optional challenge: if you wrote a Gibbs sampler yesterday, use that instead of Metropolis-Hastings here!
End of explanation
corner.corner(pos, labels=['Period (days)', r"K (m s$^{-1}$)",
r"$e$", r"$\omega$",
r"T$_p$", r"RV$_0$ (m s$^{-1}$)"]);
Explanation: Problem 2e
Make a corner plot of the results. Which parameters seem most correlated? Which are most and least well-constrained by the data?
End of explanation
def get_model_predictions(theta, t):
P, K, esinw, ecosw, tp, rv0 = theta
omega = np.arctan2(esinw, ecosw)
    ecc = np.sqrt(esinw**2 + ecosw**2)  # equivalent to esinw/sin(omega), but avoids dividing by zero when omega is 0
ma = 2. * np.pi / P * (t - tp) # mean anomaly
ea = calc_ea(ma, ecc) # eccentric anomaly
f = 2.0 * np.arctan2(np.sqrt(1+ecc)*np.sin(ea/2.0),
np.sqrt(1-ecc)*np.cos(ea/2.0)) # true anomaly
rvs = - K * (np.cos(omega + f) + ecc*np.cos(omega))
return rvs + rv0
def lnprior(theta):
period, amplitude, esinw, ecosw, tp, rv0 = theta
if 0 < period <= 1e5 and 0 <= amplitude <= 1e4: # physical priors
lnp = np.log(1e-5) + np.log(1e-4)
else:
return -np.inf
if np.abs(tp) <= 1e4 and np.abs(rv0) <= 1e4: # generous flat priors
lnp += 2 * np.log(1/2e4)
else:
return -np.inf
if -1 <= esinw < 1 and -1 < ecosw < 1: # more physical priors
lnp += 2 * np.log(1/2)
else:
return -np.inf
return lnp
theta_0 = [111.4, 480, 0.95 * np.cos(2), 0.95 * np.sin(2), 89, -200]
cov = [0.1, 100, 0.1, 0.1, 0.1, 100]
pos, lnpost, acc = mh_mcmc(theta_0, cov, 5000,
data['rv'], data['t'], data['rv_err'])
plt.plot(pos[:,2])
plt.ylabel('ecosw')
plt.xlabel('Step');
corner.corner(pos, labels=['Period (days)', r"K (m s$^{-1}$)",
r"$e\sin(\omega)$", r"$e\cos(\omega)$",
r"T$_p$", r"RV$_0$ (m s$^{-1}$)"]);
Explanation: It's hard to tell since we have very few independent samples, but $e$ and $\omega$ are definitely both highly correlated with many parameters and with each other!
Problem 2f
Ford (2005) suggests mitigating this issue by reparameterizing the orbital parameters $e$ and $\omega$ as $e cos\omega$ and $e sin\omega$. Modify the get_model_predictions and lnprior functions accordingly and rerun the MCMC. Does performance improve?
Note: the efficiency of a basic MCMC in this situation is never going to be excellent. We'll talk more about challenging cases like this and how to deal with them in later lectures!
End of explanation |
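As a pointer toward those later lectures, one common remedy is an ensemble sampler. A rough sketch with emcee (already imported above) on the reparameterized model could look like this; the walker count and perturbation scale are arbitrary choices for illustration, not values from the original notebook.
# 32 walkers, started in a tight ball around the hand-tuned parameters
ndim, nwalkers = len(theta_0), 32
p0 = theta_0 + 1e-4 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnposterior,
                                args=(data['rv'], data['t'], data['rv_err']))
sampler.run_mcmc(p0, 5000);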
12,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents
This notebook shows how to use the functionality in the HealpixTree class. This class is useful for finding groups of Healpixels at high resolution which are connected and nearby. In particular, this notebook illustrates two useful methods
Step1: Instantiate the object
Step2: Find all the healpixels at the resolution one level higher
Step3: Find all the pixels at NSIDE=256, which are owned by pixelid =1 at NSIDE=1
Step4: Visualize this | Python Code:
from mpl_toolkits.basemap import Basemap
import opsimsummary as oss
oss.__VERSION__
from opsimsummary import HealpixTree, pixelsForAng, HealpixTiles
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import healpy as hp
Explanation: Contents
This notebook shows how to use the functionality in the HealpixTree class. This class is useful for finding groups of Healpixels at high resolution which are connected and nearby. In particular, this notebook illustrates two useful methods:
- How to find the descendant pixels of a sequence of pixels at the next higher resolution using pixelsAtNextLevel
- How to find the descendant pixels of a sequence of pixels at a resolution res levels above the original resolution using pixelsAtResolutionLevel
End of explanation
htree = HealpixTree(nside=1, nest=True)
Explanation: Instantiate the object
End of explanation
# By default the nside argument to this function is the nside at which htree was instantiated
print(htree.nside)
ipix = np.array([0, 1])
htree.pixelsAtNextLevel(ipix)
# We can also be specific, and do this for a particular NSIDE
htree.pixelsAtNextLevel(ipix, nside=128)
Explanation: Find all the healpixels at the resolution one level higher
End of explanation
# How many subdivisions are required to go to NSIDE = 256?
desiredNSIDE = 256
res = int(np.log2(desiredNSIDE))
# nsidenew should be the NSIDE at the resolution we want
nsidenew, pixels = htree.pixelsAtResolutionLevel(1, res, 1)
assert nsidenew == desiredNSIDE
Explanation: Find all the pixels at NSIDE=256, which are owned by pixelid =1 at NSIDE=1
End of explanation
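A quick sanity check on the result: each subdivision splits a pixel into 4, so going from NSIDE=1 to NSIDE=256 should give 4**res descendants of the chosen pixel.
# Both numbers below should be 65536
print(len(pixels), 4**res)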
n256 = hp.nside2npix(256)
n1 = hp.nside2npix(1)
arr1 = np.ones(n1) * hp.UNSEEN
arr1[1] = 1
arr256 = np.ones(n256) * hp.UNSEEN
arr256[pixels] = -2
hp.mollview(arr1, nest=True)
hp.mollview(arr256, nest=True)
Explanation: Visualize this
End of explanation |
12,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Watch Me Code 1
Step1: Manual Plotting in Matplotlib
Step2: Plotting chart types
Step3: Plotting with Pandas | Python Code:
# Jupyter Directive
%matplotlib inline
# imports
import matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = (20.0, 10.0) # larger figure size
Explanation: Watch Me Code 1: Matplotlib
We will demonstrate Python's data visualization library Matplotlib in two ways:
Standalone
With Pandas
End of explanation
# Matplotlib requires lists to plot
x = [1,2,3,4,5]
xsquared = [1,4,9,16,25]
plt.plot(x,xsquared) # default is a blue line
# this can be overridden. consult help(plt.plot) for details
#MATLAB MATPLOTLIB
plt.plot(x, xsquared, 'ro') # red dots
# we can manipulate the axis too, rather than auto scale. In this case we must call plt.show() to display the plot
plt.plot(x, xsquared, 'ro') # red dots
plt.axis([0,6,0,26]) # a list in the form [xmin, xmax, ymin, ymax]
plt.show()
# Labels are simple
plt.bar(x, xsquared) #,'r--') # red dashes
plt.axis([0,6,0,26]) # a list in the form [xmin, xmax, ymin, ymax]
plt.xlabel("Value of X", fontsize=36)
plt.ylabel("Value of X Squared", fontsize=36)
plt.title("Plot of X versus X Squared", fontsize=48)
plt.grid(True)
plt.show()
Explanation: Manual Plotting in Matplotlib
End of explanation
plt.bar(x,xsquared)
plt.pie(x)
plt.scatter(x, xsquared)
Explanation: Plotting chart types
End of explanation
scores = pd.read_csv("https://raw.githubusercontent.com/mafudge/datasets/master/exam-scores/exam-scores.csv")
scores.sample(10)
# Plotting with Pandas is a bit more expressive
scores.plot.scatter(x ='Completion_Time', y ='Student_Score' )
scores.corr()
## Labels too small, we can fall back to Matplotlib!
p = scores.plot.scatter(x='Completion_Time', y='Student_Score', fontsize=20)
p.set_xlabel('Completion Time', fontsize=20)
p.set_ylabel('Student Score', fontsize=20)
p
# Take the value counts of letter grade and create a data frame
letter_grades = pd.DataFrame( { 'Letter' : scores['Letter_Grade'].value_counts() } ).sort_index()
letter_grades.plot.bar(sort_columns=True)
letter_grades.plot.pie( y = 'Letter', fontsize = 20)
Explanation: Plotting with Pandas
End of explanation |
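Pandas plotting also combines nicely with groupby aggregations; for example, a short sketch reusing the Letter_Grade and Student_Score columns from above:
# Average exam score for each letter grade, as a bar chart
scores.groupby('Letter_Grade')['Student_Score'].mean().plot.bar()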
12,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Translation of the Discover your Poppy Ergo Jr notebook by Georges Saliba, under a CC BY SA license
Discover your Poppy Ergo Jr
This notebook, which lets you insert code to operate the robot and comment on it at the same time, will guide you through learning to program the Poppy Ergo Jr in Python.
Instantiate your robot
Access and command the motors
Read sensor values
Get started with higher-level programming
From this point on, your Poppy Ergo Jr robot is connected with the RJ45 cable and your camera is connected as well.
Step1: %pylab inline is a Python command that imports the numpy and matplotlib modules. The inline option indicates that Matplotlib figures will be embedded in the notebook itself rather than in a separate graphics window. This is the only possible option here, because your system has no graphical interface.
In particular, it executes the following two instructions
* import numpy as np
* import matplotlib.pyplot as plt
The last instruction imports the print_function submodule from the future module, which handles compatibility issues with future versions of Python.
Instantiate your robot
To start using your robot in Python, you first need to instantiate it.
That is the purpose of the following code.
Step2: This creates a Robot object that can be used to access the motors and the sensors. It handles all the low-level communication for you, so you do not need to know the details of the communication protocol to operate a motor. The motors and sensors fields of the Robot are automatically synchronised to match the state of their hardware counterparts.
QUESTIONS
In the line of code above, what is the role of poppy?
PoppyErgoJr() is followed by parentheses; what should you understand from that?
FILL IN THE BLANKS
The code below puts the robot ... into the position called ...
If the robot had been named poppy_bras and the position had been position_leve, what instruction would you have had to send to the robot to ask it to move into that position?
Step3: Accessing the motors
In a Poppy Ergo Jr, each motor is defined as illustrated below
Step4: As you can see, you get a list of all the motors of your Robot object.
You can then retrieve all the motor names
In Python, one possible syntax for the for loop is the following
Step5: EXERCISE
Step6: Reading motor values
From a motor object you can access its various registers. The main ones are
Step7: You can get the current position of all motors with the instruction below.
QUESTIONS
Why are there square brackets?
What are all the values that the iterator m will take?
What is the structure of poppy.motors?
Step8: It is important to understand that "poppy.m1.present_position" is automatically updated with the current position of the real motor (at 50Hz).
As with the other registers, the refresh rate is not the same for all of them and depends on how important each one is. For example, the temperature is refreshed at 1Hz since it does not change very quickly.
Note
When you press the Tab key after typing a ., you can see the list of registers for an object.
Try it in the next code cell and observe the result.
So you do not need to memorise the existing registers or their names.
Step9: Commanding the motors
In addition to the registers presented above, there are others used to send instructions/commands to the robot. For example, motor position is split into two separate registers.
The read-only present_position of the motor
The read-write goal_position, which sends the motor a target position that it will try to reach.
To move a motor to a new position, you can write
Step10: QUESTIONS
What effect will the instruction above have?
In what unit is 20 expressed?
How would you make the motor turn the same way in the opposite direction?
Step11: In these examples, the motor moves as fast as it can (this is the default behaviour). You can change the maximum speed of the motor, which is stored in the motor's moving_speed register
Step12: Now motor m1 cannot move faster than 50 degrees per second. Make it move again to see the difference.
Step13: The main registers are
Step14: You should now be able to move this motor by hand. This is useful, for example, for programming your robot by demonstration (see the corresponding notebook).
And to put it back into stiff mode
Step15: Controlling the motor LEDs
The XL-320 motors used in the Poppy Ergo Jr have a small colour LED. You can change its colour using pypot. This is useful to make your robot more alive, more personalised ...
Each motor has a led register to which you can assign a colour such as 'green' or 'red' from the available colours.
QUESTION
How would you turn the LED of motor m3 of the poppy robot green?
Assigning the value 'off' to this register turns the LED off.
QUESTION
Turn off the LED of motor m3 of the poppy robot.
QUESTION
Before running it, can you describe the effect of this set of instructions?
How would you light every other LED in red and every other LED in green, in that order?
In Python, the syntax of *if ... then ... else ...* is the following
Step16: You can list all the available colours with the following command
Step17: Reading sensors
Step18: Reading sensors works exactly the same way as reading registers on your robot. You can access your sensors via
Step19: Here, we have 2 sensors
Step20: You can retrieve all the existing registers of a sensor
Step21: and grab and display an image from the camera
Step22: As with the motors, sensor values are automatically synchronised in the background with the physical sensor. If you run the previous instructions again, you will get a more recent image.
Step23: High-level behaviours
The Poppy Ergo Jr robot comes with a set of predefined behaviours. These can be specific postures, such as the rest posture - the rest_posture used at the beginning - or a dance, ...
You can find the exhaustive list using the following instruction, which lists the primitives, i.e. the behaviours
Step24: These behaviours (or primitives in "Poppy terminology") can be started, stopped, paused, etc.
Step25: You can make the Poppy Ergo Jr dance for 10 seconds | Python Code:
%pylab inline
from __future__ import print_function
Explanation: Translation of the Discover your Poppy Ergo Jr notebook by Georges Saliba, under a CC BY SA license
Discover your Poppy Ergo Jr
This notebook, which lets you insert code to operate the robot and comment on it at the same time, will guide you through learning to program the Poppy Ergo Jr in Python.
Instantiate your robot
Access and command the motors
Read sensor values
Get started with higher-level programming
From this point on, your Poppy Ergo Jr robot is connected with the RJ45 cable and your camera is connected as well.
End of explanation
from poppy.creatures import PoppyErgoJr
poppy = PoppyErgoJr()
Explanation: %pylab inline is a Python command that imports the numpy and matplotlib modules. The inline option indicates that Matplotlib figures will be embedded in the notebook itself rather than in a separate graphics window. This is the only possible option here, because your system has no graphical interface.
In particular, it executes the following two instructions
* import numpy as np
* import matplotlib.pyplot as plt
The last instruction imports the print_function submodule from the future module, which handles compatibility issues with future versions of Python.
Instantiate your robot
To start using your robot in Python, you first need to instantiate it.
That is the purpose of the following code.
End of explanation
poppy.rest_posture.start()
Explanation: This creates a Robot object that can be used to access the motors and the sensors. It handles all the low-level communication for you, so you do not need to know the details of the communication protocol to operate a motor. The motors and sensors fields of the Robot are automatically synchronised to match the state of their hardware counterparts.
QUESTIONS
In the line of code above, what is the role of poppy?
PoppyErgoJr() is followed by parentheses; what should you understand from that?
FILL IN THE BLANKS
The code below puts the robot ... into the position called ...
If the robot had been named poppy_bras and the position had been position_leve, what instruction would you have had to send to the robot to ask it to move into that position?
End of explanation
poppy_ergo_jr.motors
Explanation: Accessing the motors
In a Poppy Ergo Jr, each motor is defined as illustrated below:
(Image to be added)
From the Robot object, you can directly retrieve the list of all connected motors with an instruction built as follows:
name_of_your_robot_variable.motors
QUESTIONS
What is the name of the container/variable that represents your robot?
Is the instruction below valid? If not, correct it.
End of explanation
for m in poppy.motors:
print(m.name)
#print("done")
Explanation: As you can see, you get a list of all the motors of your Robot object.
You can then retrieve all the motor names
In Python, one possible syntax for the for loop is the following:
for iterator in a_list:
*body of the for loop*
What is the iterator in the loop below? Describe how it works.
What is the list? Write it out explicitly.
End of explanation
poppy.m1
Explanation: EXERCISE:
If, in the last line, you remove the # character, the instruction print("terminé") will be executed. How many times will the string "terminé" be displayed?
The instructions repeated inside the loop must share the same indentation, or be inside a structure at the same indentation level.
Make "terminé" be displayed after each motor name; one possible version is sketched just below.
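For reference, a minimal sketch of one possible answer (same poppy object as above): keeping the print at the same indentation level as print(m.name) makes it run once per loop iteration, i.e. after every motor name.
for m in poppy.motors:
    print(m.name)
    print("terminé")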
You can also address each motor by its name (m1, m2, etc.) with an instruction of the form robot_container.motor_name,
as in the example below.
End of explanation
poppy.m1.present_position
Explanation: Reading motor values
From the motor object you can access its various registers. The main ones are:
present_position: returns the current position of the motor in degrees
present_speed: the current speed of the motor in degrees per second
present_load: the workload of the motor (as a percentage of the maximum load)
present_temperature: the temperature of the motor in degrees Celsius
angle_limit: the reachable limits of the motor (in degrees)
They can be accessed directly.
Possible syntax: robot_name.motor_name.register
QUESTION
In the following instruction, which register of which motor of which robot is requested? What should this instruction return?
End of explanation
[m.present_position for m in poppy.motors]
Explanation: You can get the current position of all the motors with the instruction below.
QUESTIONS
Why are there square brackets?
What are all the values that the iterator m will take?
What is the structure of poppy.motors?
End of explanation
poppy.
Explanation: It is important to understand that "poppy.m1.present_position" is automatically updated with the current position of the real motor (at 50Hz).
As with the other registers, the refresh rate is not the same for all of them and depends on how important the value is. For example, the temperature is refreshed at 1Hz since it does not vary very quickly.
Note
When you press the Tab key after typing a ., you can see the list of registers for an object.
Try it in the next code cell and observe.
You therefore do not need to memorize the various existing registers or their names.
End of explanation
poppy.m1.goal_position = 20
Explanation: Commanding the motors
In addition to the registers presented above, there are others used to send instructions/commands to the robot. For example, the motor position is split into two distinct registers.
Read-only: the motor's present_position
Read-write: goal_position, which sends the motor a target position that it will try to reach.
To move a motor to a new position, you can write:
End of explanation
# TO COMPLETE
poppy.m1.goal_position =
Explanation: QUESTIONS
What effect will the instruction above have?
In which unit is 20 expressed?
How can you make the motor turn the same way in the other direction?
End of explanation
# TO COMPLETE: set the moving_speed register of motor m1 of your poppy robot to 50
poppy. =50
Explanation: In these examples, the motor turns as fast as possible (this is the default behavior). You can change the maximum speed of the motor, which is stored in the motor's moving_speed register:
End of explanation
poppy.m1.goal_position = 90
Explanation: Now the m1 motor cannot go faster than 50 degrees per second. Make it move again to see the difference.
End of explanation
poppy.m6.compliant = True
Explanation: The main registers are:
goal_position: target position in degrees
moving_speed: the maximum reachable speed in degrees per second
compliant: explained below
The Dynamixel servo motors have two modes:
stiff: the normal motor mode, in which they can be controlled.
compliant: a mode in which the motors can be moved freely by hand. This mode is particularly useful for physical human-robot interaction.
You can switch them from one mode to the other using the compliant register. Since this register can only take two values, it is a boolean that can only be True or False.
For example, you can put motor m6 into compliant mode via:
End of explanation
poppy.m6.compliant = False
Explanation: You should now be able to move this motor by hand. For example, this will be useful for programming your robot by demonstration (see the corresponding notebook); a rough sketch of that idea follows below.
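A possible sketch of record-by-demonstration, given as an illustration only (it is an assumption, not the code of that notebook): while the motor is compliant, sample its position at regular intervals.
import time
recorded = []
for _ in range(50):                       # record for about 5 seconds
    recorded.append(poppy.m6.present_position)
    time.sleep(0.1)
# the recorded positions could later be replayed through goal_position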
And to put it back into stiff mode:
End of explanation
import time
for m in poppy.motors:
time.sleep(0.5)
m.led = 'yellow'
time.sleep(1.0)
m.led = 'off'
Explanation: Controlling the motor LEDs
The XL-320 motors used in the Poppy Ergo Jr have a small colored LED. You can change its color using pypot. This is useful to make your robot feel more alive, customized, ...
Each motor has a led register, to which you can assign a color such as 'green' or 'red' from the available colors.
QUESTION
How can you make the LED of motor m3 of the poppy robot green?
Assigning the value 'off' to this register turns the LED off.
QUESTION
Turn off the LED of motor m3 of the poppy robot.
QUESTION
Before running it, can you describe the effect of this set of instructions?
How could you light every other LED in red and every other LED in green, in that order? (One possible sketch is given just below.)
In Python, the syntax of *if ... then ... else ...* is the following:
if (condition):
    instructions of the "then" branch
else:
    instructions of the "else" branch
rest of the program's instructions ...
It is the indentation that structures the program.
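For reference, a minimal sketch of one possible answer (same poppy object; enumerate is standard Python):
for i, m in enumerate(poppy.motors):
    if i % 2 == 0:
        m.led = 'red'      # even positions: red
    else:
        m.led = 'green'    # odd positions: green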
End of explanation
from pypot.dynamixel.conversion import XL320LEDColors
print(list(XL320LEDColors))
Explanation: You can find out all the available colors with the following command:
End of explanation
import cv2
# to use the camera
%matplotlib inline
import matplotlib.pyplot as plt
from hampy import detect_markers
# to detect markers
img = poppy.camera.frame
plt.imshow(img)
# What does the img container hold?
# What does plt.imshow(img) mean? What other command could replace it?
Explanation: Reading sensors
End of explanation
poppy.sensors
Explanation: Reading sensors works exactly the same way as reading your robot's registers. You can access your sensors via:
End of explanation
poppy.camera
Explanation: Here, we have 2 sensors:
* a camera
* a marker detector
You can access them by their name:
End of explanation
poppy.camera.registers
Explanation: You can retrieve all the existing registers of a sensor:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
img = poppy.camera.frame
plt.imshow(img)
Explanation: and retrieve and display an image from the camera:
The first two lines import the library that handles the image.
img is a container holding the image captured by the poppy's camera.
Line 4 displays the image contained in the img container.
End of explanation
plt.imshow(poppy.camera.frame)
Explanation: As with the motors, the sensor values are automatically synchronized in the background with the physical sensor. If you run the previous instructions again, you will get a more recent image.
End of explanation
[p.name for p in poppy.primitives]
Explanation: High-level behaviors
The Poppy Ergo Jr robot comes with a set of predefined behaviors. These can be specific postures, such as the rest posture - rest_posture used at the beginning - or a dance, ...
You can find the full list with the following instruction, which lists the primitives or behaviors:
End of explanation
poppy.tetris_posture.start()
Explanation: These behaviors (or primitives in "Poppy terminology") can be started, stopped, paused, etc.
End of explanation
import time
poppy.dance.start()
time.sleep(10)
poppy.dance.stop()
Explanation: You can make the Poppy Ergo Jr dance for 10 seconds:
End of explanation |
12,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of DOV search methods for lithologische beschrijvingen
Use cases
Step1: Get information about code base
Step2: The cost is an arbitrary attribute to indicate if the information is retrieved from a wfs query (cost = 1),
or from an xml (cost = 10)
Step3: Try-out of use cases
Select interpretations in a bbox
Step4: Select interpretations in a bbox with selected properties
Step5: The property feature methodes listed above are available from the owslib module. These were not adapted for use in pydov.
Step6: Select interpretations in a municipality
Step7: Visualize results
Using Folium, we can display the results of our search on a map. | Python Code:
%matplotlib inline
import os, sys
import inspect
import pydov
Explanation: Example of DOV search methods for lithologische beschrijvingen
Use cases:
Select records in a bbox
Select records in a bbox with selected properties
Select records in a municipality
Get records using info from wfs fields, not available in the standard output dataframe
End of explanation
from pydov.search.interpretaties import LithologischeBeschrijvingenSearch
ip_litho = LithologischeBeschrijvingenSearch()
# information about the LithologischeBeschrijvingen type (in Dutch):
ip_litho.get_description()
# information about the available fields for a LithologischeBeschrijvingen object
fields = ip_litho.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
# print information for a certain field
fields['beschrijving']
Explanation: Get information about code base
End of explanation
# if an attribute can have several values, these are listed under 'values', e.g. for 'Type_proef':
fields['Type_proef']
Explanation: The cost is an arbitrary attribute indicating whether a field is retrieved from the WFS query (cost = 1)
or from the detail XML document (cost = 10), which is slower because the XML has to be downloaded separately.
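A quick way to see which fields need the slower XML download is to filter on this attribute (a small sketch reusing the fields dictionary from above; it assumes every field dict carries a 'cost' key):
xml_fields = [f['name'] for f in fields.values() if f.get('cost', 1) > 1]
print(xml_fields)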
End of explanation
from pydov.util.location import Within, Box
# Get all lithological descriptions in a bounding box (llx, lly, ulx, uly)
# the pkey_boring link is not available below, but is in the df
df = ip_litho.search(location=Within(Box(152145, 204930, 153150, 206935)))
df = df[df.beschrijving.notnull()]
df.head()
Explanation: Try-out of use cases
Select interpretations in a bbox
End of explanation
# list available query methods
methods = [i for i,j in inspect.getmembers(sys.modules['owslib.fes'],
inspect.isclass)
if 'Property' in i]
methods
from owslib.fes import PropertyIsGreaterThanOrEqualTo
Explanation: Select interpretations in a bbox with selected properties
End of explanation
# Get lithological descriptions in a bounding box, filtered on interpretation reliability
from owslib.fes import PropertyIsEqualTo
# the propertyname can be any of the fields of the lithological descriptions object that belong to the wfs source
# the literal is always a string, no matter what its definition is in the boring object (string, float...)
query = PropertyIsGreaterThanOrEqualTo(
propertyname='betrouwbaarheid_interpretatie', literal='goed')
df = ip_litho.search(location=Within(Box(153145, 206930, 153150, 206935)),
query=query
)
df.head()
Explanation: The property filter methods listed above come from the owslib module; pydov does not wrap them, so you import and use them directly from owslib.
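These filters can also be combined. A small sketch (an illustration only, assuming the same ip_litho object as above) joining two conditions with owslib's And:
from owslib.fes import And, PropertyIsEqualTo, PropertyIsGreaterThanOrEqualTo
query = And([PropertyIsEqualTo(propertyname='gemeente', literal='Aartselaar'),
             PropertyIsGreaterThanOrEqualTo(propertyname='betrouwbaarheid_interpretatie', literal='goed')])
df = ip_litho.search(query=query)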
End of explanation
query = PropertyIsEqualTo(propertyname='gemeente',
literal='Aartselaar')
df = ip_litho.search(query=query)
df.head()
Explanation: Select interpretations in a municipality
End of explanation
# import the necessary modules (not included in the requirements of pydov!)
import folium
from folium.plugins import MarkerCluster
from pyproj import Transformer
# convert the coordinates to lat/lon for folium
def convert_latlon(x1, y1):
transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True)
x2,y2 = transformer.transform(x1, y1)
return x2, y2
df['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y']))
# convert to list
loclist = df[['lat', 'lon']].values.tolist()
# initialize the Folium map on the centre of the selected locations, play with the zoom until ok
fmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=12)
marker_cluster = MarkerCluster().add_to(fmap)
for loc in range(0, len(loclist)):
# limit marker size for folium (:10)
folium.Marker(loclist[loc], popup=df['beschrijving'][loc][:10]).add_to(marker_cluster)
fmap
Explanation: Visualize results
Using Folium, we can display the results of our search on a map.
End of explanation |
12,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing fast XGM data from two simultaneous recordings
Here we will look at XGM data that was recorded by the X-ray photon diagnostics group at the same short time interval, but at different locations of the EuXFEL-SASE. We will compare an XGM in SASE1 (XTD2) to another one in SASE3 (XTD10). These data were stored in two different runs, belonging to two different proposals even.
Conceptually, this section makes use of the data-object format xarray.DataArray.
Step1: SASE1
Load the SASE1 run
Step2: We are interested in fast, i.e. pulse-resolved data from the instrument source SA1_XTD2_XGM/DOOCS
Step3: We are particularly interested in data for quantity "intensityTD". The xarray DataArray class is suited for work with axis-labeled data, and the karabo_data method get_array() serves the purpose of shaping a 2D array of that type from pulse-resolved data (which is originally stored "flat" in terms of pulses
Step4: Next, we will plot a portion of the data in two dimensions, taking the first 1500 trains for the x-Axis and the first 30 pulses per train for the y-Axis (1500, 30). Because the Matplotlib convention takes the slow axis to be y, we have to transpose to (30, 1500)
Step5: The pattern tells us what was done in this experiment
Step6: Accordingly for the odd "off" pulses
Step7: Now we can calculate the ratio of averages for every train - data types like numpy ndarray or xarray DataArray may be just divided "as such", a shortcut notation for dividing every corresponding element - and plot.
Step8: Moreover, the relative error of this ratio can be calculated by multiplicative error propagation as the square root of the sum of squared relative errors (enumerator and denominator), and from it the absolute error. The Numpy functions "sqrt" and "square" applied to array-like structures perform these operations element-wise, so the entire calculation can be conveniently done using the arrays as arguments, and we obtain individual errors for every train in the end.
Step9: We can as well plot the suppression ratio values with individual error bars according to the respective absolute error. Here, we restrict ourselves to the first 50 trains for clarity
Step10: Finally, we draw a histogram of suppression ratio values
Step11: We see that there is a suppression of signal from odd pulses to approximately 4% of the intensity of even pulses.
SASE3
We repeat everything for the second data set from the different run - SASE3
Step12: The difference here is that the selection scheme (indexing and slicing) shifts by one with respect to SASE1 data
Step13: The suppression ratio calculation and its plot
Step14: The error calculation with (selective) plot
Step15: The histogram
Step16: Here, suppression of signal for even "off" pulses is to approximately 0.5% of intensity from odd "on" pulses. The "suppression factor" is almost 10 times the value of SASE1. However, the relative error of these values is larger as well, as can be seen in the error-bar plot. For the smaller quantities, it is ~ 100% (!).
Overall comparison of suppression ratio (with error)
We ultimately want a single overall compression ratio with error for both beamlines, to complement the error-bar plots. In order to keep the error calculation simple, we do not average the mean values, but create one mean and standard deviation from a flat array of original values.
Because labeled axes are not required for this purpose, we can afford to move from the xarray.DataArray regime to Numpy array. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from karabo_data import RunDirectory
Explanation: Comparing fast XGM data from two simultaneous recordings
Here we will look at XGM data that was recorded by the X-ray photon diagnostics group at the same short time interval, but at different locations of the EuXFEL-SASE. We will compare an XGM in SASE1 (XTD2) to another one in SASE3 (XTD10). These data were stored in two different runs, belonging to two different proposals even.
Conceptually, this section makes use of the data-object format xarray.DataArray.
End of explanation
sa1_data = RunDirectory('/gpfs/exfel/exp/XMPL/201750/p700000/raw/r0008')
sa1_data.info()
Explanation: SASE1
Load the SASE1 run:
End of explanation
sa1_data.keys_for_source('SA1_XTD2_XGM/XGM/DOOCS:output')
Explanation: We are interested in fast, i.e. pulse-resolved data from the instrument source SA1_XTD2_XGM/DOOCS:output.
End of explanation
sa1_flux = sa1_data.get_array('SA1_XTD2_XGM/XGM/DOOCS:output', 'data.intensityTD')
print(sa1_flux)
Explanation: We are particularly interested in data for quantity "intensityTD". The xarray DataArray class is suited for work with axis-labeled data, and the karabo_data method get_array() serves the purpose of shaping a 2D array of that type from pulse-resolved data (which is originally stored "flat" in terms of pulses: there is one dimension of N(train) x N(pulse) values in HDF5, and the same number of train and pulse identifiers for reference).
The unique train identifier values are taken as coordinate values ("labels").
End of explanation
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
image = ax.imshow(sa1_flux[:1500, :30].transpose(), origin='lower', cmap='inferno')
ax.set_title('SASE1 XTD2 XGM intensity (fast)')
fig.colorbar(image, orientation='horizontal')
ax.set_xlabel('train index')
ax.set_ylabel('pulseIndex')
ax.set_aspect(15)
Explanation: Next, we will plot a portion of the data in two dimensions, taking the first 1500 trains for the x-Axis and the first 30 pulses per train for the y-Axis (1500, 30). Because the Matplotlib convention takes the slow axis to be y, we have to transpose to (30, 1500):
End of explanation
sa1_mean_on = np.mean(sa1_flux[:, :20:2], axis=1)
sa1_stddev_on = np.std(sa1_flux[:, :20:2], axis=1)
print(sa1_mean_on)
Explanation: The pattern tells us what was done in this experiment: the lasing scheme was set to provide an alternating X-ray pulse delivery within a train, where every "even" electron bunch caused lasing in SASE1 and every "odd" bunch caused lasing in SASE3. This scheme was applied for the first 20 pulses.
Therefore, we see signal only for data at even pulses here (0, 2, ..., 18), throughout all trains, of which 1500 are depicted. The intensity varies somewhat around 2000 units, but for odd pulses it is suppressed and negligibly small.
A relevant measure to judge the efficiency of pulse suppression is the ratio of mean intensity between the odd and even set. The numpy mean method can work with DataArray objects and average over a specified dimension.
We make use of the numpy indexing and slicing syntax with square brackets and a comma to separate axes (dimensions). We specify [:, :20:2] to take every element of the slow axis (trains) and every second pulse up to but excluding # 20. That is, start:end:step = 0:20:2 (the start index 0 is the default and is therefore omitted, and the stop value means the first index beyond the range). We specify axis=1 to explicitly average over that dimension. The result is a DataArray reduced to the "trainId" dimension.
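As a tiny illustration of that slice pattern on a toy array (independent of the XGM data; numpy is already imported above):
import numpy as np
demo = np.arange(24).reshape(2, 12)   # 2 "trains" x 12 "pulses"
print(demo[:, 0:10:2])                # pulses 0, 2, 4, 6, 8 for every train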
End of explanation
sa1_mean_off = np.mean(sa1_flux[:, 1:21:2], axis=1)
sa1_stddev_off = np.std(sa1_flux[:, 1:21:2], axis=1)
print(sa1_mean_off)
Explanation: Accordingly for the odd "off" pulses:
End of explanation
sa1_suppression = sa1_mean_off / sa1_mean_on
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.plot(sa1_suppression.coords['trainId'].values, sa1_suppression)
ax.set_xlabel('train identifier')
ax.ticklabel_format(style='plain', useOffset=False)
plt.xticks(rotation=60)
ax.set_ylabel('suppression')
Explanation: Now we can calculate the ratio of averages for every train - data types like numpy ndarray or xarray DataArray may be just divided "as such", a shortcut notation for dividing every corresponding element - and plot.
End of explanation
sa1_rel_error = np.sqrt(np.square(sa1_stddev_off / sa1_mean_off) + np.square(sa1_stddev_on / sa1_mean_on))
sa1_abs_error = sa1_rel_error * sa1_suppression
Explanation: Moreover, the relative error of this ratio can be calculated by multiplicative error propagation as the square root of the sum of squared relative errors (numerator and denominator), and from it the absolute error. The Numpy functions "sqrt" and "square" applied to array-like structures perform these operations element-wise, so the entire calculation can be conveniently done using the arrays as arguments, and we obtain individual errors for every train in the end.
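In formula form, with $r = \mu_\mathrm{off} / \mu_\mathrm{on}$ the per-train suppression ratio and $\mu$, $\sigma$ the mean and standard deviation of the respective pulse sets:
$$\frac{\Delta r}{r} = \sqrt{\left(\frac{\sigma_\mathrm{off}}{\mu_\mathrm{off}}\right)^2 + \left(\frac{\sigma_\mathrm{on}}{\mu_\mathrm{on}}\right)^2}, \qquad \Delta r = r \cdot \frac{\Delta r}{r}$$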
End of explanation
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.errorbar(sa1_suppression.coords['trainId'].values[:50], sa1_suppression[:50], yerr=sa1_abs_error[:50], fmt='ro')
ax.set_xlabel('train identifier')
ax.ticklabel_format(style='plain', useOffset=False)
plt.xticks(rotation=60)
ax.set_ylabel('suppression')
Explanation: We can as well plot the suppression ratio values with individual error bars according to the respective absolute error. Here, we restrict ourselves to the first 50 trains for clarity:
End of explanation
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
_ = ax.hist(sa1_suppression, bins=50)
ax.set_xlabel('suppression')
ax.set_ylabel('frequency')
Explanation: Finally, we draw a histogram of suppression ratio values:
End of explanation
sa3_data = RunDirectory('/gpfs/exfel/exp/XMPL/201750/p700000/raw/r0009')
sa3_data.info()
sa3_flux = sa3_data.get_array('SA3_XTD10_XGM/XGM/DOOCS:output', 'data.intensityTD')
print(sa3_flux.shape)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
image = ax.imshow(sa3_flux[:1500, :30].transpose(), origin='lower', cmap='inferno')
ax.set_title('SASE3 XTD10 XGM intensity (fast)')
fig.colorbar(image, orientation='horizontal')
ax.set_xlabel('train index')
ax.set_ylabel('pulseIndex')
ax.set_aspect(15)
Explanation: We see that there is a suppression of signal from odd pulses to approximately 4% of the intensity of even pulses.
SASE3
We repeat everything for the second data set from the different run - SASE3:
End of explanation
sa3_mean_on = np.mean(sa3_flux[:, 1:21:2], axis=1)
sa3_stddev_on = np.std(sa3_flux[:, 1:21:2], axis=1)
print(sa3_mean_on)
sa3_mean_off = np.mean(sa3_flux[:, :20:2], axis=1)
sa3_stddev_off = np.std(sa3_flux[:, :20:2], axis=1)
print(sa3_mean_off)
Explanation: The difference here is that the selection scheme (indexing and slicing) shifts by one with respect to SASE1 data: odd pulses are "on", even pulses are "off". Moreover, while the alternating scheme is upheld to pulse # 19, pulses beyond that exclusively went to SASE3. There is signal up to pulse # 70, which we could see with a wider plotting range (but not done due to emphasis on the alternation).
End of explanation
sa3_suppression = sa3_mean_off / sa3_mean_on
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.plot(sa3_suppression.coords['trainId'].values, sa3_suppression)
ax.set_xlabel('train identifier')
ax.ticklabel_format(style='plain', useOffset=False)
plt.xticks(rotation=60)
ax.set_ylabel('suppression')
Explanation: The suppression ratio calculation and its plot:
End of explanation
sa3_rel_error = np.sqrt(np.square(sa3_stddev_off / sa3_mean_off) + np.square(sa3_stddev_on / sa3_mean_on))
sa3_abs_error = sa3_rel_error * sa3_suppression
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
ax.errorbar(sa3_suppression.coords['trainId'].values[:50], sa3_suppression[:50], yerr=sa3_abs_error[:50], fmt='ro')
ax.set_xlabel('train identifier')
ax.ticklabel_format(style='plain', useOffset=False)
plt.xticks(rotation=60)
ax.set_ylabel('suppression')
Explanation: The error calculation with (selective) plot
End of explanation
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1, 1, 1)
_ = ax.hist(sa3_suppression, bins=50)
ax.set_xlabel('suppression')
ax.set_ylabel('frequency')
Explanation: The histogram:
End of explanation
sa1_on_all = np.array(sa1_flux[:, :20:2]).flatten()
sa1_on_all.shape
sa1_mean_on_overall = np.mean(sa1_on_all)
sa1_stddev_on_overall = np.std(sa1_on_all)
sa1_off_all = np.array(sa1_flux[:, 1:21:2]).flatten()
sa1_off_all.shape
sa1_mean_off_overall = np.mean(sa1_off_all)
sa1_stddev_off_overall = np.std(sa1_off_all)
sa1_suppression_overall = sa1_mean_off_overall / sa1_mean_on_overall
sa1_rel_error_overall = np.sqrt(np.square(sa1_stddev_off_overall / sa1_mean_off_overall) + \
np.square(sa1_stddev_on_overall / sa1_mean_on_overall))
sa1_abs_error_overall = sa1_rel_error_overall * sa1_suppression_overall
print('SA1 suppression ratio =', sa1_suppression_overall, '\u00b1', sa1_abs_error_overall)
sa3_on_all = np.array(sa3_flux[:, 1:21:2]).flatten()
sa3_on_all.shape
sa3_mean_on_overall = np.mean(sa3_on_all)
sa3_stddev_on_overall = np.std(sa3_on_all)
sa3_off_all = np.array(sa3_flux[:, :20:2]).flatten()
sa3_off_all.shape
sa3_mean_off_overall = np.mean(sa3_off_all)
sa3_stddev_off_overall = np.std(sa3_off_all)
sa3_suppression_overall = sa3_mean_off_overall / sa3_mean_on_overall
sa3_rel_error_overall = np.sqrt(np.square(sa3_stddev_off_overall / sa3_mean_off_overall) + \
np.square(sa3_stddev_on_overall / sa3_mean_on_overall))
sa3_abs_error_overall = sa3_rel_error_overall * sa3_suppression_overall
print('SA3 suppression ratio =', sa3_suppression_overall, '\u00b1', sa3_abs_error_overall)
Explanation: Here, suppression of signal for even "off" pulses is to approximately 0.5% of intensity from odd "on" pulses. The "suppression factor" is almost 10 times the value of SASE1. However, the relative error of these values is larger as well, as can be seen in the error-bar plot. For the smaller quantities, it is ~ 100% (!).
Overall comparison of suppression ratio (with error)
We ultimately want a single overall compression ratio with error for both beamlines, to complement the error-bar plots. In order to keep the error calculation simple, we do not average the mean values, but create one mean and standard deviation from a flat array of original values.
Because labeled axes are not required for this purpose, we can afford to move from the xarray.DataArray regime to Numpy array.
End of explanation |
12,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualizations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science
Step1: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network
Step6: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
Step7: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
Step8: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this
Step9: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
Step10: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns.
Step11: Lower layers produce features of lower complexity.
Step12: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
Step13: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
Step14: Let's load some image and populate it with DogSlugs (in case you've missed them).
Step15: Note that results can differ from the Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works | Python Code:
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
Explanation: DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualzations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see GoogLeNet and VGG16 galleries)
embed TensorBoard graph visualizations into Jupyter notebooks
produce high-resolution images with tiled computation (example)
use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the GoogLeNet architecture, trained to classify images into one of 1000 categories of the ImageNet dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow to make these visualizations both efficient to generate and even beautiful. Impatient readers can start with exploring the full galleries of images generated by the method described here for GoogLeNet and VGG16 architectures.
End of explanation
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
# model_fn = 'tensorflow_inception_graph.pb'
#CHANGED THIS TO MY GRAPH
model_fn = 'retrained_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
#HAD TO CHANGE THIS
tf.import_graph_def(graph_def, name='')
#tf.import_graph_def(graph_def, {'input':t_preprocessed})
Explanation: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network:
End of explanation
#HAD TO CHANGE THIS - NOT LAYERS WITH 'IMPORT/' IN NAME
# layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D']
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
#ADDED THIS LINE TO SEE ALL LAYERS PRINTED
print([op.name for op in graph.get_operations()])
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
Strip large constant values from graph_def.
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = bytes("<stripped %d bytes>"%size, 'utf-8')
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
Visualize TensorFlow graph.
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code =
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe =
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
.format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
Explanation: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
End of explanation
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
#SWITCHED LAYER TO THE FINAL LAYER OF MY GRAPH
#layer = 'mixed4d_3x3_bottleneck_pre_relu'
layer = 'final_result:0'
channel = 0 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
# return graph.get_tensor_by_name("import/%s:0"%layer)
#TRYING TO RESIZE THE TENSOR TO GET IT TO WORK, BUT JUST GUESSING HONESTLY
# print(tf.shape(tf.reshape(graph.get_tensor_by_name(layer), [-1,-1,-1,-1], name=None)))
# print(graph.get_tensor_by_name(layer))
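    # note: the tf.reshape to a fixed [2,1,1,1] below only succeeds if the tensor has exactly
    # 2 elements (e.g. a two-class 'final_result'); the commented-out raw return keeps the
    # native 4-D shape that the [:, :, :, channel] indexing further down expects for conv layers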
return tf.reshape(graph.get_tensor_by_name(layer), [2,1,1,1], name=None)
# return graph.get_tensor_by_name(layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:][:,:,:,channel])
Explanation: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
End of explanation
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
Explanation: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
End of explanation
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
Explanation: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
End of explanation
render_lapnorm(T(layer)[:,:,:,65])
Explanation: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns.
End of explanation
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
Explanation: Lower layers produce features of lower complexity.
End of explanation
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
Explanation: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
End of explanation
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
Explanation: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
End of explanation
img0 = PIL.Image.open('eschertest.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
Explanation: Let's load some image and populate it with DogSlugs (in case you've missed them).
End of explanation
render_deepdream(T(layer)[:,:,:,139], img0)
Explanation: Note that results can differ from the Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
End of explanation |
12,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
http
Step1: p
Step2: Train - Test | Python Code:
from __future__ import division
from os import path, remove
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from sklearn.metrics import r2_score
from fastdtw import fastdtw
from collections import OrderedDict
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from skopt.space.space import Integer, Real
from skopt import gp_minimize
from skopt.plots import plot_convergence
import pickle
import inspect
import dill
import sys
from pandas import read_csv
from pandas import datetime
from pandas.plotting import autocorrelation_plot
from statsmodels.graphics.tsaplots import plot_pacf
from statsmodels.tsa.arima_model import ARIMA
from pandas import DataFrame
from sklearn.metrics import mean_squared_error
%matplotlib inline
data_path = '../../../../../Dropbox/data'
score_dic_filepath = data_path + "/arima/scoredic_testing.npy"
test = np.load(score_dic_filepath)[()]
len(test.keys())
ph_data_path = data_path + '/price_history'
npz_path = ph_data_path + '/price_history_mobattrs_date_dp_60to30_62020_trimmed'
npz_train = npz_path + '_train.npz'
assert path.isfile(npz_train)
path.abspath(npz_train)
dic = np.load(npz_train)
dic.keys()
dic = {
"inputs": dic['inputs'][:, :, 0],
"targets": dic['targets'],
#"full": np.hstack((dic['inputs'][:, :, 0], dic['targets'])),
}
for key, val in dic.iteritems():
print key, val.shape
#fix targets
fulls = np.array([np.concatenate( (item_in, item_tar + item_in[-1]) )
for item_in, item_tar in zip(dic['inputs'], dic['targets'])])
fulls.shape
sns.tsplot(fulls[100])
autocorrelation_plot(fulls[100])
plot_pacf(fulls[100])
plt.show()
Explanation: http://machinelearningmastery.com/arima-for-time-series-forecasting-with-python/
http://ucanalytics.com/blogs/arima-models-manufacturing-case-study-example-part-3/
https://stats.stackexchange.com/questions/23036/estimating-same-model-over-multiple-time-series
The idea with ARIMA models is that the final residual should look like white noise; otherwise, there is still juice (information) left in the data to extract.
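One concrete way to make that check (a sketch, assuming a fitted statsmodels result object such as the model_fit created below) is a Ljung-Box test on the residuals; p-values well above 0.05 are consistent with white noise:
from statsmodels.stats.diagnostic import acorr_ljungbox
print(acorr_ljungbox(model_fit.resid, lags=20))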
End of explanation
target_len = 30
XX = fulls[:, :-target_len]
XX.shape
YY = fulls[:, -target_len:]
YY.shape
from arima_estimator import ArimaEstimator
ae = ArimaEstimator(p_auto_regression_order=5, d_integration_level=2, q_moving_average=5, easy_mode=True)
ae
#duration estimation in hours!
55800 * (1.09 /12) * 30 / 3600
15*60 / (13.2 / 3) #secs each
import warnings
%%time
# with warnings.catch_warnings():
# warnings.filterwarnings("ignore")
# ae.fit(inputs=XX[:3], targets=YY[:3])
from collections import OrderedDict
parameters = OrderedDict([
('p_auto_regression_order', range(6)), #0-5
('d_integration_level', range(3)), #0-2
('q_moving_average', range(6)), #0-5
])
def cartesian_coord(*arrays):
grid = np.meshgrid(*arrays)
coord_list = [entry.ravel() for entry in grid]
points = np.vstack(coord_list).T
return points
cart = cartesian_coord(*parameters.values())
cart.shape
108 * 15 / 60 / 2 #in hours!
count_inds=100
random_state = np.random.RandomState(seed=16011984)
random_inds = random_state.choice(range(len(XX)), count_inds, replace=False)
xx = XX[random_inds]
yy = YY[random_inds]
filepath = 'scoredic.npy'
%%time
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
for pp, dd, qq in cart:
scoredic = np.load(filepath)[()] if path.isfile(filepath) else OrderedDict()
cur_tuple = (pp, dd, qq)
if cur_tuple in scoredic:
continue
else:
ae = ArimaEstimator(p_auto_regression_order=pp, d_integration_level=dd, q_moving_average=qq,
easy_mode=True)
scoredic[cur_tuple] = ae.fit(inputs=xx, targets=yy).score(xx, yy)
np.save(filepath, scoredic)
from sklearn.model_selection import GridSearchCV
# Pick hyperparameters
# There is NO separation of training and testing set because there is no transfer of knowledge from the training
# that could be useful for validation
# Run Arima prediction steps for all instances of training dataset and get a fastdtw score from each case
# Get the mean of the fastdtw and this is the CV score
#
# Do this for all possible parameters
# the cell that defined 'model' is missing here; as an assumption for illustration we take a
# single price-history curve (e.g. the one plotted above) as the univariate series to model
series = pd.Series(fulls[100])
model = ARIMA(series, order=(5, 1, 0))
model_fit = model.fit(disp=0)
model_fit.summary()
residuals = DataFrame(model_fit.resid)
residuals.plot()
plt.show()
residuals.plot(kind='kde')
plt.show()
residuals.describe()
Explanation: p: The number of lag observations included in the model, also called the lag order.
d: The number of times that the raw observations are differenced, also called the degree of differencing.
q: The size of the moving average window, also called the order of moving average.
End of explanation
XX = series.values
XX.shape
#splitting
train_size = int(len(XX)*2/3)
train_size
train_set = XX[:train_size]
test_set = XX[train_size:]
train_set.shape, test_set.shape
train_set[0]
history = list(train_set)
len(history)
predictions = []
for tt in range(len(test_set)):
output = ARIMA(history, order=(5,1,0)).fit(disp=0).forecast()
    y_hat = output[0][0]  # forecast() returns (forecast, stderr, conf_int); take the scalar prediction
predictions.append(y_hat)
observation = test_set[tt]
#history.append(observation)
history.append(y_hat)
print "predicted: {}, expected: {}".format(y_hat, observation)
error = mean_squared_error(predictions, test_set)
error
plt.figure(figsize=(15,6))
plt.plot(predictions, label='preds')
plt.plot(test_set, label='test set')
plt.legend()
plt.show()
Explanation: Train - Test
End of explanation |
12,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fill Database WaveForm Headers
1) Import the libraries we will use
Step1: 2) Read the file with the WaveForms we are going to use
Step2: 3) Clean the stray characters and split the string pXXNNNN-YYYY-MM-DD-hh-mm, where XXNNNN is the patient's unique identifier SUBJECT_ID and YYYY-MM-DD-hh-mm is the date of the patient's stay
Step3: 4) Read the waveform header to obtain the patient information we will store
Step4: Add the subject_id and the record date to the fields
Step5: Convert the fields into column and value lists
Step6: Connect to the postgres database where we will store the data
Step7: Create the table where the data will be stored
Step8: Check whether the record already exists
Step9: Insert the data
Step10: Commit
Step11: Close the connection
import urllib.request
import wfdb
import psycopg2
from psycopg2.extensions import AsIs
Explanation: Fill Database WaveForm Headers
1) Import the libraries we will use
End of explanation
target_url = "https://physionet.org/physiobank/database/mimic2wdb/matched/RECORDS-waveforms"
data = urllib.request.urlopen(target_url) # it's a file like object and works just like a file
lines = data.readlines();
line = str(lines[2])
line
Explanation: 2) Read the file with the WaveForms we are going to use
End of explanation
line = line.replace('b\'','').replace('\'','').replace('\\n','')
splited = line.split("/")
carpeta,onda = line.split("/")
subject_id = carpeta.replace('s','')
recordDate = onda.replace(carpeta+"-","")
print("subject_id: ",subject_id)
print("recordDate: ",recordDate)
print("onda: ",onda)
print("carpeta: ",carpeta)
Explanation: 3) Clean the stray characters and split the string pXXNNNN-YYYY-MM-DD-hh-mm, where XXNNNN is the patient's unique identifier SUBJECT_ID and YYYY-MM-DD-hh-mm is the date of the patient's stay
End of explanation
try:
sig, fields = wfdb.srdsamp(onda,pbdir='mimic2wdb/matched/'+carpeta, sampto=1)
print(fields)
except Exception as inst:
print("onda vacia")
Explanation: 4) Read the waveform header to obtain the patient information we will store
End of explanation
fields['subject_id'] = subject_id
fields['recordDate'] = recordDate
fields['database'] = "mimic2"
Explanation: Add the subject_id and the record date to the fields
End of explanation
columns = fields.keys()
values = [fields[column] for column in columns]
print(columns)
Explanation: Turn the fields dictionary into column and value lists
End of explanation
conn = psycopg2.connect("dbname=mimic")
cur = conn.cursor()
Explanation: Connect to the postgres database where we will store the data
End of explanation
table = "waveformFields"
#cur.execute("DROP TABLE "+table)
cur.execute('''CREATE TABLE IF NOT EXISTS waveformFields
(id serial PRIMARY KEY,
comments character varying(255)[],
fs integer, signame character varying(255)[],
units character varying(255)[],
subject_id integer,
recordDate character varying(255),
database character varying(50))''')
Explanation: Create the table where the data will be stored
End of explanation
def track_not_exists(cur, subject_id,recordDate,database):
select_stament = 'select id from waveformFields where subject_id= %s and recorddate = %s and database = %s'
cur.execute(select_stament,(int(subject_id),recordDate,database))
return cur.fetchone() is None
def track_subject(cur,subject_id):
select_stament= 'SELECT id FROM subjectwords WHERE subject_id= %s'
cur.execute(select_stament,(int(subject_id),))
return cur.fetchone() is None
def patient_dead(cur,subject_id):
select_stament= 'SELECT dod FROM patients WHERE subject_id= %s'
cur.execute(select_stament,(int(subject_id),))
row = cur.fetchone()
    if row is None or row[0] is None:
        return False
    else:
        print("dod: {}".format(row[0]))
return True
notExist = False
if track_not_exists(cur,subject_id,recordDate,"mimic2") and track_subject(cur,subject_id) and patient_dead(cur,subject_id) :
notExist = True
print("not exist %s " % subject_id)
Explanation: Check whether the record already exists
End of explanation
insert_statement = 'insert into '+table+' (%s) values %s'
print(cur.mogrify(insert_statement, (AsIs(','.join(columns)), tuple(values))))
if notExist:
cur.execute(insert_statement, (AsIs(','.join(columns)), tuple(values)))
Explanation: Insert the data
End of explanation
conn.commit()
Explanation: Commit the transaction
End of explanation
conn.close()
Explanation: Close the connection
End of explanation |
12,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Predicting house prices using Linear Regression
(See Getting Started with SFrames for setup instructions)
Step2: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step3: Exploring the data
The house price is correlated with the number of square feet of living space.
Step4: Create a simple regression model of sqft_living to price
Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step5: Build the regression model using only sqft_living as a feature
Step6: Evaluate the simple model
Step7: RMSE of about \$255,170!
Visualize the prediction
Step8: Above
Step9: Explore other features in the data
To build a more elaborate model, we will explore using more features.
Step10: Pull the bar at the bottom to view more of the data.
98039 is the most expensive zip code.
Build a regression model with more features
Step11: Comparing the results of the simple model with adding more features
Step12: The RMSE goes down from \$255,170 to \$179,508 with more features.
Apply learned models to predict prices of 3 houses
The first house we will use is considered an "average" house in Seattle.
Step13: <img src="http
Step14: In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.
Prediction for a second, fancier house
We will now examine the predictions for a fancier house.
Step15: <img src="https
Step16: In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house.
Last house, super fancy
Our last house is a very large one owned by a famous Seattleite.
Step17: <img src="https
Step18: The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.) | Python Code:
import os
from urllib import urlretrieve
import graphlab
# Limit number of worker processes. This preserves system memory, which prevents hosted notebooks from crashing.
graphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4)
URL = 'https://d396qusza40orc.cloudfront.net/phoenixassets/home_data.csv'
def get_data(filename='home_data.csv', url=URL, force_download=False):
    """Download and cache the King County home sales data.

    Parameters
    ----------
    filename: string (optional)
        location to save the data
    url: string (optional)
    force_download: bool (optional)
        if True, force redownload of data

    Returns
    -------
    data: graphlab SFrame. Similar to a pandas DataFrame,
        but with capacity for faster analysis of larger data sets
    """
if force_download or not os.path.exists(filename):
urlretrieve(url, filename)
sf = graphlab.SFrame('home_data.csv')
return sf
#sales = get_data()
#sales.head()
Explanation: Predicting house prices using Linear Regression
(See Getting Started with SFrames for setup instructions)
End of explanation
sales = graphlab.SFrame('home_data.gl')
sales
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
graphlab.canvas.set_target('ipynb')
sales.show(view='Scatter Plot', x='sqft_living', y='price')
Explanation: Exploring the data
The house price is correlated with the number of square feet of living space.
End of explanation
train_data, test_data = sales.random_split(.8, seed=0)
Explanation: Create a simple regression model of sqft_living to price
Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'])
Explanation: Build the regression model using only sqft_living as a feature
End of explanation
print test_data['price'].mean()
print sqft_model.evaluate(test_data)
Explanation: Evaluate the simple model
End of explanation
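# Hedged sketch: recompute the RMSE reported by evaluate() by hand, assuming the
# SArray returned by predict() can be iterated alongside the actual prices.
preds = sqft_model.predict(test_data)
errors = [(p - a) ** 2 for p, a in zip(preds, test_data['price'])]
mse = float(sum(errors)) / len(errors)
print('manual RMSE: %.2f' % (mse ** 0.5))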
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(test_data['sqft_living'], test_data['price'], '.',
test_data['sqft_living'], sqft_model.predict(test_data), '-')
Explanation: RMSE of about \$255,170!
Visualize the prediction
End of explanation
sqft_model.coefficients
Explanation: Above: blue dots are original data, green line is the prediction from the simple regression.
Below: we can view the learned regression coefficients.
End of explanation
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
sales[my_features].show()
sales.show(view='BoxWhisker Plot', x='bathrooms', y='price')
Explanation: Explore other features in the data
To build a more elaborate model, we will explore using more features.
End of explanation
my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features,validation_set=None)
print my_features
Explanation: Pull the bar at the bottom to view more of the data.
98039 is the most expensive zip code.
Build a regression model with more features
End of explanation
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
Explanation: Comparing the results of the simple model with adding more features
End of explanation
house1 = sales[sales['id'] =='5309101200']
house1
Explanation: The RMSE goes down from \$255,170 to \$179,508 with more features.
Apply learned models to predict prices of 3 houses
The first house we will use is considered an "average" house in Seattle.
End of explanation
print house1['price']
print sqft_model.predict(house1)
print my_features_model.predict(house1)
Explanation: <img src="http://info.kingcounty.gov/Assessor/eRealProperty/MediaHandler.aspx?Media=2916871">
End of explanation
house2 = sales[sales['id']=='1925069082']
house2
Explanation: In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.
Prediction for a second, fancier house
We will now examine the predictions for a fancier house.
End of explanation
print sqft_model.predict(house2)
print my_features_model.predict(house2)
Explanation: <img src="https://ssl.cdn-redfin.com/photo/1/bigphoto/302/734302_0.jpg">
End of explanation
bill_gates = {'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]}
Explanation: In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house.
Last house, super fancy
Our last house is a very large one owned by a famous Seattleite.
End of explanation
print my_features_model.predict(graphlab.SFrame(bill_gates))
Explanation: <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Bill_gates%27_house.jpg/2560px-Bill_gates%27_house.jpg">
End of explanation
Fancy_zip = sales[sales['zipcode']=='98039']
Fancy_zip['price'].mean()
sqftover2000 = sales[sales['sqft_living'] > 2000]
sqftover2000under4000 = sqftover2000[sqftover2000['sqft_living'] < 4000]
import numpy as np
advanced_features = [
'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
'condition', # condition of house
'grade', # measure of quality of construction
'waterfront', # waterfront property
'view', # type of view
'sqft_above', # square feet above ground
'sqft_basement', # square feet in basement
'yr_built', # the year built
'yr_renovated', # the year renovated
'lat', 'long', # the lat-long of the parcel
'sqft_living15', # average sq.ft. of 15 nearest neighbors
'sqft_lot15', # average lot size of 15 nearest neighbors
]
advanced_features_model = graphlab.linear_regression.create(train_data, target='price', features=advanced_features, validation_set=None)
print my_features_model.evaluate(test_data)['rmse'] - advanced_features_model.evaluate(test_data)['rmse']
print advanced_features_model.predict(house1)
Explanation: The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.)
End of explanation |
12,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis
Jose Manuel Vera Aray
Import libraries to be used
Step1: Import training data
Step2: Separate tweets into two sets
Step3: Split the data into the training set and test set for crossvalidation
Step4: Create a pipeline for each classifier algorithm
.....vectorizer => transformer => classifier
Step5: Parameter tuning
We will use GridSearchCV, which exhaustively considers all parameter combinations, to find the best model for the data. A search consists of
Step6: Set models and do the search of the parameter
Step7: Set the classifiers to have the best combination of parameters
Step8: Train algorithms
Step9: Predict on the test set and check metrics
Step10: Perform k-fold
Step12: Plot Confusion Matrix
Step13: Print the metrics of the performance of the algorithms
Step14: ROC Curves
Step15: Predict on unclassified data
Step16: Predict sentiment
Step18: Categorize data by political party
Step19: Put data into a dataframe
Step20: Get some information about the predictions | Python Code:
import numpy as np
import itertools
import math
import pandas as pd
import csv
import time
from sklearn.cross_validation import train_test_split, KFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import learning_curve
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import make_scorer, accuracy_score, precision_score, recall_score, f1_score, roc_curve, auc, confusion_matrix
from sklearn.grid_search import GridSearchCV
from sklearn.utils import shuffle
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid", color_codes=True)
Explanation: Sentiment Analysis
Jose Manuel Vera Aray
Import libraries to be used
End of explanation
with open("/resources/data/classified_tweets.txt", "r",encoding="utf8") as myfile:
data = myfile.readlines()
Explanation: Import training data
End of explanation
X=[]
y=[]
for x in data:
X.append(x[1:])
y.append(x[0])
Explanation: Separate tweets into two sets: tweet data and its respective classification
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=42, test_size = 0.3)
Explanation: Split the data into the training set and test set for crossvalidation
End of explanation
#Logistic Regression
Log_clf=Pipeline([('vect', CountVectorizer(analyzer='word')), ('tfidf', TfidfTransformer()), ('clf', LogisticRegression())])
#Multinomial Naive Bayes
MNB_clf=Pipeline([('vect', CountVectorizer(analyzer='word')), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())])
Explanation: Create a pipeline for each classifier algorithm
.....vectorizer => transformer => classifier
End of explanation
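# Hedged sketch with made-up tweets: fitting one of the pipelines on a tiny toy set
# shows the vectorizer -> tf-idf -> classifier flow end to end before the real run.
toy_X = ["I love this movie", "this was great fun", "I hate waiting", "worst day ever"]
toy_y = ["4", "4", "0", "0"]
toy_clf = Pipeline([('vect', CountVectorizer(analyzer='word')),
                    ('tfidf', TfidfTransformer()),
                    ('clf', MultinomialNB())])
toy_clf.fit(toy_X, toy_y)
print(toy_clf.predict(["I love fun days"]))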
parameters_log = {'vect__ngram_range': [(1, 1), (1, 2)],'tfidf__use_idf': (True, False), 'clf__penalty': ['l1','l2'], 'clf__solver':['liblinear']}
parameters_mnb = {'vect__ngram_range': [(1, 1), (1, 2)],'tfidf__use_idf': (True, False),'clf__alpha': (1,1e-2, 1e-3)}
Explanation: Parameter tuning
We will use GridSearchCV, which exhaustively considers all parameter combinations, to find the best model for the data. A search consists of:
- an estimator (regressor or classifier)
- a parameter space;
- a method for searching or sampling candidates;
- a cross-validation scheme;
- a score function, such as accuracy_score()
Set parameters we will test for each algorithm
End of explanation
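# Hedged sketch: the grids above imply a fixed number of candidate models
# (each refit once per cross-validation fold by GridSearchCV).
from itertools import product
print("logistic grid candidates:", len(list(product(*parameters_log.values()))))
print("naive Bayes grid candidates:", len(list(product(*parameters_mnb.values()))))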
#Set models
acc_scorer = make_scorer(accuracy_score)
gs_log_clf = GridSearchCV(Log_clf, parameters_log, n_jobs=-1, scoring=acc_scorer)
gs_mnb_clf = GridSearchCV(MNB_clf, parameters_mnb, n_jobs=-1, scoring=acc_scorer)
# Grid search of best parameters
print ("-----Tunning of parameters-----")
start = time.time()
gs_log_clf = gs_log_clf.fit(X_train, y_train)
end = time.time()
print ("Logistic Regression -"," Running Time:", end - start,"s")
start = time.time()
gs_mnb_clf= gs_mnb_clf.fit(X_train, y_train)
end = time.time()
print ("Multinomial Naive Bayes -"," Running Time:", end - start,"s")
Explanation: Set models and do the search of the parameter
End of explanation
Log_clf= gs_log_clf .best_estimator_
MNB_clf= gs_mnb_clf .best_estimator_
Explanation: Set the classifiers to have the best combination of parameters
End of explanation
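# Hedged sketch: after the grid search above has been fitted, the winning settings
# and their cross-validated accuracy can be inspected directly.
print("Logistic Regression best params:", gs_log_clf.best_params_, gs_log_clf.best_score_)
print("Multinomial NB best params:", gs_mnb_clf.best_params_, gs_mnb_clf.best_score_)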
start = time.time()
Log_clf = Log_clf.fit(X_train, y_train)
end = time.time()
print ("Logistic Regression -"," Running Time:", end - start,"s")
start = time.time()
MNB_clf = MNB_clf.fit(X_train, y_train)
end = time.time()
print ("Multinomial Naive Bayes -"," Running Time:", end - start,"s")
Explanation: Train algorithms
End of explanation
predicted_Log = Log_clf.predict(X_test)
predicted_MNB =MNB_clf.predict(X_test)
dec_Log = Log_clf.decision_function(X_test)
dec_MNB =MNB_clf.predict_proba(X_test)
Explanation: Predict on the test set and check metrics
End of explanation
def run_kfold(clf):
#run KFold with 10 folds instead of the default 3
#on the 200000 records in the training_data
kf = KFold(200000, n_folds=10,shuffle=True)
X_new=np.array(X)
y_new=np.array(y)
outcomes = []
fold = 0
for train_index, test_index in kf:
fold += 1
X1_train, X1_test = X_new[train_index], X_new[test_index]
y1_train, y1_test = y_new[train_index], y_new[test_index]
clf.fit(X1_train, y1_train)
predictions = clf.predict(X1_test)
accuracy = accuracy_score(y1_test, predictions)
outcomes.append(accuracy)
print("Fold {0} accuracy: {1}".format(fold, accuracy))
mean_outcome = np.mean(outcomes)
print("Mean Accuracy: {0}".format(mean_outcome))
run_kfold(Log_clf)
run_kfold(MNB_clf)
Explanation: Perform k-fold
End of explanation
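# Hedged alternative sketch: sklearn's cross_val_score gives the same style of
# per-fold accuracies in one call (assuming the pipeline accepts the raw tweet text).
from sklearn.model_selection import cross_val_score
scores = cross_val_score(MNB_clf, X, y, cv=10, scoring='accuracy')
print("Mean accuracy:", scores.mean())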
Log_matrix=confusion_matrix(y_test, predicted_Log)
Log_matrix=Log_matrix[::-1, ::-1]
MNB_matrix=confusion_matrix(y_test, predicted_MNB)
MNB_matrix=MNB_matrix[::-1, ::-1]
def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i,round(cm[i, j],2), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plot_confusion_matrix(Log_matrix, classes=["Positive","Negative"], normalize=True, title='Normalized confusion matrix')
plot_confusion_matrix(MNB_matrix, classes=["Positive","Negative"], normalize=True, title='Normalized confusion matrix')
Explanation: Plot Confusion Matrix
End of explanation
X_test_new_Log=[]
X_test_new_MNB=[]
y_new=[]
Log_list=predicted_Log.tolist()
MNB_list=predicted_MNB.tolist()
for x in Log_list:
X_test_new_Log.append(int(x))
for x in MNB_list:
X_test_new_MNB.append(int(x))
for x in y_test:
y_new.append(int(x))
Log_list_prediction=[1 if x==4 else x for x in X_test_new_Log]
MNB_list_prediction=[1 if x==4 else x for x in X_test_new_MNB]
target_new=[1 if x==4 else x for x in y_new]
print('Metrics Logistic Regression')
print('-------------------------------------')
print("Accuracy:",accuracy_score(target_new, Log_list_prediction))
print("Recall:",recall_score(target_new,Log_list_prediction))
print("Precision:",precision_score(target_new, Log_list_prediction))
print("F1 Score:",f1_score(target_new, Log_list_prediction))
print('' '')
print('Metrics Multinomial Naive Bayes')
print('-------------------------------------')
print("Accuracy:",accuracy_score(target_new, MNB_list_prediction))
print("Recall:",recall_score(target_new,MNB_list_prediction))
print("Precision:",precision_score(target_new, MNB_list_prediction))
print("F1 Score:",f1_score(target_new, MNB_list_prediction))
Explanation: Print the metrics of the performance of the algorithms
End of explanation
predicted_Log_new=[]
y_actual=[]
for x in dec_Log:
predicted_Log_new.append(int(x))
for x in y_test:
y_actual.append(int(x))
Log_list_prediction=[1 if x==4 else x for x in predicted_Log_new]
target_new=[1 if x==4 else x for x in y_actual]
fpr, tpr, thresholds=roc_curve(target_new, Log_list_prediction, pos_label=1)
roc_auc= auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.3f)' % roc_auc )
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Logistic Regression ROC Curve')
plt.legend(loc="lower right")
plt.show()
plt.savefig('Log_roc.jpg')
predicted_MNB_new=[]
y_actual=[]
for x in range(0,60000):
predicted_MNB_new.append(dec_MNB[x][1])
#for x in dec_MNB:
# predicted_MNB_new.append(int(x))
for x in y_test:
y_actual.append(int(x))
#MNB_list_prediction=[1 if x==4 else x for x in predicted_MNB_new]
target_new=[1 if x==4 else x for x in y_actual]
fpr, tpr, thresholds=roc_curve(target_new, predicted_MNB_new)
roc_auc= auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.3f)' % roc_auc )
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Multinomial Naive Bayes ROC Curve')
plt.legend(loc="lower right")
plt.show()
plt.savefig('MNB_roc.jpg')
Explanation: ROC Curves
End of explanation
with open("/resources/data/unclassified_tweets.txt", "r",encoding="utf8") as myfile:
unclass_data = myfile.readlines()
Explanation: Predict on unclassified data
End of explanation
MNB_clf = MNB_clf.fit(X_train, y_train)
predicted_MNB_unclass =MNB_clf.predict(unclass_data)
Explanation: Predict sentiment
End of explanation
def party(tw):
    """
    For the NDP, the hashtags are the candidate's name in various forms and the party's campaign slogan.
    For the Liberals, the hashtags are the candidate's name in various forms and the party's campaign slogan.
    For the Conservatives, the hashtags are the candidate's name, associations related to the party (tcot, ccot),
    the nickname used for them (tory) and the bill introduced by the Conservative government (c51).
    """
tw_clean=tw.split()
hashtags=[]
NDP_list=['tommulcair','mulcair','ndp','tm4pm', 'ready4change','thomasmulcair']
Lib_list=['justin', 'trudeau\'s','lpc','trudeau','realchange','liberal','liberals','justintrudeau','teamtrudeau']
Cons_list=['c51','harper','cpc','conservative', 'tory','tcot','stephenharper','ccot','harper','conservatives']
for x in range(0,len(tw_clean)):
if tw_clean[x].find('#')!= -1:
hashtags.append(tw_clean[x].replace('#',''))
result=''
if hashtags:
for x in hashtags:
if x in NDP_list:
result= 'NDP'
return result
elif x in Lib_list:
result= 'Liberal'
return result
elif x in Cons_list:
result= 'Conservative'
return result
if result=='':
result='Other'
return result
party_af=[]
for x in range(0,len(unclass_data)):
party_af.append(party(unclass_data[x]))
Explanation: Categorize data by political party
End of explanation
predictions=[]
for x in range(0,len(unclass_data)):
predictions.append((unclass_data[x],party_af[x],predicted_MNB_unclass[x]))
tweets=pd.DataFrame(predictions, columns=["Tweet","Party","Classification - MNB"])
tweets.head()
def get_sent(tweet):
classf = tweet
return 'Positive' if classf=='4' else 'Negative'
tweets_clean= tweets[tweets.Party !="Other"]
tweets_clean['Sentiment'] = tweets_clean['Classification - MNB'].apply(get_sent)
tweets_clean.drop(labels=['Classification - MNB'], axis=1, inplace=True)
tweets_clean.head()
sns.countplot(x='Sentiment', hue="Party", data=tweets_clean)
Explanation: Put data into a dataframe
End of explanation
print ("Number of tweets of classfied for each party")
tweets_clean.Party.value_counts().head()
print ("Number of tweets of classfied for each party")
tweets_clean[tweets_clean.Sentiment=='Positive'].Party.value_counts().head()
print ("Number of tweets of classfied for each party")
tweets_clean[tweets_clean.Sentiment=='Negative'].Party.value_counts().head()
Explanation: Get some information about the predictions
End of explanation |
12,146 | Given the following text description, write Python code to implement the functionality described.
Description:
Minimum number of given operations required to reduce the array to 0 element
Function to return the minimum operations required ; Count the frequency of each element ; Maximum element from the array ; Find all the multiples of i ; Delete the multiples ; Increment the operations ; Driver code
| Python Code:
def minOperations(arr , n ) :
result = 0
freq =[0 ] * 1000001
for i in range(0 , n ) :
freq[arr[i ] ] += 1
maxi = max(arr )
for i in range(1 , maxi + 1 ) :
if freq[i ] != 0 :
for j in range(i * 2 , maxi + 1 , i ) :
freq[j ] = 0
result += 1
return result
if __name__ == "__main__" :
arr =[2 , 4 , 2 , 4 , 4 , 4 ]
n = len(arr )
print(minOperations(arr , n ) )
|
12,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
You will learn how to
Step1: 2 - Dataset
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y.
Step2: Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
Step3: You have
Step4: Expected Output
Step5: You can now plot the decision boundary of these models. Run the code below.
Step7: Expected Output
Step9: Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step17: Expected output
Step19: Expected Output
Step21: Expected Output
Step22: Expected Output
Step23: Expected Output
Step24: Expected Output
Step25: Interpretation | Python Code:
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
Explanation: Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
You will learn how to:
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- sklearn provides simple and efficient tools for data mining and data analysis.
- matplotlib is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
End of explanation
X, Y = load_planar_dataset()
Explanation: 2 - Dataset
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y.
End of explanation
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
Explanation: Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
End of explanation
### START CODE HERE ### (≈ 3 lines of code)
shape_X = None
shape_Y = None
m = None # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
Explanation: You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Lets first get a better sense of what our data is like.
Exercise: How many training examples do you have? In addition, what is the shape of the variables X and Y?
Hint: How do you get the shape of a numpy array? (help)
End of explanation
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
3 - Simple Logistic Regression
Before building a full neural network, lets first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
End of explanation
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
Explanation: You can now plot the decision boundary of these models. Run the code below.
End of explanation
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
    """
    Arguments:
    X -- input dataset of shape (input size, number of examples)
    Y -- labels of shape (output size, number of examples)

    Returns:
    n_x -- the size of the input layer
    n_h -- the size of the hidden layer
    n_y -- the size of the output layer
    """
### START CODE HERE ### (≈ 3 lines of code)
n_x = None # size of input layer
n_h = None
n_y = None # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
Interpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
Here is our model:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
Mathematically:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
Reminder: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). Once you've built nn_model() and learnt the right parameters, you can make predictions on new data.
4.1 - Defining the neural network structure
Exercise: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
Hint: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
End of explanation
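# Hedged, standalone illustration (not the graded solution): the forward pass from the
# equations above, written for a tiny random 2-4-1 network on made-up inputs.
demo_X = np.random.randn(2, 3)                  # 2 features, 3 examples
demo_W1, demo_b1 = np.random.randn(4, 2) * 0.01, np.zeros((4, 1))
demo_W2, demo_b2 = np.random.randn(1, 4) * 0.01, np.zeros((1, 1))
demo_A1 = np.tanh(np.dot(demo_W1, demo_X) + demo_b1)    # hidden layer activations
demo_A2 = sigmoid(np.dot(demo_W2, demo_A1) + demo_b2)   # output probabilities
print(demo_A2.shape)   # (1, 3): one probability per example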
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
    """
    Argument:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer

    Returns:
    params -- python dictionary containing your parameters:
                    W1 -- weight matrix of shape (n_h, n_x)
                    b1 -- bias vector of shape (n_h, 1)
                    W2 -- weight matrix of shape (n_y, n_h)
                    b2 -- bias vector of shape (n_y, 1)
    """
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
4.2 - Initialize the model's parameters
Exercise: Implement the function initialize_parameters().
Instructions:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
    """
    Argument:
    X -- input data of size (n_x, m)
    parameters -- python dictionary containing your parameters (output of initialization function)

    Returns:
    A2 -- The sigmoid output of the second activation
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
    """
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = None
A1 = None
Z2 = None
A2 = None
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
4.3 - The Loop
Question: Implement forward_propagation().
Instructions:
- Look above at the mathematical representation of your classifier.
- You can use the function sigmoid(). It is built-in (imported) in the notebook.
- You can use the function np.tanh(). It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of initialize_parameters()) by using parameters[".."].
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "cache". The cache will be given as an input to the backpropagation function.
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
    """
    Computes the cross-entropy cost given in equation (13)

    Arguments:
    A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)
    parameters -- python dictionary containing your parameters W1, b1, W2 and b2

    Returns:
    cost -- cross-entropy cost given equation (13)
    """
m = Y.shape[1] # number of example
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = None
cost = None
### END CODE HERE ###
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "A2"), which contains $a^{2}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
Exercise: Implement compute_cost() to compute the value of the cost $J$.
Instructions:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented
$- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{2})$:
python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
(you can use either np.multiply() and then np.sum() or directly np.dot()).
End of explanation
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
    """
    Implement the backward propagation using the instructions above.

    Arguments:
    parameters -- python dictionary containing our parameters
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
    X -- input data of shape (2, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)

    Returns:
    grads -- python dictionary containing your gradients with respect to different parameters
    """
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = None
W2 = None
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = None
A2 = None
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = None
dW2 = None
db2 = None
dZ1 = None
dW1 = None
db1 = None
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.693058761... </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
Question: Implement the function backward_propagation().
Instructions:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
Tips:
To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)).
End of explanation
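# Hedged, standalone illustration (not the graded solution): the six backpropagation
# formulas above, applied to a tiny 2-4-1 network with made-up data and labels
# (zero biases are omitted from the forward pass for brevity).
bp_m = 3
bp_X, bp_Y = np.random.randn(2, bp_m), (np.random.rand(1, bp_m) > 0.5)
bp_W1, bp_W2 = np.random.randn(4, 2) * 0.01, np.random.randn(1, 4) * 0.01
bp_A1 = np.tanh(np.dot(bp_W1, bp_X))
bp_A2 = sigmoid(np.dot(bp_W2, bp_A1))
bp_dZ2 = bp_A2 - bp_Y
bp_dW2 = np.dot(bp_dZ2, bp_A1.T) / bp_m
bp_db2 = np.sum(bp_dZ2, axis=1, keepdims=True) / bp_m
bp_dZ1 = np.dot(bp_W2.T, bp_dZ2) * (1 - np.power(bp_A1, 2))
bp_dW1 = np.dot(bp_dZ1, bp_X.T) / bp_m
bp_db1 = np.sum(bp_dZ1, axis=1, keepdims=True) / bp_m
print(bp_dW1.shape, bp_dW2.shape)   # (4, 2) (1, 4): same shapes as W1 and W2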
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
    """
    Updates parameters using the gradient descent update rule given above

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = None
db1 = None
dW2 = None
db2 = None
## END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[-0.16655712]] </td>
</tr>
</table>
Question: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
General gradient descent rule: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
Illustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
End of explanation
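# Hedged, standalone illustration: one gradient-descent step on a made-up parameter,
# matching the update rule theta = theta - alpha * dtheta described above.
demo_theta = np.array([[0.5, -0.3]])
demo_grad = np.array([[0.2, 0.1]])
demo_alpha = 1.2
print(demo_theta - demo_alpha * demo_grad)   # [[ 0.26 -0.42]]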
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
    """
    Arguments:
    X -- dataset of shape (2, number of examples)
    Y -- labels of shape (1, number of examples)
    n_h -- size of the hidden layer
    num_iterations -- Number of iterations in gradient descent loop
    print_cost -- if True, print the cost every 1000 iterations

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
### START CODE HERE ### (≈ 5 lines of code)
parameters = None
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = None
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = None
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = None
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = None
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()
Question: Build your neural network model in nn_model().
Instructions: The neural network model has to use the previous functions in the right order.
End of explanation
# GRADED FUNCTION: predict
def predict(parameters, X):
    """
    Using the learned parameters, predicts a class for each example in X

    Arguments:
    parameters -- python dictionary containing your parameters
    X -- input data of size (n_x, m)

    Returns
    predictions -- vector of predictions of our model (red: 0 / blue: 1)
    """
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = None
predictions = None
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td>
**cost after iteration 0**
</td>
<td>
0.692739
</td>
</tr>
<tr>
<td>
<center> $\vdots$ </center>
</td>
<td>
<center> $\vdots$ </center>
</td>
</tr>
<tr>
<td>**W1**</td>
<td> [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.20459656]] </td>
</tr>
</table>
4.5 Predictions
Question: Use your model to predict by building predict().
Use forward propagation to predict results.
Reminder: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases}
1 & \text{if}\ activation > 0.5 \
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X > threshold)
End of explanation
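# Hedged, standalone illustration of the thresholding hint above, on made-up probabilities.
demo_probs = np.array([[0.2, 0.7, 0.55, 0.4]])
print(demo_probs > 0.5)                 # boolean mask
print((demo_probs > 0.5).astype(int))   # [[0 1 1 0]]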
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
End of explanation
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
End of explanation
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
4.6 - Tuning hidden layer size (optional/ungraded exercise)
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
End of explanation
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
Explanation: Interpretation:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fits the data well without also incurring noticable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
Optional questions:
Note: Remember to submit the assignment but clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
<font color='blue'>
You've learnt to:
- Build a complete neural network with a hidden layer
- Make a good use of a non-linear unit
- Implemented forward propagation and backpropagation, and trained a neural network
- See the impact of varying the hidden layer size, including overfitting.
Nice work!
5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
End of explanation |
12,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas and Scikit-learn
Pandas is a Python library that contains high-level data structures and manipulation tools designed for data analysis. Think of Pandas as a Python version of Excel. Scikit-learn, on the other hand, is an open-source machine learning library for Python.
While Scikit-learn does a lot of the heavy lifting, what's equally important is ensuring that raw data is processed in such a way that we are able to 'feed' it to Scikit-learn. Hence, the ability to manipulate raw data with Pandas makes it an indispensible part of our toolkit.
Kaggle
Kaggle is the leading platform for data science competitions. Participants compete for cash prizes by submitting the best predictive model to problems posted on the competition website.
https
Step1: Pandas - Cleaning data
We then review a selection of the data.
Step2: We notice that the columns describe features of the Titanic passengers, such as age, sex, and class. Of particular interest is the column Survived, which describes whether or not the passenger survived. When training our model, what we are essentially doing is assessing how each feature impacts whether or not the passenger survived (or if the feature makes an impact at all).
Exercise
Step3: Next, we review the type of data in the columns, and their respective counts.
Step4: We notice that the columns Age and Embarked have NAs or missing values. As previously discussed, we take the approach of simply removing the rows with missing values.
Step5: Question
If you were to fill in the missing values, with what values would you fill them with? Why?
Scikit-learn only takes numerical arrays as inputs. As such, we would need to convert the categorical columns Sex and Embarked into numerical ones. We first review the range of values for the column Sex, and create a new column that represents the data as numbers.
Step6: Similarly for Embarked, we review the range of values and create a new column called Port that represents, as a numerical value, where each passenger embarks from.
Step7: Question
- What problems might we encounter by mapping C, S, and Q in the column Embarked to the values 1, 2, and 3? In other words, what does the ordering imply? Does the same problem exist for the column Sex?
Now that we have numerical columns that encapsulate the information provided by the columns Sex and Embarked, we can proceed to drop them from our data set.
Step8: We review the columns our final, processed data set.
Step9: For convenience, we move the column Survived to the left-most column. We note that the left-most column is indexed as 0.
Step10: In our final review of our training data, we check that (1) the column Survived is the left-most column (2) there are no NA values, and (3) all the values are in numerical form.
Step11: Finally, we convert the processed training data from a Pandas dataframe into a numerical (Numpy) array.
Step12: Scikit-learn - Training the model
In this section, we'll simply use the model as a black box. We'll review more sophisticated techniques in later sections.
Here we'll be using the Random Forest model. The intuition is as follows
Step13: We use the processed training data to 'train' (or 'fit') our model. The column Survived will be our first input, and the set of other features (with the column PassengerId omitted) as the second.
Step14: Scikit-learn - Making predictions
We first load the test data.
Step15: We then review a selection of the data.
Step16: We notice that test data has columns similar to our training data, but not the column Survived. We'll use our trained model to predict values for the column Survived.
As before, we process the test data in a similar fashion to what we did to the training data.
Step17: We now apply the trained model to the test data (omitting the column PassengerId) to produce an output of predictions.
Step18: Pandas - Preparing submission
We simply create a Pandas dataframe by combining the index from the test data with the output of predictions.
Step19: We briefly review our predictions.
Step20: Finally, we output our results to a .csv file.
Step21: However, it appears that we have a problem. The Kaggle submission website expects "the solution file to have 418 predictions."
https | Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv('../data/train.csv')
Explanation: Pandas and Scikit-learn
Pandas is a Python library that contains high-level data structures and manipulation tools designed for data analysis. Think of Pandas as a Python version of Excel. Scikit-learn, on the other hand, is an open-source machine learning library for Python.
While Scikit-learn does a lot of the heavy lifting, what's equally important is ensuring that raw data is processed in such a way that we are able to 'feed' it to Scikit-learn. Hence, the ability to manipulate raw data with Pandas makes it an indispensible part of our toolkit.
Kaggle
Kaggle is the leading platform for data science competitions. Participants compete for cash prizes by submitting the best predictive model to problems posted on the competition website.
https://www.kaggle.com/competitions
Learning machine learning via Kaggle problems allows us to take a highly-directed approach because:
1. The problems are well-defined and the data is provided, allowing us to immediately focus on manipulating the data, and
2. The leaderboard allows us to keep track of how well we're doing.
In the following set of exercises, we will be reviewing the data from the Kaggle Titanic competition. Our aim is to make predictions on whether or not specific passengers on the Titanic survived, based on characteristics such as age, sex and class.
Section 1-0 - First Cut
We will start by processing the training data, after which we will be able to use to 'train' (or 'fit') our model. With the trained model, we apply it to the test data to make the predictions. Finally, we output our predictions into a .csv file to make a submission to Kaggle and see how well they perform.
It is very common to encounter missing values in a data set. In this section, we will take the simplest (or perhaps, simplistic) approach of ignoring the whole row if any part of it contains an NA value. We will build on this approach in later sections.
Pandas - Extracting data
First, we load the training data from a .csv file. This is the similar to the data found on the Kaggle website:
https://www.kaggle.com/c/titanic-gettingStarted/data
End of explanation
df.head(10)
Explanation: Pandas - Cleaning data
We then review a selection of the data.
End of explanation
df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
Explanation: We notice that the columns describe features of the Titanic passengers, such as age, sex, and class. Of particular interest is the column Survived, which describes whether or not the passenger survived. When training our model, what we are essentially doing is assessing how each feature impacts whether or not the passenger survived (or if the feature makes an impact at all).
Exercise:
- Write the code to review the tail-end section of the data.
We observe that the columns Name and Cabin are, for our current purposes, irrelevant. We proceed to remove them from our data set.
End of explanation
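# A possible answer to the exercise above (hedged sketch): .tail() mirrors .head().
# In the original flow this would be run before the columns are dropped.
df.tail(10)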
df.info()
Explanation: Next, we review the type of data in the columns, and their respective counts.
End of explanation
df = df.dropna()
Explanation: We notice that the columns Age and Embarked have NAs or missing values. As previously discussed, we take the approach of simply removing the rows with missing values.
End of explanation
df['Sex'].unique()
df['Gender'] = df['Sex'].map({'female': 0, 'male':1}).astype(int)
Explanation: Question
If you were to fill in the missing values, with what values would you fill them with? Why?
Scikit-learn only takes numerical arrays as inputs. As such, we would need to convert the categorical columns Sex and Embarked into numerical ones. We first review the range of values for the column Sex, and create a new column that represents the data as numbers.
End of explanation
df['Embarked'].unique()
df['Port'] = df['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
Explanation: Similarly for Embarked, we review the range of values and create a new column called Port that represents, as a numerical value, where each passenger embarked from.
End of explanation
df = df.drop(['Sex', 'Embarked'], axis=1)
Explanation: Question
- What problems might we encounter by mapping C, S, and Q in the column Embarked to the values 1, 2, and 3? In other words, what does the ordering imply? Does the same problem exist for the column Sex?
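(For illustration only: mapping the ports to 1, 2 and 3 implies an ordering and equal spacing that doesn't really exist. One common alternative, not used in this section, is one-hot encoding applied before the original columns are dropped. A quick sketch:)
port_dummies = pd.get_dummies(df['Embarked'], prefix='Port')   # one 0/1 column per port
df_onehot = pd.concat([df, port_dummies], axis=1)
(For a two-valued column like Sex, a single 0/1 column carries the same information, so the ordering concern is much smaller.)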
Now that we have numerical columns that encapsulate the information provided by the columns Sex and Embarked, we can proceed to drop them from our data set.
End of explanation
cols = df.columns.tolist()
print(cols)
Explanation: We review the columns of our final, processed data set.
End of explanation
cols = [cols[1]] + cols[0:1] + cols[2:]
df = df[cols]
Explanation: For convenience, we move the column Survived to the left-most column. We note that the left-most column is indexed as 0.
End of explanation
df.head(10)
df.info()
Explanation: In our final review of our training data, we check that (1) the column Survived is the left-most column, (2) there are no NA values, and (3) all the values are in numerical form.
End of explanation
train_data = df.values
Explanation: Finally, we convert the processed training data from a Pandas dataframe into a numerical (Numpy) array.
End of explanation
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators = 100)
Explanation: Scikit-learn - Training the model
In this section, we'll simply use the model as a black box. We'll review more sophisticated techniques in later sections.
Here we'll be using the Random Forest model. The intuition is as follows: each feature is reviewed to see how much impact it has on the outcome. The most prominent feature is segmented into a 'branch'. A collection of branches is a 'tree'. The Random Forest model, broadly speaking, creates a 'forest' of trees and aggregates the results.
http://en.wikipedia.org/wiki/Random_forest
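(A side note, not part of the original walkthrough: once the model has been fitted in the next cell, the per-feature impact described above can be inspected through the fitted forest's feature_importances_ attribute. A minimal sketch, assuming the reordered cols list from earlier:)
for name, importance in zip(cols[2:], model.feature_importances_):
    print(name, round(importance, 3))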
End of explanation
model = model.fit(train_data[0:,2:],train_data[0:,0])
Explanation: We use the processed training data to 'train' (or 'fit') our model. The set of features (with the columns Survived and PassengerId omitted) is passed as the first argument, and the column Survived as the second.
End of explanation
df_test = pd.read_csv('../data/test.csv')
Explanation: Scikit-learn - Making predictions
We first load the test data.
End of explanation
df_test.head(10)
Explanation: We then review a selection of the data.
End of explanation
df_test = df_test.drop(['Name', 'Ticket', 'Cabin'], axis=1)
df_test = df_test.dropna()
df_test['Gender'] = df_test['Sex'].map({'female': 0, 'male':1})
df_test['Port'] = df_test['Embarked'].map({'C':1, 'S':2, 'Q':3})
df_test = df_test.drop(['Sex', 'Embarked'], axis=1)
test_data = df_test.values
Explanation: We notice that the test data has columns similar to our training data, but not the column Survived. We'll use our trained model to predict values for the column Survived.
We process the test data in the same fashion as the training data.
End of explanation
output = model.predict(test_data[:,1:])
Explanation: We now apply the trained model to the test data (omitting the column PassengerId) to produce an output of predictions.
End of explanation
result = np.c_[test_data[:,0].astype(int), output.astype(int)]
df_result = pd.DataFrame(result[:,0:2], columns=['PassengerId', 'Survived'])
Explanation: Pandas - Preparing submission
We simply create a Pandas dataframe by combining the PassengerId column from the test data with the output of predictions.
End of explanation
df_result.head(10)
Explanation: We briefly review our predictions.
End of explanation
df_result.to_csv('../results/titanic_1-0.csv', index=False)
Explanation: Finally, we output our results to a .csv file.
End of explanation
df_result.shape
Explanation: However, it appears that we have a problem. The Kaggle submission website expects "the solution file to have 418 predictions."
https://www.kaggle.com/c/titanic-gettingStarted/submissions/attach
We compare this to our result.
End of explanation |
12,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
=================================
Decoding sensor space data (MVPA)
=================================
Decoding, a.k.a MVPA or supervised machine learning applied to MEG
data in sensor space. Here the classifier is applied to every time
point.
Step1: Set parameters
Step2: Temporal decoding
We'll use Logistic Regression for binary classification as the machine learning model.
Step3: Temporal Generalization
This runs the analysis used in [1] and further detailed in [2]
The idea is to fit the models on each time instant and see how it
generalizes to any other time point. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator,
cross_val_multiscore, LinearModel, get_coef)
data_path = sample.data_path()
plt.close('all')
# sphinx_gallery_thumbnail_number = 4
Explanation: =================================
Decoding sensor space data (MVPA)
=================================
Decoding, a.k.a MVPA or supervised machine learning applied to MEG
data in sensor space. Here the classifier is applied to every time
point.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax = -0.200, 0.500
event_id = dict(audio_left=1, visual_left=3)
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# The subsequent decoding analyses only capture evoked responses, so we can
# low-pass the MEG data. Usually a value more like 40 Hz would be used,
# but here we low-pass at 20 Hz so we can more heavily decimate, and allow
# the example to run faster.
raw.filter(None, 20., fir_design='firwin')
events = mne.find_events(raw, 'STI 014')
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0.), preload=True,
reject=dict(grad=4000e-13, eog=150e-6), decim=10)
epochs.pick_types(meg=True, exclude='bads')
Explanation: Set parameters
End of explanation
# We will train the classifier on all left visual vs auditory trials on MEG
X = epochs.get_data() # MEG signals: n_epochs, n_channels, n_times
y = epochs.events[:, 2]  # target: auditory left (1) vs. visual left (3)
clf = make_pipeline(StandardScaler(), LogisticRegression())
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc')
scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC') # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
plt.show()
# You can retrieve the spatial filters and spatial patterns if you explicitly
# use a LinearModel
clf = make_pipeline(StandardScaler(), LinearModel(LogisticRegression()))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc')
time_decod.fit(X, y)
coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
evoked.plot_joint(times=np.arange(0., .500, .100), title='patterns')
Explanation: Temporal decoding
We'll use Logistic Regression for binary classification as the machine learning model.
End of explanation
# define the Temporal Generalization object
time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc')
scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot the diagonal (it's exactly the same as the time-by-time decoding above)
fig, ax = plt.subplots()
ax.plot(epochs.times, np.diag(scores), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Decoding MEG sensors over time')
plt.show()
# Plot the full matrix
fig, ax = plt.subplots(1, 1)
im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Temporal Generalization')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
plt.show()
Explanation: Temporal Generalization
This runs the analysis used in [1] and further detailed in [2]
The idea is to fit the models on each time instant and see how it
generalizes to any other time point.
End of explanation |
12,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Using Markdown
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return on your keyboard to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar.
Code cells
One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
Step1: Nicely formatted results
IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in
the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a
preview of what IPython notebook can do.
Step2: Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
Step3: Once you've run all three cells, try modifying the first one to set class_name to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.
You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the class_name variable was updated, the message variable was not. Now try rerunning the second cell, and then the third.
You should have seen the output change to "your name is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".
One final thing to remember | Python Code:
# Hit shift + enter or use the run button to run this cell and see the results
print('hello world')
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
Explanation: Text Using Markdown
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return on your keyboard to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar.
Code cells
One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
End of explanation
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
Explanation: Nicely formatted results
IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in
the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a
preview of what IPython notebook can do.
End of explanation
class_name = "Intro to Data Analysis"
message = class_name + " is awesome!"
message
Explanation: Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
End of explanation
import unicodecsv
## Longer version of code (replaced with shorter, equivalent version below)
# enrollments = []
# f = open(enrollments_filename, 'rb')
# reader = unicodecsv.DictReader(f)
# for row in reader:
# enrollments.append(row)
# f.close()
def read_csv(filename):
with open(filename, 'rb') as f:
reader = unicodecsv.DictReader(f)
lines = list(reader)
return lines
### Write code similar to the above to load the engagement
### and submission data. The data is stored in files with
### the given filenames. Then print the first row of each
### table to make sure that your code works. You can use the
### "Test Run" button to see the output of your code.
enrollments_filename = 'enrollments.csv'
engagement_filename = 'daily_engagement.csv'
submissions_filename = 'project_submissions.csv'
enrollments = read_csv(enrollments_filename)
daily_engagement = read_csv(engagement_filename)
project_submissions = read_csv(submissions_filename)
enrollments[0]
daily_engagement[0]
project_submissions[0]
Explanation: Once you've run all three cells, try modifying the first one to set class_name to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.
You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the class_name variable was updated, the message variable was not. Now try rerunning the second cell, and then the third.
You should have seen the output change to "your name is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".
One final thing to remember: if you shut down the kernel after saving your notebook, the cells' output will still show up as you left it at the end of your session when you start the notebook back up. However, the state of the kernel will be reset. If you are actively working on a notebook, remember to re-run your cells to set up your working environment to really pick up where you last left off.
Exercise 6 in parsing CSVs
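(A side note, not part of the original exercise: under Python 3 the standard library's csv module handles unicode natively, so a roughly equivalent reader, shown here only as a sketch, can drop the unicodecsv dependency:)
import csv
def read_csv_py3(filename):
    with open(filename, newline='') as f:
        return list(csv.DictReader(f))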
End of explanation |
12,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Crime prediction from Hawkes processes
Here we continue to explore the EM algorithm for Hawkes processes, but now concentrating upon
Step1: Simulation of the process in a single cell
Step2: Model fitting for cells with varying background rate
We'll create 100 cells with varying background rate, but the same $\omega, \theta$. We use our library to perform this simulation.
Step3: To simulate a steady state, we'll discard the first half of time in each cell.
Step4: The number of events in each cell varies quite a lot.
Step5: Noting that our initial estimate for every $\mu$ is $0.5$, this is good convergence.
More extreme parameters
However, if we try a rather smaller value of $\omega$, then the optimisation doesn't find the real parameters, tending to systematically over-estimate the background rate $\mu$ and under-estimate the aftershock rate.
Step6: Sampling the whole process, not just a "steady state"
Step7: Taking a smaller sample | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Crime prediction from Hawkes processes
Here we continue to explore the EM algorithm for Hawkes processes, but now concentrating upon:
Mohler et al. "Randomized Controlled Field Trials of Predictive Policing". Journal of the American Statistical Association (2015) DOI:10.1080/01621459.2015.1077710
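(For orientation, an editorial note with the parameter roles inferred from the sampler arguments used below: each cell is modelled with a conditional intensity of the form $\lambda(t) = \mu + \theta \omega \sum_{t_i < t} e^{-\omega (t - t_i)}$, where $\mu$ is the background rate, $\theta$ the expected number of triggered "aftershock" events per event, and $\omega$ the exponential decay rate of the triggering kernel.)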
End of explanation
import open_cp.sources.sepp as source_sepp
process = source_sepp.SelfExcitingPointProcess(
background_sampler = source_sepp.HomogeneousPoissonSampler(rate=0.1),
trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=10))
events = process.sample(0, 1000)
fig, ax = plt.subplots(figsize=(18,1))
ax.scatter(events, (np.random.random(len(events))-0.5) * 0.03, alpha=.5)
ax.set(xlim=[900, 1000], ylim=[-0.1,0.1])
Explanation: Simulation of the process in a single cell
End of explanation
rates = np.random.random(size=100)
simulation = source_sepp.GridHawkesProcess(rates, 0.5, 10)
cells = simulation.sample(0, 1000)
Explanation: Model fitting for cells with varying background rate
We'll create 100 cells with varying background rate, but the same $\omega, \theta$. We use our library to perform this simulation.
End of explanation
for i in range(100):
times = cells[i]
cells[i] = times[times>=500] - 500
Explanation: To simulate a steady state, we'll discard the first half of time in each cell.
End of explanation
min(len(t) for t in cells), max(len(t) for t in cells)
import open_cp.seppexp
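# EM-style fitting helpers: starting from initial guesses, each loop iteration asks
# open_cp.seppexp.maximisation (or its edge-corrected variant) for updated estimates of
# (omega, theta, mu), given the event times in every cell and the observation window length.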
def optimise(cells, initial_omega=10, iterations=100, time=500):
omega = initial_omega
theta = .5
mu = np.zeros_like(cells) + 0.5
for _ in range(iterations):
omega, theta, mu = open_cp.seppexp.maximisation(cells, omega, theta, mu, time)
return omega, theta, mu
def optimise_corrected(cells, initial_omega=10, iterations=100, time=500):
omega = initial_omega
theta = .5
mu = np.zeros_like(cells) + 0.5
for _ in range(iterations):
omega, theta, mu = open_cp.seppexp.maximisation_corrected(cells, omega, theta, mu, time)
return omega, theta, mu
omega, theta, mu = optimise(cells)
omega, theta
omegac, thetac, muc = optimise_corrected(cells)
omegac, thetac
def plot(rates, mu, ax, title):
ax.plot([0,1], [0,1], color="red", linewidth=1)
ax.scatter(rates, mu)
ax.set(xlim=[0,1], ylim=[0,np.max(mu)*1.05], xlabel="$\\mu$", ylabel="predicted $\\mu$",
title=title)
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(rates, mu, ax[0], "From EM algorithm")
plot(rates, muc,ax[1], "From EM algorithm with edge corrections")
Explanation: The number of events in each cell varies quite a lot.
End of explanation
rates = np.random.random(size=100)
simulation = source_sepp.GridHawkesProcess(rates, 0.5, .1)
cells = simulation.sample(0, 1000)
for i in range(100):
times = cells[i]
cells[i] = times[times>=500] - 500
omega, theta, mu = optimise(cells, .1, 100)
omega, theta
omegac, thetac, muc = optimise_corrected(cells, .1, 100)
omegac, thetac
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(rates, mu, ax[0], "From EM algorithm")
plot(rates, muc, ax[1], "From EM algorithm with edge corrections")
Explanation: Noting that our initial estimate for every $\mu$ is $0.5$, this is good convergence.
More extreme parameters
However, if we try a rather smaller value of $\omega$, then the optimisation doesn't find the real parameters, tending to systematically over-estimate the background rate $\mu$ and under-estimate the aftershock rate.
End of explanation
rates = np.random.random(size=100)
simulation = source_sepp.GridHawkesProcess(rates, 0.5, 10)
cells = simulation.sample(0, 1000)
omega, theta, mu = optimise(cells, 1, 100, 1000)
omega, theta
omegac, thetac, muc = optimise_corrected(cells, 1, 100, 1000)
omegac, thetac
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(rates, mu, ax[0], "From EM algorithm")
plot(rates, muc, ax[1], "From EM algorithm with edge corrections")
Explanation: Sampling the whole process, not just a "steady state"
End of explanation
rates = np.random.random(size=100)
simulation = source_sepp.GridHawkesProcess(rates, 0.5, 10)
cells = simulation.sample(0, 350)
omega, theta, mu = optimise(cells, 1, 100, 350)
omega, theta
omegac, thetac, muc = optimise_corrected(cells, 1, 100, 350)
omegac, thetac
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(rates, mu, ax[0], "From EM algorithm")
plot(rates, muc, ax[1], "From EM algorithm with edge corrections")
Explanation: Taking a smaller sample
End of explanation |
12,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grade
Step1: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Step2: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?
Step3: 4) Print a list of Lil's that are more popular than Lil' Kim.
Step4: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Step5: 6) Will the world explode if a musician swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
# !pip3 install requests
import requests
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
data.keys()
artist_data = data['artists']['items']
for artist in artist_data:
print(artist['name'], artist['popularity'], artist['genres'])
Explanation: Grade: 6 / 8 -- take a look at TA-COMMENTS (you can Command + F to search for "TA-COMMENT")
I don't have the spotify portion of your HW5 -- is it somewhere else in your repo? Let me know because that part of the homework is another 8 points.
1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
End of explanation
genre_list = []
for artist in artist_data:
for genre in artist['genres']:
genre_list.append(genre)
print(genre_list)
genre_count = 0
# TA-COMMENT: (-0.5) This is actually calling your temporary variable "artist" from your for loop above.
# You want to call 'artist['genres']'
for genre in artist['genres']:
# TA-COMMENT: This is actually always true and therefore, it's not doing what you'd want it to do!
if True:
genre_count = genre_count + 1
print(artist['genres'])
else:
print("No genres listed")
# TA-COMMENT: see example
if True:
print("hello")
import requests # TA-COMMENT: No need to import requests every time -- you just need to do it once within a notebook
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
type(data)
data.keys()
type(data['artists'])
data['artists'].keys()
artists = data['artists']['items']
artists
# TA-COMMENT: Excellent for loop!!
for artist in artists:
print(artist['name'], artist['popularity'])
if len(artist['genres']) > 0:
genres = ", ".join(artist['genres'])
print("Genre list: ", genres)
else:
print("No genres listed")
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
# genres = data['genres']
# genre_count = 0
# for genre in genres:
# print(genres['name'])
# g = genres + 1
# genre_count.append(g)
data.keys()
# to figure out what data['artists'] is, we have a couple of options. My favorite!! Printing!!
print(data['artists']) #its a dictionary so look inside!
data['artists'].keys() #calling out what's inside of the dictionary
print(data['artists']['items']) #calling out a dictionary inside of a dicitonary...and look below its a list '[]'
data['artists']['items'][0] #set the list at zero.
artists = data['artists']['items'] #declare the dictionary and list as a variable to make it easier
for artist in artists:
# print(artist.keys())
print(artist['genres'])
from collections import Counter
artists = data['artists']['items']
genre_list = []
for artist in artists:
# print(artist.keys())
# print(artist['genres'])
genre_list = genre_list + artist['genres']
print(genre_list)
Counter(genre_list) #counter function - it rocks
# TA-COMMENT: Yassss
unique_genres = set(genre_list)
unique_genres
genre_count_dict = {}
for unique_item in unique_genres:
count_variable = 0
for item in genre_list:
if item == unique_item:
count_variable = count_variable + 1
genre_count_dict[unique_item] = count_variable
# TA-COMMENT: Beautiful!
genre_count_dict
Explanation: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
End of explanation
most_popular_name = ""
most_popular_score = 0
for artist in artists:
print("Looking at", artist['name'], "who has a popularity score of", artist['popularity'])
#THE CONDITIONAL - WHAT YOU'RE TESTING
print("Comparing", artist['popularity'], "to", most_popular_score, "of")
if artist['popularity'] > most_popular_score and artist ['name'] != "Lil Wayne":
#THE CHANGE - WHAT YOU'RE KEEPING TRACK OF
most_popular_name = artist['name']
most_popular_score = artist['popularity']
print(most_popular_name, most_popular_score)
# TA-COMMENT: Excellent!
Explanation: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?
End of explanation
target_score = 72
#PART ONE: INITIAL CONDITON
second_best_artists = []
#AGGREGATION PROBLEM - when you're looping through a series of objects and you sometimes want to add some
#of those objects to a DIFFERENT list
for artists in artists:
print("Looking at", artist['name'], "who has a popularity of", artist['popularity'])
#PART TWO: CONDITONAL - when we want to add someone to our list
if artist['popularity'] == 72:
#PART THREE: THE CHANGE - add artist to our list
second_best_artists.append(artist['name'])
print("OUR SECOND BEST ARTISTS ARE:")
for artist in second_best_artists:
print(artist)
# TA-COMMENT: This code doesn't work as you'd want because your temporary variable name "artists" is the same
# as your list name!
for artists in artists:
#print("Looking at", artist['name'])
if artist['name'] == "Lil' Kim":
print("Found Lil' Kim")
print(artist['popularity'])
else:
pass
#print("Not Lil' Kim")
import requests
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
data.keys()
artist_data = data['artists']['items']
for artist in artist_data:
print(artist['name'], artist['popularity'], artist['genres'])
Explanation: 4) Print a list of Lil's that are more popular than Lil' Kim.
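(An editorial sketch, not part of the graded submission: one way to avoid hard-coding Lil' Kim's popularity score is to look it up first, assuming the artists list from the search response is available:)
kim_popularity = None
for artist in artists:
    if artist['name'] == "Lil' Kim":
        kim_popularity = artist['popularity']
if kim_popularity is None:
    print("Lil' Kim was not found in the search results")
else:
    for artist in artists:
        if artist['popularity'] > kim_popularity:
            print(artist['name'])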
End of explanation
import requests
response = requests.get('https://api.spotify.com/v1/search?query=Lil+&offset=0&limit=50&type=artist&market=US')
data = response.json()
data.keys()
artist_data = data['artists']['items']
for artist in artist_data:
print(artist['name'], "ID is:", artist['id'])
import requests
response = requests.get('https://api.spotify.com/v1/artists/55Aa2cqylxrFIXC767Z865/top-tracks?country=US')
lil_wayne = response.json()
print(lil_wayne)
type(lil_wayne)
print(type(lil_wayne))
print(lil_wayne.keys())
print(lil_wayne['tracks'])
# TA-COMMENT: (-1) Remember what our for loop structures should look like! Below, there is nothing indented below your
# for loop!
# and you cannot call ['tracks'] without specifying which dictionary to find ['tracks']
for lil_wayne in ['tracks']
print("Lil Wayne's top tracks are:")
import requests
response = requests.get('https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US')
lil_jon = response.json()
print(lil_jon)
type(lil_jon)
print(type(lil_jon))
print(lil_jon.keys())
print(lil_jon['tracks'])
tracks = ['tracks']
for lil_jon in tracks:
print("Lil Jon's top tracks are:")
Explanation: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
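(An editorial sketch, not part of the graded submission: given the lil_wayne and lil_jon dictionaries returned by the top-tracks endpoint above, the track names can be printed like this:)
for artist_name, top_tracks in [("Lil Wayne", lil_wayne), ("Lil Jon", lil_jon)]:
    print(artist_name + "'s top tracks are:")
    for track in top_tracks['tracks']:
        print(track['name'])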
End of explanation
response = requests.get("https://api.spotify.com/v1/artists/55Aa2cqylxrFIXC767Z865/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
explicit_count = 0
clean_count = 0
for track in tracks:
print(track['name'], track['explicit'])
# TA-COMMENT: (-0.5) if True is always True! This happens to work out because all the tracks were explicit.
if True:
explicit_count = explicit_count + 1
if not track['explicit']:
clean_count = clean_count + 1
print("We have found", explicit_count, "explicit tracks, and", clean_count, "tracks are clean")
if len(tracks) > 0:
print("Overall, We discovered", explicit_count, "explicit tracks")
print("And", clean_count, "were non-explicit")
print("Which means", 100 * clean_count / explicit_count, " percent of tracks were clean")
else:
print("No top tracks found")
# TA-COMMENT: It's a good idea to comment out code so if you read it back later, you know what's happening at each stage
import requests
response = requests.get('https://api.spotify.com/v1/artists/55Aa2cqylxrFIXC767Z865/top-tracks?country=US')
data = response.json()
tracks = data['tracks']
#print(tracks)
explicit_count = 0
clean_count = 0
popularity_explicit = 0
popularity_nonexplicit = 0
minutes_explicit = 0
minutes_not_explicit = 0
for track in tracks:
print(track['name'], track['explicit'])
if track['explicit'] == True:
explicit_count = explicit_count + 1
popularity_explicit = popularity_explicit + track['popularity']
minutes_explicit = minutes_explicit + track['duration_ms']
print("The number of explicit songs are", explicit_count, "with a popularity of", popularity_explicit)
print( "Lil Wayne has", minutes_explicit/10000, "minutes of explicit songs" )
elif track['explicit'] == False:
clean_count = clean_count + 1
popularity_nonexplicit = popularity_nonexplicit + track['popularity']
minutes_not_explicit = minutes_not_explicit + track['duration_ms']
print("Lil Wayne has", explicit_count, "explicit tracks, and", clean_count, "clean tracks")
print( "Lil Wayne has", minutes_not_explicit/10000, "minutes of clean songs")
print("The average popularity of Lil Wayne's explicit tracks is", popularity_explicit/explicit_count)
import requests
response = requests.get('https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US')
data = response.json()
tracks = data['tracks']
#print(tracks)
explicit_count = 0
clean_count = 0
for track in tracks:
print(track['name'], track['explicit'])
if True:
explicit_count = explicit_count + 1
if not track['explicit']:
clean_count = clean_count + 1
print("Lil Jon has", explicit_count, "explicit tracks, and", clean_count, "clean tracks")
response = requests.get("https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
explicit_count = 0
clean_count = 0
for track in tracks:
print(track['name'], track['explicit'])
if True:
explicit_count = explicit_count + 1
if not track['explicit']:
clean_count = clean_count + 1
print("We have found", explicit_count, "explicit tracks, and", clean_count, "tracks are clean")
if len(tracks) > 0:
print("Overall, We discovered", explicit_count, "explicit tracks")
print("And", clean_count, "were non-explicit")
print("Which means", 100 * clean_count / explicit_count, " percent of tracks were clean")
else:
print("No top tracks found")
import requests
response = requests.get('https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US')
data = response.json()
tracks = data['tracks']
#print(tracks)
explicit_count = 0
clean_count = 0
popularity_explicit = 0
popularity_nonexplicit = 0
minutes_explicit = 0
minutes_not_explicit = 0
for track in tracks:
print(track['name'], track['explicit'])
if track['explicit'] == True:
explicit_count = explicit_count + 1
popularity_explicit = popularity_explicit + track['popularity']
minutes_explicit = minutes_explicit + track['duration_ms']
print("The number of explicit songs are", explicit_count, "with a popularity of", popularity_explicit)
print( "Lil Jon has", minutes_explicit/1000, "minutes of explicit songs" )
elif track['explicit'] == False:
clean_count = clean_count + 1
popularity_nonexplicit = popularity_nonexplicit + track['popularity']
minutes_not_explicit = minutes_not_explicit + track['duration_ms']
print("Lil Jon has", explicit_count, "explicit tracks, and", clean_count, "clean tracks")
print( "Lil Jon has", minutes_not_explicit/1000, "minutes of clean songs")
print("The average popularity of Lil Jon's explicit tracks is", popularity_explicit/explicit_count)
Explanation: 6) Will the world explode if a musician swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
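(An editorial sketch, not part of the graded submission: a compact way to get both averages and the minutes in each group for one artist, assuming tracks holds a top-tracks list and remembering that duration_ms is in milliseconds:)
explicit = [t for t in tracks if t['explicit']]
clean = [t for t in tracks if not t['explicit']]
if explicit:
    print("Average popularity (explicit):", sum(t['popularity'] for t in explicit) / len(explicit))
    print("Minutes of explicit songs:", sum(t['duration_ms'] for t in explicit) / 60000)
if clean:
    print("Average popularity (clean):", sum(t['popularity'] for t in clean) / len(clean))
    print("Minutes of clean songs:", sum(t['duration_ms'] for t in clean) / 60000)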
End of explanation |
12,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test distribution of errors
Step1: Attempts to fit nonparametric distributions | Python Code:
import scipy
import scipy.stats
import numpy as np
import matplotlib.pyplot as plt
# network_out and true_out are assumed to come from earlier cells of this notebook
diff = (network_out - true_out[:, 8:])
#print(diff.shape)
y = diff[:,5]
print(y.shape)
#y = np.square(y)
x = np.arange(-3,3,0.01)
size = diff.shape[0]
h = plt.hist(y, bins=100, color='w')
plt.xlim(-3,3)
plt.ylim(0,1000)
dist_names = ['t']
for dist_name in dist_names:
dist = getattr(scipy.stats, dist_name)
param = dist.fit(y)
#param = (1.5, param[1], param[2])
pdf_fitted = dist.pdf(x, *param[:-2], loc=param[-2], scale=param[-1])*2000
plt.plot(x, pdf_fitted, label=dist_name)
plt.legend(loc='upper right')
plt.show()
import scipy.stats as stats
import pylab
stats.probplot(y, dist="t", sparams=(2,), plot=pylab)
pylab.show()
print(param)
print(*param[:-2])
print(param[-2])
print(param[-1])
Explanation: Test distribution of errors:
The best fit (R^2 = 0.855) is Student's T with 1.067 DOF, location 0.003919 (network slightly overestimates), scale = 0.01165
However, T overestimates the probability of large errors
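One quick check of that claim (an editorial sketch, reusing y and the fitted param from the cell above) is to compare the empirical fraction of large errors with the tail mass the fitted t-distribution assigns to them:
threshold = 0.5   # an arbitrary "large error" cutoff, in the same units as y
empirical_tail = np.mean(np.abs(y) > threshold)
fitted_tail = (scipy.stats.t.sf(threshold, *param[:-2], loc=param[-2], scale=param[-1])
               + scipy.stats.t.cdf(-threshold, *param[:-2], loc=param[-2], scale=param[-1]))
print("empirical:", empirical_tail, "fitted t:", fitted_tail)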
End of explanation
# Gaussian kernel density estimation
from scipy import stats
import matplotlib.pyplot as plt
y = diff[:,4]
print(y.shape)
kde1 = stats.gaussian_kde(y)
#kde2 = stats.gaussian_kde(y, bw_method='silverman')
fig = plt.figure()
ax = fig.add_subplot(111)
#ax.plot(y, np.zeros(y.shape), 'b+', ms=20) # rug plot
x_eval = np.linspace(-2, 2, num=200)
#ax.plot(x_eval, kde1(x_eval), 'k-', label="Scott's Rule")
#ax.plot(x_eval, kde2(x_eval), 'r-', label="Silverman's Rule")
#plt.legend(loc='upper right')
#plt.show()
err_min, err_max = -0.1,0.1
print("Probability of the error between %.2f and %.2f meters: %.4f"%(err_min, err_max,kde1.integrate_box_1d(err_min, err_max)))
### Gaussian kernel density estimation for "active" portions of the data only
from scipy import stats
import matplotlib.pyplot as plt
active_start = 130
active_stop = 160
y = diff[:,0]
active_y = y.reshape(-1,193).transpose()[active_start:active_stop,:].reshape(-1)
print("Min: %.4fm, Max: %.4fm" %(np.amin(active_y),np.amax(active_y)))
kde1 = stats.gaussian_kde(active_y)
kde2 = stats.gaussian_kde(y, bw_method='silverman')
#fig = plt.figure()
#ax = fig.add_subplot(111)
#ax.plot(active_y, np.zeros(active_y.shape), 'b+', ms=20) # rug plot
x_eval = np.linspace(-2, 2, num=200)
#ax.plot(x_eval, kde1(x_eval), 'k-', label="Scott's Rule")
#ax.plot(x_eval, kde2(x_eval), 'r-', label="Silverman's Rule")
#plt.legend(loc='upper right')
#plt.show()
err_min, err_max = -0.13,0.13
print("Probability of the error between %.2f and %.2f meters: %.4f"%(err_min, err_max,kde1.integrate_box_1d(err_min, err_max)))
Explanation: Attempts to fit nonparametric distributions
End of explanation |
12,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handling bad channels
This tutorial covers manual marking of bad channels and reconstructing bad
channels based on good signals at other sensors.
As usual we'll start by importing the modules we need, and loading some example
data
Step1: Marking bad channels
Sometimes individual channels malfunction and provide data that is too noisy
to be usable. MNE-Python makes it easy to ignore those channels in the
analysis stream without actually deleting the data in those channels. It does
this by
keeping track of the bad channel indices in a list and looking at that list
when doing analysis or plotting tasks. The list of bad channels is stored in
the 'bads' field of the
Step2: Here you can see that the
Step3: We can do the same thing for the bad MEG channel (MEG 2443). Since we
know that Neuromag systems (like the one used to record the example data) use
the last digit of the MEG channel number to indicate sensor type, here our
regular expression_ will pick all the channels that start with 2 and end
with 3
Step4: Notice first of all that the channels marked as "bad" are plotted in a light
gray color in a layer behind the other channels, to make it easy to
distinguish them from "good" channels. The plots make it clear that EEG
053 is not picking up scalp potentials at all, and MEG 2443 looks like
it's got a lot more internal noise than its neighbors — its signal is a few
orders of magnitude greater than the other MEG channels, making it a clear
candidate for exclusion.
If you want to change which channels are marked as bad, you can edit
raw.info['bads'] directly; it's an ordinary Python
Step5: .. sidebar
Step6: When to look for bad channels
You can start looking for bad channels during the experiment session when the
data is being acquired. If you notice any flat or excessively noisy channels,
you can note them in your experiment log or protocol sheet. If your system
computes online averages, these can be a good way to spot bad channels as
well. After the data has been collected, you can do a more thorough check for
bad channels by browsing the raw data using
Step7: The bad EEG channel is not so obvious, but the bad gradiometer is easy to
see.
Remember, marking bad channels should be done as early as possible in the
analysis pipeline. When bad channels are marked in a
Step8: By default,
Step9: Note that we used the exclude=[] trick in the call to | Python Code:
import os
from copy import deepcopy
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
Explanation: Handling bad channels
This tutorial covers manual marking of bad channels and reconstructing bad
channels based on good signals at other sensors.
As usual we'll start by importing the modules we need, and loading some example
data:
End of explanation
print(raw.info['bads'])
Explanation: Marking bad channels
Sometimes individual channels malfunction and provide data that is too noisy
to be usable. MNE-Python makes it easy to ignore those channels in the
analysis stream without actually deleting the data in those channels. It does
this by
keeping track of the bad channel indices in a list and looking at that list
when doing analysis or plotting tasks. The list of bad channels is stored in
the 'bads' field of the :class:~mne.Info object that is attached to
:class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked objects.
End of explanation
picks = mne.pick_channels_regexp(raw.ch_names, regexp='EEG 05.')
raw.plot(order=picks, n_channels=len(picks))
Explanation: Here you can see that the :file:.fif file we loaded from disk must have
been keeping track of channels marked as "bad" — which is good news, because
it means any changes we make to the list of bad channels will be preserved if
we save our data at intermediate stages and re-load it later. Since we saw
above that EEG 053 is one of the bad channels, let's look at it alongside
some other EEG channels to see what's bad about it. We can do this using the
standard :meth:~mne.io.Raw.plot method, and instead of listing the channel
names one by one (['EEG 050', 'EEG 051', ...]) we'll use a regular
expression_ to pick all the EEG channels between 050 and 059 with the
:func:~mne.pick_channels_regexp function (the . is a wildcard
character):
End of explanation
picks = mne.pick_channels_regexp(raw.ch_names, regexp='MEG 2..3')
raw.plot(order=picks, n_channels=len(picks))
Explanation: We can do the same thing for the bad MEG channel (MEG 2443). Since we
know that Neuromag systems (like the one used to record the example data) use
the last digit of the MEG channel number to indicate sensor type, here our
regular expression_ will pick all the channels that start with 2 and end
with 3:
End of explanation
original_bads = deepcopy(raw.info['bads'])
raw.info['bads'].append('EEG 050') # add a single channel
raw.info['bads'].extend(['EEG 051', 'EEG 052']) # add a list of channels
bad_chan = raw.info['bads'].pop(-1) # remove the last entry in the list
raw.info['bads'] = original_bads # change the whole list at once
Explanation: Notice first of all that the channels marked as "bad" are plotted in a light
gray color in a layer behind the other channels, to make it easy to
distinguish them from "good" channels. The plots make it clear that EEG
053 is not picking up scalp potentials at all, and MEG 2443 looks like
it's got a lot more internal noise than its neighbors — its signal is a few
orders of magnitude greater than the other MEG channels, making it a clear
candidate for exclusion.
If you want to change which channels are marked as bad, you can edit
raw.info['bads'] directly; it's an ordinary Python :class:list so the
usual list methods will work:
End of explanation
# default is exclude='bads':
good_eeg = mne.pick_types(raw.info, meg=False, eeg=True)
all_eeg = mne.pick_types(raw.info, meg=False, eeg=True, exclude=[])
print(np.setdiff1d(all_eeg, good_eeg))
print(np.array(raw.ch_names)[np.setdiff1d(all_eeg, good_eeg)])
Explanation: .. sidebar:: Blocking execution
If you want to build an interactive bad-channel-marking step into an
analysis script, be sure to include the parameter ``block=True`` in your
call to ``raw.plot()`` or ``epochs.plot()``. This will pause the script
while the plot is open, giving you time to mark bad channels before
subsequent analysis or plotting steps are executed. This can be
especially helpful if your script loops over multiple subjects.
You can also interactively toggle whether a channel is marked "bad" in the
plot windows of raw.plot() or epochs.plot() by clicking on the
channel name along the vertical axis (in raw.plot() windows you can also
do this by clicking the channel's trace in the plot area). The bads field
gets updated immediately each time you toggle a channel, and will retain its
modified state after the plot window is closed.
The list of bad channels in the :class:mne.Info object's bads field is
automatically taken into account in dozens of functions and methods across
the MNE-Python codebase. This is done consistently with a parameter
exclude='bads' in the function or method signature. Typically this
exclude parameter also accepts a list of channel names or indices, so if
you want to include the bad channels you can do so by passing
exclude=[] (or some other list of channels to exclude). For example:
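(The pick_types call above illustrates the exclude=[] form; as a further illustration, not from the original tutorial, exclude also accepts explicit channel names:)
picks_without_2443 = mne.pick_types(raw.info, meg=True, exclude=['MEG 2443'])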
End of explanation
raw2 = raw.copy()
raw2.info['bads'] = []
events = mne.find_events(raw2, stim_channel='STI 014')
epochs = mne.Epochs(raw2, events=events)['2'].average().plot()
Explanation: When to look for bad channels
You can start looking for bad channels during the experiment session when the
data is being acquired. If you notice any flat or excessively noisy channels,
you can note them in your experiment log or protocol sheet. If your system
computes online averages, these can be a good way to spot bad channels as
well. After the data has been collected, you can do a more thorough check for
bad channels by browsing the raw data using :meth:mne.io.Raw.plot, without
any projectors or ICA applied. Finally, you can compute offline averages
(again with projectors, ICA, and EEG referencing disabled) to look for
channels with unusual properties. Here's an example of ERP/F plots where the
bad channels were not properly marked:
End of explanation
raw.crop(tmin=0, tmax=3).load_data()
Explanation: The bad EEG channel is not so obvious, but the bad gradiometer is easy to
see.
Remember, marking bad channels should be done as early as possible in the
analysis pipeline. When bad channels are marked in a :class:~mne.io.Raw
object, the markings will be automatically transferred through the chain of
derived object types: including :class:~mne.Epochs and :class:~mne.Evoked
objects, but also :class:noise covariance <mne.Covariance> objects,
:class:forward solution computations <mne.Forward>, :class:inverse
operators <mne.minimum_norm.InverseOperator>, etc. If you don't notice the
badness until later stages of your analysis pipeline, you'll probably need to
go back and re-run the pipeline, so it's a good investment of time to
carefully explore the data for bad channels early on.
Why mark bad channels at all?
Many analysis computations can be strongly affected by the presence of bad
channels. For example, a malfunctioning channel with completely flat signal
will have zero channel variance, which will cause noise estimates to be
unrealistically low. This low noise estimate will lead to a strong channel
weight in the estimate of cortical current, and because the channel is flat,
the magnitude of cortical current estimates will shrink dramatically.
Conversely, very noisy channels can also cause problems. For example, they
can lead to too many epochs being discarded based on signal amplitude
rejection thresholds, which in turn can lead to less robust estimation of the
noise covariance across sensors. Noisy channels can also interfere with
:term:SSP computations, because the projectors will be
spatially biased in the direction of the noisy channel, which can cause
adjacent good channels to be suppressed. ICA is corrupted by noisy channels
for similar reasons. On the other hand, when performing machine learning
analyses, bad channels may have limited, if any impact (i.e., bad channels
will be uninformative and therefore ignored / deweighted by the algorithm).
Interpolating bad channels
In some cases simply excluding bad channels is sufficient (for example, if
you plan only to analyze a specific sensor ROI, and the bad channel is
outside that ROI). However, in cross-subject analyses it is often helpful to
maintain the same data dimensionality for all subjects, and there is no
guarantee that the same channels will be bad for all subjects. It is possible
in such cases to remove each channel that is bad for even a single subject,
but that can lead to a dramatic drop in data rank (and ends up discarding a
fair amount of clean data in the process). In such cases it is desirable to
reconstruct bad channels by interpolating its signal based on the signals of
the good sensors around them.
How interpolation works
Interpolation of EEG channels in MNE-Python is done using the spherical
spline method :footcite:PerrinEtAl1989, which projects the sensor
locations onto a unit sphere
and interpolates the signal at the bad sensor locations based on the signals
at the good locations. Mathematical details are presented in
channel-interpolation. Interpolation of MEG channels uses the field
mapping algorithms used in computing the forward solution
<tut-forward>.
Interpolation in MNE-Python
Interpolating bad channels in :class:~mne.io.Raw objects is done with the
:meth:~mne.io.Raw.interpolate_bads method, which automatically applies the
correct method (spherical splines or field interpolation) to EEG and MEG
channels, respectively (there is a corresponding method
:meth:mne.Epochs.interpolate_bads that works for :class:~mne.Epochs
objects). To illustrate how it works, we'll start by cropping the raw object
to just three seconds for easier plotting:
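(A side note, not from the original tutorial: the Epochs variant mentioned above is called the same way, e.g. epochs.interpolate_bads(reset_bads=False) on an existing Epochs object.)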
End of explanation
eeg_data = raw.copy().pick_types(meg=False, eeg=True, exclude=[])
eeg_data_interp = eeg_data.copy().interpolate_bads(reset_bads=False)
for title, data in zip(['orig.', 'interp.'], [eeg_data, eeg_data_interp]):
fig = data.plot(butterfly=True, color='#00000022', bad_color='r')
fig.subplots_adjust(top=0.9)
fig.suptitle(title, size='xx-large', weight='bold')
Explanation: By default, :meth:~mne.io.Raw.interpolate_bads will clear out
raw.info['bads'] after interpolation, so that the interpolated channels
are no longer excluded from subsequent computations. Here, for illustration
purposes, we'll prevent that by specifying reset_bads=False so that when
we plot the data before and after interpolation, the affected channels will
still plot in red:
End of explanation
grad_data = raw.copy().pick_types(meg='grad', exclude=[])
grad_data_interp = grad_data.copy().interpolate_bads(reset_bads=False)
for data in (grad_data, grad_data_interp):
data.plot(butterfly=True, color='#00000009', bad_color='r')
Explanation: Note that we used the exclude=[] trick in the call to
:meth:~mne.io.Raw.pick_types to make sure the bad channels were not
automatically dropped from the selection. Here is the corresponding example
with the interpolated gradiometer channel; since there are more channels
we'll use a more transparent gray color this time:
End of explanation |
12,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute spatial resolution metrics in source space
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference
distributions.
This example mimics some results from [1]_, namely Figure 3 (peak localisation
error for PSFs, L2-MNE vs dSPM) and Figure 4 (spatial deviation for PSFs,
L2-MNE vs dSPM).
Step1: MNE
Compute resolution matrices, peak localisation error (PLE) for point spread
functions (PSFs), spatial deviation (SD) for PSFs
Step2: dSPM
Do the same for dSPM
Step3: Visualize results
Visualise peak localisation error (PLE) across the whole cortex for PSF
Step4: These plots show that dSPM has generally lower peak localization error (red
color) than MNE in deeper brain areas, but higher error (blue color) in more
superficial areas.
Next we'll visualise spatial deviation (SD) across the whole cortex for PSF | Python Code:
# Author: Olaf Hauk <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_resolution_matrix
from mne.minimum_norm import resolution_metrics
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects/'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif'
# read forward solution
forward = mne.read_forward_solution(fname_fwd)
# forward operator with fixed source orientations
mne.convert_forward_solution(forward, surf_ori=True,
force_fixed=True, copy=False)
# noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# evoked data for info
evoked = mne.read_evokeds(fname_evo, 0)
# make inverse operator from forward solution
# free source orientation
inverse_operator = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward, noise_cov=noise_cov, loose=0.,
depth=None)
# regularisation parameter
snr = 3.0
lambda2 = 1.0 / snr ** 2
Explanation: Compute spatial resolution metrics in source space
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference
distributions.
This example mimics some results from [1]_, namely Figure 3 (peak localisation
error for PSFs, L2-MNE vs dSPM) and Figure 4 (spatial deviation for PSFs,
L2-MNE vs dSPM).
End of explanation
rm_mne = make_inverse_resolution_matrix(forward, inverse_operator,
method='MNE', lambda2=lambda2)
ple_mne_psf = resolution_metrics(rm_mne, inverse_operator['src'],
function='psf', metric='peak_err')
sd_mne_psf = resolution_metrics(rm_mne, inverse_operator['src'],
function='psf', metric='sd_ext')
del rm_mne
Explanation: MNE
Compute resolution matrices, peak localisation error (PLE) for point spread
functions (PSFs), spatial deviation (SD) for PSFs:
End of explanation
rm_dspm = make_inverse_resolution_matrix(forward, inverse_operator,
method='dSPM', lambda2=lambda2)
ple_dspm_psf = resolution_metrics(rm_dspm, inverse_operator['src'],
function='psf', metric='peak_err')
sd_dspm_psf = resolution_metrics(rm_dspm, inverse_operator['src'],
function='psf', metric='sd_ext')
del rm_dspm
Explanation: dSPM
Do the same for dSPM:
End of explanation
brain_ple_mne = ple_mne_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=1,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_mne.add_text(0.1, 0.9, 'PLE MNE', 'title', font_size=16)
brain_ple_dspm = ple_dspm_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=2,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_dspm.add_text(0.1, 0.9, 'PLE dSPM', 'title', font_size=16)
# Subtract the two distributions and plot this difference
diff_ple = ple_mne_psf - ple_dspm_psf
brain_ple_diff = diff_ple.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=3,
clim=dict(kind='value', pos_lims=(0., 1., 2.)))
brain_ple_diff.add_text(0.1, 0.9, 'PLE MNE-dSPM', 'title', font_size=16)
Explanation: Visualize results
Visualise peak localisation error (PLE) across the whole cortex for PSF
End of explanation
brain_sd_mne = sd_mne_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=4,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_mne.add_text(0.1, 0.9, 'SD MNE', 'title', font_size=16)
brain_sd_dspm = sd_dspm_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=5,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_dspm.add_text(0.1, 0.9, 'SD dSPM', 'title', font_size=16)
# Subtract the two distributions and plot this difference
diff_sd = sd_mne_psf - sd_dspm_psf
brain_sd_diff = diff_sd.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=6,
clim=dict(kind='value', pos_lims=(0., 1., 2.)))
brain_sd_diff.add_text(0.1, 0.9, 'SD MNE-dSPM', 'title', font_size=16)
Explanation: These plots show that dSPM has generally lower peak localization error (red
color) than MNE in deeper brain areas, but higher error (blue color) in more
superficial areas.
Next we'll visualise spatial deviation (SD) across the whole cortex for PSF:
End of explanation |
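As an optional follow-up (a sketch that is not part of the original example; it assumes the four metric SourceEstimates computed above are still in memory and uses their `.data` arrays, one value per source vertex), the visual comparison can also be summarised numerically:
for label, stc_mne, stc_dspm in [('PLE', ple_mne_psf, ple_dspm_psf),
                                 ('SD', sd_mne_psf, sd_dspm_psf)]:
    # Difference of the vertex-wise metric values (MNE minus dSPM)
    diff = stc_mne.data - stc_dspm.data
    print('%s (PSF): mean MNE = %.2f, mean dSPM = %.2f, mean difference = %.2f'
          % (label, stc_mne.data.mean(), stc_dspm.data.mean(), diff.mean()))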
12,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 2
Imports
Step1: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Step2: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data
Step3: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
Step4: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 2
Imports
End of explanation
!head -n 30 open_exoplanet_catalogue.txt
Explanation: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:
https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue
A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
End of explanation
data = np.genfromtxt('open_exoplanet_catalogue.txt',delimiter=',')
assert data.shape==(1993,24)
Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
End of explanation
MD = data[:,2]
fig, ax = plt.subplots(figsize=(9,6));
ax.hist(MD,bins=20,range=(0,10));
plt.xlabel('Planetary Mass (Jupiter Masses)');
plt.ylabel('Number of planets');
plt.yticks(range(0,401,50));
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.title('Number of Planets vs Planetary Mass')
assert True # leave for grading
Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
End of explanation
OE = data[:,6]
SMA = data[:,5]
fig, ax = plt.subplots(figsize=(9,6));
ax.scatter(SMA,OE)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.semilogx()
plt.xlim(.01,100);
plt.ylim(-0.05,1);
plt.xlabel('Semimajor Axis');
plt.ylabel('Orbital Eccentricity');
plt.title('Semimajor Axis vs Orbital Eccentricity')
assert True # leave for grading
Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
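As a small aside (a hedged alternative sketch, not part of the exercise solution), the same log-scale scatter can be produced through the object-oriented Axes API instead of plt.semilogx(); the result should be equivalent:
fig2, ax2 = plt.subplots(figsize=(9, 6))
ax2.scatter(SMA, OE)
ax2.set_xscale('log')  # equivalent to calling plt.semilogx() on the current axes
ax2.set_xlim(0.01, 100)
ax2.set_ylim(-0.05, 1)
ax2.set_xlabel('Semimajor Axis')
ax2.set_ylabel('Orbital Eccentricity')
ax2.set_title('Semimajor Axis vs Orbital Eccentricity (OO API)')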
12,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting
Before anything else, we need to learn how to plot in Python. The easiest way to plot is with the plot function from the matplotlib library, so we import that function
Step1: and we use it like any other function, giving it two lists, one with all the $x$ values and another with all the $y$ values
Step2: However, nothing appears; for that we need to tell the matplotlib library to show the plots inline in our notebook, so we use
Step3: and if we plot now
Step4: In the same way, we can plot any function, created or imported; let's start with
Step5: The linspace function will help us create a linear array of data that defines the $x$ axis
Step6: Now we just have to feed these data into the $\sin$ function
Step7: With that we see we have our data, but the main thing is to plot it, so we give these two arrays to plot and obtain
Step8: One option I can use to format my plot is "o", which shows the data as points instead of a line connecting them
Step9: and after this brief introduction to plotting in Python, let's start with our problem
Step10: Using the Lagrange polynomial interpolation formula, we have
Step11: However, this solution only computes one of the Lagrange polynomials and only works for the case of $4$ data points; the first step toward a general solution is to separate these polynomials into each of their factors
Step12: With that we see the method is equivalent; we just need an iterative procedure to build these polynomials.
Note that the objects we created are functions, just like their elements
Step13: Also note that to create these functions I used a subset of the data that excludes the first element; I will create a list that excludes exactly that first element
Step14: and with a for loop I will append functions to a list holding each of the factors of the first Lagrange polynomial
Step15: Note that L0s is made up of functions, exactly like L01, L02 and L03
Step16: Now I just have to multiply these functions to obtain L0; I will use a function called reduce, which evaluates pairwise according to the rule I give it (multiplication)
Step17: Once I have L0, I can obtain the remaining polynomials
Step18: But that would force me to write code as many times as there are data points, which is not acceptable; just as the factors went into a list, I will put these Lagrange polynomials into a list
Step19: With that we have the functions associated with each Lagrange polynomial; these in turn have to be multiplied by datos_y, so for the first polynomial we create
Step20: Or, for all of them in one go
Step21: Finally, all these terms stored in Lfs can be summed using reduce again, but now with the addition rule
Step22: And when evaluated at a real number, it behaves as we expect
Step23: If we now create an array of data from $0$ to $10$, just to make sure we see all the data
Step24: And plotting this function together with the original data is easy
Step25: So we can create a function that does all the work for us | Python Code:
from matplotlib.pyplot import plot
Explanation: Plotting
Before anything else, we need to learn how to plot in Python. The easiest way to plot is with the plot function from the matplotlib library, so we import that function:
End of explanation
plot([0,1], [2,3])
Explanation: and we use it like any other function, giving it two lists, one with all the $x$ values and another with all the $y$ values:
End of explanation
%matplotlib inline
Explanation: However, nothing appears; for that we need to tell the matplotlib library to show the plots inline in our notebook, so we use:
End of explanation
plot([0,1],[2,3])
Explanation: and if we plot now:
End of explanation
from numpy import sin
from numpy import linspace
Explanation: In the same way, we can plot any function, created or imported; let's start with:
$$
y=\sin{x}
$$
Before anything else, let's import this function from the numpy library, along with the linspace function:
End of explanation
xs = linspace(0, 10, 100)
xs
Explanation: The linspace function will help us create a linear array of data that defines the $x$ axis:
End of explanation
ys = sin(xs)
ys
Explanation: Now we just have to feed these data into the $\sin$ function:
End of explanation
plot(xs, ys)
Explanation: With that we see we have our data, but the main thing is to plot it, so we give these two arrays to plot and obtain:
End of explanation
plot(xs, ys, "o")
Explanation: One option I can use to format my plot is "o", which shows the data as points instead of a line connecting them:
End of explanation
datos_x = [0, 1, 3, 6]
datos_y = [-3, 0, 5, 7]
Explanation: and after this brief introduction to plotting in Python, let's start with our problem:
What we want is a function that passes exactly through the following points:
$i$ | 0|1|2|3
---------|--|-|-|-
$f(x_i)$ |-3|0|5|7
$x_i$ | 0|1|3|6
The first step will be to store these data in Python variables, specifically lists:
End of explanation
L0 = lambda x: ((x - datos_x[1])*(x - datos_x[2])*(x - datos_x[3]))/((datos_x[0] - datos_x[1])*(datos_x[0] - datos_x[2])*(datos_x[0] - datos_x[3]))
L0(5)
Explanation: Using the Lagrange polynomial interpolation formula, we have:
$$
p(x) = L_0(x)f(x_0) + L_1(x)f(x_1) + L_2(x)f(x_2) + L_3(x)f(x_3)
$$
where each Lagrange polynomial is computed with the formula:
$$
L_i(x) = \prod_{j=0, j\ne i}^n \frac{x-x_j}{x_i-x_j}
$$
in a single line, this looks like:
End of explanation
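For concreteness (a worked expansion added here, using the data defined above with $x_0=0$, $x_1=1$, $x_2=3$, $x_3=6$), the first polynomial is:
$$
L_0(x) = \frac{(x-1)(x-3)(x-6)}{(0-1)(0-3)(0-6)} = \frac{(x-1)(x-3)(x-6)}{-18}
$$
so, for example, $L_0(5) = \frac{(4)(2)(-1)}{-18} = \frac{4}{9} \approx 0.44$, which is exactly what the one-line version above computes.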
L01 = lambda x: (x - datos_x[1])/(datos_x[0] - datos_x[1])
L02 = lambda x: (x - datos_x[2])/(datos_x[0] - datos_x[2])
L03 = lambda x: (x - datos_x[3])/(datos_x[0] - datos_x[3])
L0 = lambda x: L01(x)*L02(x)*L03(x)
L01(5)
L02(5)
L03(5)
L0(5)
Explanation: However, this solution only computes one of the Lagrange polynomials and only works for the case of $4$ data points; the first step toward a general solution is to separate these polynomials into each of their factors:
End of explanation
L0
L01, L02, L03
Explanation: With that we see the method is equivalent; we just need an iterative procedure to build these polynomials.
Note that the objects we created are functions, just like their elements:
End of explanation
dato_x0 = datos_x[0]
datos_x0 = datos_x[1:]
dato_x0
datos_x0
Explanation: Also note that to create these functions I used a subset of the data that excludes the first element; I will create a list that excludes exactly that first element:
End of explanation
L0s = []
for i in range(len(datos_x0)):
L0s.append(lambda x, i=i: (x - datos_x0[i])/(dato_x0 - datos_x0[i]))
Explanation: and with a for loop I will append functions to a list holding each of the factors of the first Lagrange polynomial:
End of explanation
L0s
Explanation: Note that L0s is made up of functions, exactly like L01, L02 and L03:
End of explanation
from functools import reduce
L0 = reduce(lambda x, y: lambda z :x(z)*y(z), L0s)
L0(5)
Explanation: Now I just have to multiply these functions to obtain L0; I will use a function called reduce, which evaluates pairwise according to the rule I give it (multiplication):
End of explanation
dato_x1 = datos_x[1]
datos_x1 = datos_x[:1] + datos_x[2:]
L1s = []
for i in range(len(datos_x1)):
L1s.append(lambda x, i=i: (x - datos_x1[i])/(dato_x1 - datos_x1[i]))
L1 = reduce(lambda x, y: lambda z :x(z)*y(z), L1s)
L1(5)
Explanation: Once I have L0, I can obtain the remaining polynomials:
End of explanation
Ls = []
for j in range(len(datos_x)):
dato_xi = datos_x[j]
datos_xi = datos_x[:j] + datos_x[j+1:]
Lis = []
for i in range(len(datos_xi)):
Lis.append(lambda x, i=i, dato_xi=dato_xi, datos_xi=datos_xi: (x - datos_xi[i])/(dato_xi - datos_xi[i]))
Li = reduce(lambda x, y: lambda z: x(z)*y(z), Lis)
Ls.append(Li)
Ls
Explanation: But that would force me to write code as many times as there are data points, which is not acceptable; just as the factors went into a list, I will put these Lagrange polynomials into a list:
End of explanation
Lf0 = lambda x: Ls[0](x)*datos_y[0]
Lf0(5)
Explanation: With that we have the functions associated with each Lagrange polynomial; these in turn have to be multiplied by datos_y, so for the first polynomial we create:
End of explanation
Lfs = []
for j in range(len(datos_y)):
Lfi = lambda x, j=j: Ls[j](x)*datos_y[j]
Lfs.append(Lfi)
Explanation: Or, for all of them in one go:
End of explanation
interp = reduce(lambda x, y: lambda z: x(z)+y(z), Lfs)
Explanation: Finally, all these terms stored in Lfs can be summed using reduce again, but now with the addition rule:
$$
p(x) = \sum_{i=0}^n L_i(x) f(x_i)
$$
End of explanation
interp(5)
Explanation: And when evaluated at a real number, it behaves as we expect:
End of explanation
xs = linspace(0, 10, 100)
ys = interp(xs)
plot(xs, ys)
Explanation: If we now create an array of data from $0$ to $10$, just to make sure we see all the data:
End of explanation
plot(xs, ys)
plot(datos_x, datos_y, "o")
Explanation: And plotting this function together with the original data is easy:
End of explanation
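Beyond the plot, a quick numeric check (an added sketch, assuming interp, datos_x and datos_y are still in scope) confirms that the interpolating polynomial reproduces the original data exactly at the nodes:
for x_i, y_i in zip(datos_x, datos_y):
    # interp(x_i) should match the original y_i at every node
    print(x_i, interp(x_i), y_i)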
def interpolacion_Lagrange(datos_x, datos_y):
Ls = []
for j in range(len(datos_x)):
dato_xi = datos_x[j]
datos_xi = datos_x[:j] + datos_x[j+1:]
Lis = []
for i in range(len(datos_xi)):
Lis.append(lambda x, i=i, dato_xi=dato_xi, datos_xi=datos_xi: (x - datos_xi[i])/(dato_xi - datos_xi[i]))
Li = reduce(lambda x, y: lambda z: x(z)*y(z), Lis)
Ls.append(Li)
Lfs = []
for j in range(len(datos_y)):
Lfi = lambda x, j=j: Ls[j](x)*datos_y[j]
Lfs.append(Lfi)
interp = reduce(lambda x, y: lambda z: x(z)+y(z), Lfs)
return interp
dx = [0, 1, 3, 6]
dy = [-3, 0, 5, 7]
poli = interpolacion_Lagrange(dx, dy)
xs = linspace(0, 6, 100)
ys = poli(xs)
plot(xs, ys)
plot(dx, dy , "o")
Explanation: So we can create a function that does all the work for us:
End of explanation |
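As an optional cross-check (an added sketch that assumes SciPy is installed; it is not part of the original notebook), scipy.interpolate.lagrange builds the same interpolating polynomial, so both implementations should agree at arbitrary points:
from scipy.interpolate import lagrange
poli_scipy = lagrange(dx, dy)
# Both callables should return (essentially) the same value
print(poli(2.5), poli_scipy(2.5))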
12,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary
We will use PyMC3 to estimate the posterior PDF for the true rating of a set of artificial teams using data from a simulated season. The idea is to test our model on a small set of artificial data where we know the answer to begin with, so we can learn about MCMC and make sure our model is sensible.
Step1: We have two teams, one of which is much better than the other. Let's make a simulated season between these teams.
Step2: Prior on each team is a normal distribution with mean of 0 and standard deviation of 1.
Step3: Hmm, something looks odd here. The posterior pdf for these two teams has significant overlap. Does this mean that our model is not sure about which team is better?
Step4: Ah, so the posterior pdf is actually quite clear | Python Code:
import pandas as pd
import os
import numpy as np
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
true_rating = {
'All Stars': 2.0,
'Average': 0.0,
'Just Having Fun': -1.2,
}
true_index = {
0: 'All Stars',
1: 'Average',
2: 'Just Having Fun',
}
n_teams = len(true_rating)
team_numbers = range(n_teams)
team_names = [true_index[i] for i in team_numbers]
true_rating
team_names
Explanation: Summary
We will use PyMC3 to estimate the posterior PDF for the true rating of a set of artificial teams using data from a simulated season. The idea is to test our model on a small set of artificial data where we know the answer to begin with, so we can learn about MCMC and make sure our model is sensible.
End of explanation
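To make the win model concrete, here is a small illustrative calculation (added here as a sketch; the helper name win_probability is mine, not from the original analysis) of the head-to-head probabilities implied by the logistic link and the true ratings defined above:
import itertools
def win_probability(rating_a, rating_b):
    # Same logistic link used in the simulation and in the PyMC3 model in this notebook
    return 1 / (1 + np.exp(-(rating_a - rating_b)))
for name_a, name_b in itertools.combinations(team_names, 2):
    p = win_probability(true_rating[name_a], true_rating[name_b])
    print("P({} beat {}) = {:.2f}".format(name_a, name_b, p))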
season_length = [5, 20, 100]
traces = []
simulatedSeasons = []
for n_games in season_length:
games = range(n_games)
database = []
for game in games:
game_row = {}
matchup = np.random.choice(team_numbers, size=2, replace=False)
team0 = true_index[matchup[0]]
team1 = true_index[matchup[1]]
game_row['Team A'] = team0
game_row['Team B'] = team1
game_row['Index A'] = matchup[0]
game_row['Index B'] = matchup[1]
deltaRating = true_rating[team0] - true_rating[team1]
p = 1 / (1 + np.exp(-deltaRating))
randomNumber = np.random.random()
outcome_A = p > randomNumber
game_row['Team A Wins'] = outcome_A
database.append(game_row)
simulatedSeason = pd.DataFrame(database)
simulatedSeasons.append(simulatedSeason)
with pm.Model() as model:
rating = pm.Normal('rating', mu=0, sd=1, shape=n_teams)
deltaRating = rating[simulatedSeason['Index A'].values] - rating[simulatedSeason['Index B'].values]
p = 1 / (1 + np.exp(-deltaRating))
win = pm.Bernoulli('win', p, observed=simulatedSeason['Team A Wins'].values)
trace = pm.sample(1000)
traces.append(trace)
simulatedSeasons[1].groupby('Team A').sum()
1 / (1 + np.exp(-2))
sns.set_context('poster')
f, axes = plt.subplots(nrows=3, ncols=1, figsize=(10, 15))
# plt.figure(figsize=(10, 5))
for ax_index, n_games in enumerate(season_length):
ax = axes[ax_index]
for team_number in team_numbers:
rating_posterior = traces[ax_index]['rating'][:, team_number]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label=team_name, ax=ax)
ax.legend()
ax.set_xlabel('Rating')
ax.set_ylabel('Density')
ax.set_title("Season length: {} games".format(n_games))
plt.tight_layout()
simulatedSeason = pd.DataFrame(database)
simulatedSeason
project_dir = '/Users/rbussman/Projects/BUDA/buda-ratings'
scores_dir = os.path.join(project_dir, 'data', 'raw', 'game_scores')
simulatedSeason.to_csv(os.path.join(scores_dir, 'artificial_scores_big.csv'))
Explanation: We have three teams, one of which is much better than the others. Let's simulate seasons of different lengths between these teams.
End of explanation
simulatedSeason.shape
with pm.Model() as model:
rating = pm.Normal('rating', mu=0, sd=1, shape=n_teams)
deltaRating = rating[simulatedSeason['Index A'].values] - rating[simulatedSeason['Index B'].values]
p = 1 / (1 + np.exp(-deltaRating))
win = pm.Bernoulli('win', p, observed=simulatedSeason['Team A Wins'].values)
with model:
trace = pm.sample(1000)
sns.set_context('poster')
plt.figure(figsize=(10, 5))
for team_number in team_numbers:
rating_posterior = trace['rating'][:, team_number]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label=team_name)
plt.legend()
plt.xlabel('Rating')
plt.ylabel('Density')
Explanation: Prior on each team is a normal distribution with mean of 0 and standard deviation of 1.
End of explanation
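As a brief illustration (an added sketch, purely for intuition about the prior, not part of the original analysis): drawing rating pairs from the Normal(0, 1) prior and pushing them through the logistic link shows what the model believes about win probabilities before seeing any games.
prior_ratings = np.random.normal(0, 1, size=(10000, 2))
prior_delta = prior_ratings[:, 0] - prior_ratings[:, 1]
prior_p = 1 / (1 + np.exp(-prior_delta))
sns.distplot(prior_p)
plt.xlabel('Prior win probability')
plt.ylabel('Density')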
sns.set_context('poster')
plt.figure(figsize=(10, 5))
for team_number in team_numbers[:-1]:
rating_posterior = trace['rating'][:, team_number] - trace['rating'][:, -1]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label="{} - {}".format(team_name, true_index[team_numbers[-1]]))
plt.legend()
plt.xlabel('Rating')
plt.ylabel('Density')
# Compare 'All Stars' (index 0) with 'Just Having Fun' (index 2) explicitly,
# so the computation matches the teams named in the printed message
allstars_vs_fun = trace['rating'][:, 0] - trace['rating'][:, 2]
gt0 = allstars_vs_fun > 0
print("Percentage of samples where 'All Stars' have a higher rating than 'Just Having Fun': {:.2f}%".format(
    100. * allstars_vs_fun[gt0].size / allstars_vs_fun.size))
Explanation: Hmm, something looks odd here. The posterior pdfs for these teams have significant overlap. Does this mean that our model is not sure about which team is better?
End of explanation
rating_posterior
.75 ** 14
estimatedratings = trace['rating'].mean(axis=0)
estimatedratings
for team_number, team_name in true_index.items():
    print("{}: True = {:.2f}; Estimated = {:.2f}".format(
        team_name, true_rating[team_name], estimatedratings[team_number]))
Explanation: Ah, so the posterior pdf is actually quite clear: There is a 99.22% chance that "All Stars" are better than "Just Having Fun".
How does our confidence change as a function of the number of games in the season?
End of explanation |
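One way to answer that question directly (a sketch added here, reusing the traces list computed earlier for the three simulated season lengths; team indices follow true_index, with 0 = 'All Stars' and 2 = 'Just Having Fun'):
for n_games, season_trace in zip(season_length, traces):
    # Posterior probability that 'All Stars' are rated above 'Just Having Fun'
    delta = season_trace['rating'][:, 0] - season_trace['rating'][:, 2]
    print("{} games: P(All Stars > Just Having Fun) = {:.2f}".format(
        n_games, (delta > 0).mean()))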
12,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create Basic Charts (Plots)
In this notebook we'll be creating a number of basic charts from our data, including a histogram, box plot, and scatterplot.
Step1: Import The Data
Step2: Create a Histogram for a Column
Create a histogram of the number of casualties
Step3: Plot the Data as a Probability Distribution
Step4: Plot a Cumulative Distribution Function
Step5: Show the Histogram as a Stepped Line
Step6: Plot Two Sets of Values in a Probability Distribution
Step7: Create a Customized Box Plot with Whiskers
Step8: Create a Basic Bar Chart of Casualties Over Time | Python Code:
# To show matplotlib plots in iPython Notebook we can use an iPython magic function
%matplotlib inline
# Import everything we need
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
Explanation: Create Basic Charts (Plots)
In this notebook we'll be creating a number of basic charts from our data, including a histogram, box plot, and bar chart.
End of explanation
# Import the dataset from the CSV file
accidents_data_file = '/Users/robert.dempsey/Dropbox/Private/Art of Skill Hacking/' \
'Books/Python Business Intelligence Cookbook/Data/Stats19-Data1979-2004/Accidents7904.csv'
accidents = pd.read_csv(accidents_data_file,
sep=',',
header=0,
index_col=False,
parse_dates=['Date'],
dayfirst=True,
tupleize_cols=False,
error_bad_lines=True,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False,
nrows=1000000
)
accidents.head()
Explanation: Import The Data
End of explanation
# Create a frequency table of casualty counts from the previous recipe
casualty_count = accidents.groupby('Date').agg({'Number_of_Casualties': np.sum})
# Create a histogram from the casualty count dataframe
plt.hist(casualty_count['Number_of_Casualties'],
bins=30)
plt.title('Number of Casualties Histogram')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()
Explanation: Create a Histogram for a Column
Create a histogram of the number of casualties
End of explanation
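An optional refinement (an added sketch; it assumes a reasonably recent NumPy, since np.histogram_bin_edges arrived in 1.15): let the Freedman-Diaconis rule pick the bin edges instead of hard-coding bins=30.
fd_bins = np.histogram_bin_edges(casualty_count['Number_of_Casualties'], bins='fd')
plt.hist(casualty_count['Number_of_Casualties'], bins=fd_bins)
plt.title('Number of Casualties Histogram (Freedman-Diaconis bins)')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()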
# Show the probability of finding a number in a bin
plt.hist(casualty_count['Number_of_Casualties'],
bins=30,
normed=True)
plt.title('Probability Distribution')
plt.xlabel('Value')
plt.ylabel('Probability')
plt.show()
Explanation: Plot the Data as a Probability Distribution
End of explanation
# Shows the probability of finding a number in a bin or any lower bin
plt.hist(casualty_count['Number_of_Casualties'],
bins=20,
normed=True,
cumulative=True)
plt.title('Cumulative Distribution')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()
Explanation: Plot a Cumulative Distribution Function
End of explanation
plt.hist(casualty_count['Number_of_Casualties'],
bins=20,
histtype='step')
plt.title('Number of Casualties Histogram')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()
Explanation: Show the Histogram as a Stepped Line
End of explanation
# Create a frequency table of vehicle counts
vehicle_count = accidents.groupby('Date').agg({'Number_of_Vehicles': np.sum})
# Plot the two dataframes
plt.hist(casualty_count['Number_of_Casualties'], bins=20, histtype='stepfilled', normed=True, color='b', label='Casualties')
plt.hist(vehicle_count['Number_of_Vehicles'], bins=20, histtype='stepfilled', normed=True, color='r', alpha=0.5, label='Vehicles')
plt.title("Casualties/Vehicles Histogram")
plt.xlabel("Value")
plt.ylabel("Probability")
plt.legend()
plt.show()
Explanation: Plot Two Sets of Values in a Probability Distribution
End of explanation
data_to_plot = [casualty_count['Number_of_Casualties'],
vehicle_count['Number_of_Vehicles']]
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axis instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(data_to_plot)
# Change the color and linewidth of the caps
for cap in bp['caps']:
cap.set(color='#7570b3', linewidth=2)
# Change the color and linewidth of the medians
for median in bp['medians']:
median.set(color='#b2df8a', linewidth=2)
# Change the style of the fliers and their fill
for flier in bp['fliers']:
flier.set(marker='o', color='#e7298a', alpha=0.5)
# Add x-axis labels
ax.set_xticklabels(['Casualties', 'Vehicles'])
# Save the figure
fig.savefig('fig1.png', bbox_inches='tight')
Explanation: Create a Customized Box Plot with Whiskers
End of explanation
# Create a figure instance
fig = plt.figure()
# Create an axis instance
ax = fig.add_subplot(111)
# Create the bar chart
ax.bar(range(len(casualty_count.index.values)), casualty_count['Number_of_Casualties'])
# Save the figure
fig.savefig('fig2.png')
Explanation: Create a Basic Bar Chart of Casualties Over Time
End of explanation |
12,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-cm6-1', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: CNRM-CM6-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
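For illustration only (a hypothetical value, not an assertion about this model's actual configuration), a completed cell of this kind replaces the TODO comment with a call such as:
# Hypothetical example -- the real entry must come from the model documentation:
# DOC.set_value("primitive equations")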
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
12,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
import fgm tables
Step1: Function libraries
ResBlock
res_block is the backbone of the ResNet-style structure. Each block has multiple branches, a bottleneck layer, and a skip connection built in, and this modularized design makes it easy to create deep neural networks.
Step2: data_reader
The read_h5_data function reads the table from the HDF5 file.
In the FGM case we chose not to scale the input features, since they all fall between 0 and 1. The output features, however, vary greatly. In the reaction region close to stoichiometry the gradients of the output properties are large; a good example is the progress-variable source term, which rises from 0 to about 1e5. The output features are therefore first transformed to a logarithmic scale and rescaled to the 0–1 range, and the outputs are normalised by their variance. This way the transformed values are large where the gradients are large, so training puts more focus on those regions. The same 'focus' idea drives the loss-function choice: MSE is selected over MAE because the squared error puts more weight on the samples that show large changes.
Step3: model
load data
Step4: build neural network model
Step5: model training
gpu training
Step6: Training loss plot
Step7: Inference test
prepare frontend for plotting
Step8: prepare data for plotting
GPU data prepare
Step9: interactive plot
Step10: Student network
The student network is trained on synthetic data generated by the full teacher network. It is meant to simplify the final model used in production.
Step11: save student network weights | Python Code:
!pip install gdown
!mkdir ./data
import gdown
def data_import():
ids = {
"tables_of_fgm.h5":"1XHPF7hUqT-zp__qkGwHg8noRazRnPqb0"
}
url = 'https://drive.google.com/uc?id='
for title, g_id in ids.items():
try:
output_file = open("/content/data/" + title, 'wb')
gdown.download(url + g_id, output_file, quiet=False)
except IOError as e:
print(e)
finally:
output_file.close()
data_import()
Explanation: import fgm tables
End of explanation
import tensorflow as tf
import keras
from keras.layers import Dense, Activation, Input, BatchNormalization, Dropout, concatenate
from keras import layers
def res_branch(bi, conv_name_base, bn_name_base, scale, input_tensor, n_neuron, stage, block,dp1, bn=False):
x_1 = Dense(scale * n_neuron, name=conv_name_base + '2a_'+str(bi))(input_tensor)
if bn:
x_1 = BatchNormalization(axis=-1, name=bn_name_base + '2a_'+str(bi))(x_1)
x_1 = Activation('relu')(x_1)
if dp1>0:
x_1 = Dropout(dp1)(x_1)
return x_1
def res_block(input_tensor,scale, n_neuron, stage, block, bn=False,branches=0):
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# scale = 2
x = Dense(scale * n_neuron, name=conv_name_base + '2a')(input_tensor)
if bn:
x = BatchNormalization(axis=-1, name=bn_name_base + '2a')(x)
x = Activation('relu')(x)
dp1=0.
if dp1 >0:
x = Dropout(dp1)(x)
branch_list=[x]
for i in range(branches-1):
branch_list.append(res_branch(i,conv_name_base, bn_name_base, scale,input_tensor,n_neuron,stage,block,dp1,bn))
if branches-1 > 0:
x = Dense(n_neuron, name=conv_name_base + '2b')(concatenate(branch_list,axis=-1))
# x = Dense(n_neuron, name=conv_name_base + '2b')(layers.add(branch_list))
else:
x = Dense(n_neuron, name=conv_name_base + '2b')(x)
if bn:
x = BatchNormalization(axis=-1, name=bn_name_base + '2b')(x)
x = layers.add([x, input_tensor])
x = Activation('relu')(x)
if dp1 >0:
x = Dropout(dp1)(x)
return x
Explanation: Function libraries
ResBlock
res_block is the backbone of the ResNet-style structure. Each block has multiple branches, a bottleneck layer, and a skip connection built in, and this modularized design makes it easy to create deep neural networks.
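A minimal usage sketch (added for illustration; the widths here are arbitrary): two res_block calls stacked into a small network. The skip-connection add requires n_neuron to match the width of the incoming tensor, and the stage/block arguments must differ per call because they feed the layer names.
from keras.models import Model
demo_in = Input(shape=(3,))
h = Dense(16, activation='relu')(demo_in)  # width must equal n_neuron below for the residual add
h = res_block(h, scale=2, n_neuron=16, stage=1, block='a', branches=2)
h = res_block(h, scale=2, n_neuron=16, stage=1, block='b', branches=2)
demo_out = Dense(1, activation='linear')(h)
Model(inputs=demo_in, outputs=demo_out).summary()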
End of explanation
import numpy as np
import pandas as pd
from sklearn.preprocessing import MaxAbsScaler, MinMaxScaler, StandardScaler  # MaxAbsScaler is needed by the 'tan' case below
class data_scaler(object):
def __init__(self):
self.norm = None
self.norm_1 = None
self.std = None
self.case = None
self.scale = 1
self.bias = 1e-20
# self.bias = 1
self.switcher = {
'min_std': 'min_std',
'std2': 'std2',
'std_min':'std_min',
'min': 'min',
'no':'no',
'log': 'log',
'log_min':'log_min',
'log2': 'log2',
'tan': 'tan'
}
def fit_transform(self, input_data, case):
self.case = case
if self.switcher.get(self.case) == 'min_std':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.norm.fit_transform(input_data)
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'std2':
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
if self.switcher.get(self.case) == 'std_min':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
out = self.norm.fit_transform(out)
if self.switcher.get(self.case) == 'min':
self.norm = MinMaxScaler()
out = self.norm.fit_transform(input_data)
if self.switcher.get(self.case) == 'no':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = input_data
if self.switcher.get(self.case) == 'log':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
self.std = StandardScaler()
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'log_min':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
self.norm = MinMaxScaler()
out = self.norm.fit_transform(out)
if self.switcher.get(self.case) == 'log2':
self.norm = MinMaxScaler()
self.norm_1 = MinMaxScaler()
out = self.norm.fit_transform(input_data)
out = np.log(np.asarray(out) + self.bias)
out = self.norm_1.fit_transform(out)
if self.switcher.get(self.case) == 'tan':
self.norm = MaxAbsScaler()
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
out = self.norm.fit_transform(out)
out = np.tan(out / (2 * np.pi + self.bias))
return out
def transform(self, input_data):
if self.switcher.get(self.case) == 'min_std':
out = self.norm.transform(input_data)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'std2':
out = self.std.transform(input_data)
if self.switcher.get(self.case) == 'std_min':
out = self.std.transform(input_data)
out = self.norm.transform(out)
if self.switcher.get(self.case) == 'min':
out = self.norm.transform(input_data)
if self.switcher.get(self.case) == 'no':
out = input_data
if self.switcher.get(self.case) == 'log':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'log_min':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
out = self.norm.transform(out)
if self.switcher.get(self.case) == 'log2':
out = self.norm.transform(input_data)
out = np.log(np.asarray(out) + self.bias)
out = self.norm_1.transform(out)
if self.switcher.get(self.case) == 'tan':
out = self.std.transform(input_data)
out = self.norm.transform(out)
out = np.tan(out / (2 * np.pi + self.bias))
return out
def inverse_transform(self, input_data):
if self.switcher.get(self.case) == 'min_std':
out = self.std.inverse_transform(input_data)
out = self.norm.inverse_transform(out)
if self.switcher.get(self.case) == 'std2':
out = self.std.inverse_transform(input_data)
if self.switcher.get(self.case) == 'std_min':
out = self.norm.inverse_transform(input_data)
out = self.std.inverse_transform(out)
if self.switcher.get(self.case) == 'min':
out = self.norm.inverse_transform(input_data)
if self.switcher.get(self.case) == 'no':
out = input_data
if self.switcher.get(self.case) == 'log':
out = self.std.inverse_transform(input_data)
out = (np.exp(-out) - self.bias) * self.scale
if self.switcher.get(self.case) == 'log_min':
out = self.norm.inverse_transform(input_data)
out = (np.exp(-out) - self.bias) * self.scale
if self.switcher.get(self.case) == 'log2':
out = self.norm_1.inverse_transform(input_data)
out = np.exp(out) - self.bias
out = self.norm.inverse_transform(out)
if self.switcher.get(self.case) == 'tan':
out = (2 * np.pi + self.bias) * np.arctan(input_data)
out = self.norm.inverse_transform(out)
out = self.std.inverse_transform(out)
return out
def read_h5_data(fileName, input_features, labels):
df = pd.read_hdf(fileName)
df = df[df['f']<0.45]
input_df=df[input_features]
in_scaler = data_scaler()
input_np = in_scaler.fit_transform(input_df.values,'no')
label_df=df[labels].clip(0)
# if 'PVs' in labels:
# label_df['PVs']=np.log(label_df['PVs']+1)
out_scaler = data_scaler()
label_np = out_scaler.fit_transform(label_df.values,'std2')
return input_np, label_np, df, in_scaler, out_scaler
Explanation: data_reader
The read_h5_data function reads the table from the HDF5 file.
In the FGM case we chose not to scale the input features, since they all fall between 0 and 1. The output features, however, vary greatly. In the reaction region close to stoichiometry the gradients of the output properties are large; a good example is the progress-variable source term, which rises from 0 to about 1e5. The output features are therefore first transformed to a logarithmic scale and rescaled to the 0–1 range, and the outputs are normalised by their variance. This way the transformed values are large where the gradients are large, so training puts more focus on those regions. The same 'focus' idea drives the loss-function choice: MSE is selected over MAE because the squared error puts more weight on the samples that show large changes.
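A quick round-trip check of the data_scaler interface on toy values spanning several orders of magnitude (added for illustration; the 'log' case is used here even though the pipeline above ultimately applies 'std2' to the outputs):
toy = np.array([[0.0], [1e-3], [1.0], [1e2], [1e5]])
sc = data_scaler()
scaled = sc.fit_transform(toy, 'log')      # -log(x + bias), then standardised
recovered = sc.inverse_transform(scaled)   # round-trips back to the original values
print(scaled.ravel())
print(recovered.ravel())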
End of explanation
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# define the labels
col_labels=['C2H3', 'C2H6', 'CH2', 'H2CN', 'C2H4', 'H2O2', 'C2H', 'CN',
'heatRelease', 'NCO', 'NNH', 'N2', 'AR', 'psi', 'CO', 'CH4', 'HNCO',
'CH2OH', 'HCCO', 'CH2CO', 'CH', 'mu', 'C2H2', 'C2H5', 'H2', 'T', 'PVs',
'O', 'O2', 'N2O', 'C', 'C3H7', 'CH2(S)', 'NH3', 'HO2', 'NO', 'HCO',
'NO2', 'OH', 'HCNO', 'CH3CHO', 'CH3', 'NH', 'alpha', 'CH3O', 'CO2',
'CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN', 'H', 'N', 'H2O',
'HCCOH', 'HCNN']
# Taking 0 out
col_labels.remove('AR')
col_labels.remove('heatRelease')
# labels = ['CH4','O2','H2O','CO','CO2','T','PVs','psi','mu','alpha']
labels = ['T','PVs']
# labels = ['T','CH4','O2','CO2','CO','H2O','H2','OH','psi']
# labels = ['CH2OH','HNCO','CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN']
# labels = np.random.choice(col_labels,20,replace=False).tolist()
# labels.append('PVs')
# labels = col_labels
print(labels)
input_features=['f','pv','zeta']
# read in the data
x_input, y_label, df, in_scaler, out_scaler = read_h5_data('./data/tables_of_fgm.h5',input_features=input_features, labels = labels)
Explanation: model
load data
End of explanation
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.models import Model
from keras.layers import Dense, Input
from keras.callbacks import ModelCheckpoint
# split into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)
n_neuron = 10
scale=3
branches=3
# %%
print('set up ANN')
# ANN parameters
dim_input = x_train.shape[1]
dim_label = y_train.shape[1]
batch_norm = False
# This returns a tensor
inputs = Input(shape=(dim_input,),name='input_1')
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(n_neuron, activation='relu')(inputs)
# less then 2 res_block, there will be variance
x = res_block(x, scale, n_neuron, stage=1, block='a', bn=batch_norm,branches=branches)
x = res_block(x, scale, n_neuron, stage=1, block='b', bn=batch_norm,branches=branches)
# x = res_block(x, scale, n_neuron, stage=1, block='c', bn=batch_norm,branches=branches)
x = Dense(100, activation='relu')(x)
x = Dropout(0.1)(x)
predictions = Dense(dim_label, activation='linear', name='output_1')(x)
model = Model(inputs=inputs, outputs=predictions)
model.summary()
Explanation: build neural network model
End of explanation
import keras.backend as K
from keras.callbacks import LearningRateScheduler
import math
def cubic_loss(y_true, y_pred):
return K.mean(K.square(y_true - y_pred)*K.abs(y_true - y_pred), axis=-1)
def coeff_r2(y_true, y_pred):
from keras import backend as K
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
def step_decay(epoch):
initial_lrate = 0.001
drop = 0.5
epochs_drop = 200.0
lrate = initial_lrate * math.pow(drop,math.floor((1+epoch)/epochs_drop))
return lrate
lrate = LearningRateScheduler(step_decay)
from keras import optimizers
batch_size = 1024*32
epochs = 60
vsplit = 0.1
loss_type='mse'
adam_op = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,epsilon=1e-8, decay=0.0, amsgrad=True)
model.compile(loss=loss_type, optimizer=adam_op, metrics=[coeff_r2])
# model.compile(loss=cubic_loss, optimizer=adam_op, metrics=['accuracy'])
# checkpoint (save the best model based validate loss)
!mkdir ./tmp
filepath = "./tmp/weights.best.cntk.hdf5"
checkpoint = ModelCheckpoint(filepath,
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
period=20)
# callbacks_list = [checkpoint]
callbacks_list = [lrate]
# fit the model
history = model.fit(
x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=vsplit,
verbose=2,
# callbacks=callbacks_list,
shuffle=True)
model.save('trained_fgm_nn.h5')
Explanation: model training
gpu training
End of explanation
fig = plt.figure()
plt.semilogy(history.history['loss'])
if vsplit:
plt.semilogy(history.history['val_loss'])
plt.title(loss_type)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
Explanation: Training loss plot
End of explanation
#@title import plotly
import plotly.plotly as py
import numpy as np
from plotly.offline import init_notebook_mode, iplot
# from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter
import plotly.graph_objs as go
def configure_plotly_browser_state():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
plotly: 'https://cdn.plot.ly/plotly-1.5.1.min.js?noext',
},
});
</script>
'''))
Explanation: Inference test
prepare frontend for plotting
End of explanation
from sklearn.metrics import r2_score
# model.load_weights("./tmp/weights.best.cntk.hdf5")
x_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)
y_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)
predict_val = model.predict(x_test,batch_size=1024*8)
predict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)
test_data=pd.concat([x_test_df,y_test_df],axis=1)
pred_data=pd.concat([x_test_df,predict_df],axis=1)
!rm sim_check.h5
test_data.to_hdf('sim_check.h5',key='test')
pred_data.to_hdf('sim_check.h5',key='pred')
df_test=pd.read_hdf('sim_check.h5',key='test')
df_pred=pd.read_hdf('sim_check.h5',key='pred')
zeta_level=list(set(df_test['zeta']))
zeta_level.sort()
res_sum=pd.DataFrame()
r2s=[]
r2s_i=[]
names=[]
maxs_0=[]
maxs_9=[]
for r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):
names.append(name)
r2s.append(r2)
maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())
maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())
for i in zeta_level:
r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],
df_test[df_test['zeta']==i][name]))
res_sum['name']=names
# res_sum['max_0']=maxs_0
# res_sum['max_9']=maxs_9
res_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]
# res_sum['r2']=r2s
tmp=np.asarray(r2s_i).reshape(-1,10)
for idx,z in enumerate(zeta_level):
res_sum['r2s_'+str(z)]=tmp[:,idx]
res_sum[3:]
no_drop=res_sum[3:]
no_drop
Explanation: prepare data for plotting
GPU data prepare
End of explanation
#@title Default title text
# species = np.random.choice(labels)
species = 'T' #@param {type:"string"}
z_level = 0 #@param {type:"integer"}
# configure_plotly_browser_state()
# init_notebook_mode(connected=False)
from sklearn.metrics import r2_score
df_t=df_test[df_test['zeta']==zeta_level[z_level]].sample(frac=1)
# df_p=df_pred.loc[df_pred['zeta']==zeta_level[1]].sample(frac=0.1)
df_p=df_pred.loc[df_t.index]
error=df_p[species]-df_t[species]
r2=round(r2_score(df_p[species],df_t[species]),4)
print(species,'r2:',r2,'max:',df_t[species].max())
fig_db = {
'data': [
{'name':'test data from table',
'x': df_t['f'],
'y': df_t['pv'],
'z': df_t[species],
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
}
},
{'name':'prediction from neural networks',
'x': df_p['f'],
'y': df_p['pv'],
'z': df_p[species],
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
},
},
{'name':'error in difference',
'x': df_p['f'],
'y': df_p['pv'],
'z': error,
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
},
}
],
'layout': {
'scene':{
'xaxis': {'title':'mixture fraction'},
'yaxis': {'title':'progress variable'},
'zaxis': {'title': species+'_r2:'+str(r2)}
}
}
}
# iplot(fig_db, filename='multiple-scatter')
iplot(fig_db)
model.save('trained_fgm_nn.h5')
model.save('trained_fgm_nn.h5')
%run -i k2tf.py --input_model='trained_fgm_nn.h5' --output_model='exported/fgm.pb'
Explanation: interactive plot
End of explanation
from keras.models import Model
from keras.layers import Dense, Input
from keras.callbacks import ModelCheckpoint
n_neuron = 50
# %%
print('set up student network')
# ANN parameters
dim_input = x_train.shape[1]
dim_label = y_train.shape[1]
batch_norm = False
# This returns a tensor
inputs = Input(shape=(dim_input,),name='input_1')
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(n_neuron, activation='relu',name='l1')(inputs)
x = Dense(n_neuron, activation='relu',name='l2')(x)
x = Dropout(0.1)(x)
predictions = Dense(dim_label, activation='linear', name='output_1')(x)
student_model = Model(inputs=inputs, outputs=predictions)
student_model.summary()
batch_size = 1024*32
epochs = 60
vsplit = 0.1
loss_type='mse'
adam_op = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,epsilon=1e-8, decay=0.0, amsgrad=True)
student_model.compile(loss=loss_type, optimizer=adam_op, metrics=[coeff_r2])
# model.compile(loss=cubic_loss, optimizer=adam_op, metrics=['accuracy'])
# checkpoint (save the best model based validate loss)
!mkdir ./tmp
filepath = "./tmp/student_weights.best.cntk.hdf5"
checkpoint = ModelCheckpoint(filepath,
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
period=20)
# callbacks_list = [checkpoint]
callbacks_list = [lrate]
x_train_teacher = x_train
y_train_teacher = model.predict(x_train, batch_size=1024*8)
# fit the model
history = student_model.fit(
x_train_teacher, y_train_teacher,
epochs=epochs,
batch_size=batch_size,
validation_split=vsplit,
verbose=2,
# callbacks=callbacks_list,
shuffle=True)
from sklearn.metrics import r2_score
x_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)
predict_val = student_model.predict(x_test,batch_size=1024*8)
predict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)
pred_data=pd.concat([x_test_df,predict_df],axis=1)
!rm sim_check.h5
pred_data.to_hdf('sim_check.h5',key='pred')
df_pred=pd.read_hdf('sim_check.h5',key='pred')
zeta_level=list(set(df_test['zeta']))
zeta_level.sort()
res_sum=pd.DataFrame()
r2s=[]
r2s_i=[]
names=[]
maxs_0=[]
maxs_9=[]
for r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):
names.append(name)
r2s.append(r2)
maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())
maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())
for i in zeta_level:
r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],
df_test[df_test['zeta']==i][name]))
res_sum['name']=names
# res_sum['max_0']=maxs_0
# res_sum['max_9']=maxs_9
res_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]
# res_sum['r2']=r2s
tmp=np.asarray(r2s_i).reshape(-1,10)
for idx,z in enumerate(zeta_level):
res_sum['r2s_'+str(z)]=tmp[:,idx]
res_sum[3:]
Explanation: Student network
The student network is trained on synthetic data generated by the full teacher network. It is meant to simplify the final model used in production.
End of explanation
import h5py
!rm student_model_weights.h5
student_model.save('student_model_weights.h5')
f = h5py.File('student_model_weights.h5','r')
dset=f['model_weights']
list(dset)
l1_w=dset['dense_5']['dense_5']['kernel:0'][:]
l1_b=dset['dense_5']['dense_5']['bias:0'][:]
l1_c=np.vstack([l1_w,l1_b])
l1_c=pd.Series(list(l1_c)).to_json()
l2_w=dset['dense_6']['dense_6']['kernel:0'][:]
l2_b=dset['dense_6']['dense_6']['bias:0'][:]
l2_c=np.vstack([l2_w,l2_b])
l2_c=pd.Series(list(l2_c)).to_json()
l3_w=dset['output_1']['output_1_2']['kernel:0'][:]
l3_b=dset['output_1']['output_1_2']['bias:0'][:]
l3_c=np.vstack([l3_w,l3_b])
l3_c=pd.Series(list(l3_c)).to_json()
!rm data.json
print("{",file=open('data.json','w'))
print('"l1":',l1_c,file=open('data.json','a'))
print(',"l2":',l2_c,file=open('data.json','a'))
print(',"output":',l3_c,file=open('data.json','a'))
print("}",file=open('data.json','a'))
test_id=888
print(x_test[test_id])
print(student_model.predict(x_test[test_id].reshape(-1,3)))
print(y_test[test_id])
l1_b
np.vstack([l1_w, l1_b])  # 'l1' was undefined; the kernel matrix read above is l1_w
student_model.predict(np.asarray([0.5,0.1,0.1]).reshape(-1,3))
student_model.save_weights('student_weights.h5')
Explanation: save student network weights
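A small added sanity check: rebuild the student forward pass in NumPy from the exported matrices (assuming l1_w/l1_b, l2_w/l2_b and l3_w/l3_b above really hold the l1, l2 and output_1 parameters) and compare with Keras. Dropout is inactive at inference, so the network is just two ReLU layers and a linear head.
def np_student(x):
    h = np.maximum(x @ l1_w + l1_b, 0)   # layer l1, ReLU
    h = np.maximum(h @ l2_w + l2_b, 0)   # layer l2, ReLU
    return h @ l3_w + l3_b               # linear output head
probe = np.asarray([[0.5, 0.1, 0.1]])
print(np_student(probe))                 # should agree with the line below if the weight groups match
print(student_model.predict(probe))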
End of explanation |
12,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fit X in the gmm model for 1, 2, ... 10 components. Hint
Step1: Calculate the AIC and BIC for each of these 10 models, and find the best model.
Step2: Plot the AIC and BIC
Step3: Define your PDF by evenly distributing 1000 points in some range. Look up what the eval method of the model instance does, and evaluate on your 1000 data points x. You should be able to extract a pdf, and the individual responsibilities for each of the components.
Step4: Plot x as a histogram, and the PDF values over your x_i values. | Python Code:
gmms = [GMM(i).fit(X) for i in range(1, 11)]  # one model each for 1 through 10 components
Explanation: Fit X in the gmm model for 1, 2, ... 10 components. Hint: You should create 10 instances of a GMM model, e.g. GMM(?).fit(X) would be one instance of a GMM model with ? components.
End of explanation
aics = [g.aic(X) for g in gmms]
bics = [g.bic(X) for g in gmms]
Explanation: Calculate the AIC and BIC for each of these 10 models, and find the best model.
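To actually select the best model from the scores above (an added sketch; lower is better for both criteria):
import numpy as np
best_k_aic = int(np.argmin(aics)) + 1   # +1 because gmms[0] has one component
best_k_bic = int(np.argmin(bics)) + 1
best_model = gmms[int(np.argmin(bics))]
print("best by AIC:", best_k_aic, "components; best by BIC:", best_k_bic, "components")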
End of explanation
plt.plot(aics)
plt.plot(bics)
Explanation: Plot the AIC and BIC
End of explanation
# Data x_i
x = np.linspace(-6,6,1000)
pdf = gmms[2].score_samples(x.reshape(-1,1))
Explanation: Define your PDF by evenly distributing 1000 points in some range. Look up what the eval method of the model instance does, and evaluate on your 1000 data points x. You should be able to extract a pdf, and the individual responsibilities for each of the components.
End of explanation
plt.plot(np.linspace(-6,6,1000),np.exp(pdf[0]))
plt.hist(X,bins='auto',normed=True)
Explanation: Plot x as a histogram, and the PDF values over your x_i values.
End of explanation |
12,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the data and review the basic information
The user's basic information, overdue status, service contacts (接待对象), and amounts
'注册日期', '来源渠道', '用户级别', '拨打次数', '客户ID', '客户性别', '年龄', '客户设备', '客户所属省',
'客户公司地址', '客户授信状态', '评分原因', '标识原因', '总分', '初审审核说明', '审核人', '备注', '授信额度',
'当前授信额度', '可用额度', '授信进件时间', '授信完成时间', '认证 1', '认证 2', '认证 3', '认证 4',
'认证 5', '认证 6', '认证 7', '借款合计', '责任员工手机号', '责任员工岗位', '与责任员工的关系级次',
'确认状态', '确认用户来源', '确认备注', '营业部经理手机号', '城市经理手机号', '银行卡行', '入司时间',
'入司时长(月)', '从业时间', '客户授信入口', '风控分', '芝麻分', '下次是否需要做运营商认证', '已使用额度',
'冻结额度', '有没有做过运营商', '是否离职', '是否命中预拉黑名单表', '首次借款时间', '最新一次借款时间', '借款次数',
'还款计划总条数', '到期应还次数', '正常未还次数', '正常还款中次数', '正常已还次数', '逾期未还次数', '逾期已还次数',
'逾期还款中次数', '逾期1天已还次数', '逾期1_3天次数', '逾期4_7天次数', '逾期8_20天次数', '逾期21天以上次数',
'最长逾期天数', '到期应还金额', '标签'
Step1: 有缺失数值的特征: 总分,与责任员工的关系层次 、入司时长
参考分数
是否实名
Step2: 运营商、年龄、地域分布,3个月内是否地点发生过变动
Step3: 葫芦分 直接黑人 间接黑人 认识间接黑人的联系人个数
Step4: 线上消费分期出现次数 线下消费分期出现次数 信用卡代还出现次数 小额快速贷出现次数 线上现金贷出现次数 线下现金贷出现次数 其他 趋势
Step5: 身份证挖掘,是否与运营商匹配 产生新的指标
Step6: 绑定其他手机号码、个数、归属地
Step7: 定位信息
Step8: 多头借贷次数 身份证三个月关联手机号个数 手机号六个月星网模型大小 手机号三个月关联身份证个数
Step9: 行为分析,地理变更
Step10: 用户画像标签构建
用户标签:性别、年龄、地域、学历职业
消费标签:消费习惯、购买意向、促销敏感度
行为标签:时间频次分布、时长、访问路径探索用户使用APP习惯
内容分析:浏览内容、停留时长
Step11: 人口属性统计,来源SL-1,SL-4
Step12: 用户社会属性 家庭成员--一般从亲情号和通讯录获取,并未提供。 家庭住址、公司数据, 是否是学生、老师(地理链接),职业身份
Step13: 用户消费特征 包含子类:用户消费类目偏好(缺乏电商数据) 暂时只能分析手机型号 借贷次数 来粗略判断
Step14: 用户价值属性 用户信用价值细分 芝麻分、来源渠道
Step15: 用户生命周期 无资产、无消费信息,无法进行聚类或者RFM切分 入司时长(月) 从业时间
Step16: 用户风险控制 子类:黑灰用户识别 、、、是否实名、、、、多头多贷用户识别\授信状况、用户评价、、、、信息重名等
Step17: 整合成标签用户标签大宽表
Step18: 输出文档报表 | Python Code:
user_info = pd.read_excel('2000_sample.xlsx', 'user_info')
user_info.head()
# 独立检验
user_info.客户ID.unique().shape
# 总的分析
user_info.describe()
Explanation: Load the data and review the basic information
The user's basic information, overdue status, service contacts (接待对象), and amounts
'注册日期', '来源渠道', '用户级别', '拨打次数', '客户ID', '客户性别', '年龄', '客户设备', '客户所属省',
'客户公司地址', '客户授信状态', '评分原因', '标识原因', '总分', '初审审核说明', '审核人', '备注', '授信额度',
'当前授信额度', '可用额度', '授信进件时间', '授信完成时间', '认证 1', '认证 2', '认证 3', '认证 4',
'认证 5', '认证 6', '认证 7', '借款合计', '责任员工手机号', '责任员工岗位', '与责任员工的关系级次',
'确认状态', '确认用户来源', '确认备注', '营业部经理手机号', '城市经理手机号', '银行卡行', '入司时间',
'入司时长(月)', '从业时间', '客户授信入口', '风控分', '芝麻分', '下次是否需要做运营商认证', '已使用额度',
'冻结额度', '有没有做过运营商', '是否离职', '是否命中预拉黑名单表', '首次借款时间', '最新一次借款时间', '借款次数',
'还款计划总条数', '到期应还次数', '正常未还次数', '正常还款中次数', '正常已还次数', '逾期未还次数', '逾期已还次数',
'逾期还款中次数', '逾期1天已还次数', '逾期1_3天次数', '逾期4_7天次数', '逾期8_20天次数', '逾期21天以上次数',
'最长逾期天数', '到期应还金额', '标签'
End of explanation
SZ = pd.read_excel('2000_sample.xlsx', 'SZ')
SZ.head()
SZ.user_id.unique().shape
SZ.shape
# 取同一个用户下的最后一行
Explanation: Features with missing values: 总分 (total score), 与责任员工的关系级次 (relationship level with the responsible employee), 入司时长 (tenure in months)
Reference score
Whether the account is real-name verified
End of explanation
SL_1 = pd.read_excel('2000_sample.xlsx', 'SL-1')
SL_1.head(5)
Explanation: Carrier, age and regional distribution, and whether the location changed within the last 3 months
End of explanation
SL_2 = pd.read_excel('2000_sample.xlsx', 'SL-2')
SL_2.user_id.unique()
#取最后一条
SL_2.head()
Explanation: 葫芦分 (Hulu score), number of direct blacklisted contacts, indirect blacklisted contacts, and contacts who know indirect blacklisted people
End of explanation
SL_3 = pd.read_excel('2000_sample.xlsx', 'SL-3')
SL_3.user_id.unique().shape
SL_3.head()
Explanation: Occurrence counts of online installment purchases, offline installment purchases, credit-card repayment-on-behalf, small fast loans, online cash loans, offline cash loans and other products, plus the overall trend
End of explanation
SL_4 = pd.read_excel('2000_sample.xlsx', 'SL-4')
SL_4.head(5)
Explanation: ID-card mining — check whether it matches the carrier records and derive a new indicator from the comparison
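One way to build such a flag (an added sketch, assuming 手机归属省份 lives in SL-1 and 身份证户籍省份 in SL-4, as the later merge suggests; this is a rough per-record check, not deduplicated by time):
tmp_match = SL_1[['user_id', '手机归属省份']].merge(
    SL_4[['user_id', '身份证户籍省份']], on='user_id', how='inner')
tmp_match['id_phone_province_match'] = (
    tmp_match['手机归属省份'].astype(str).str.strip()
    == tmp_match['身份证户籍省份'].astype(str).str.strip())
tmp_match['id_phone_province_match'].value_counts()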
End of explanation
SL_5 = pd.read_excel('2000_sample.xlsx', 'SL-5')
SL_5.head(5)
Explanation: Other phone numbers bound to the user, how many there are, and their home locations
End of explanation
BQS_1 = pd.read_excel('2000_sample.xlsx', 'BQS-1')
BQS_1.head()
Explanation: Geolocation information
End of explanation
BQS_2 = pd.read_excel('2000_sample.xlsx', 'BQS-2')
BQS_2.用户Id.unique().shape
BQS_2.shape
BQS_2.head()
Explanation: Multi-platform borrowing count, number of phone numbers linked to the ID card within 3 months, 6-month star-network model size for the phone number, and number of ID cards linked to the phone number within 3 months
End of explanation
BQS_3 = pd.read_excel('2000_sample.xlsx', 'BQS-3')
BQS_3.用户ID.unique().shape
BQS_3.head(5)
Explanation: Behaviour analysis and location changes
End of explanation
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "https://pic3.zhimg.com/80/v2-1919e069b796f92075c128af5eaf7e9a_hd.png")
Explanation: Building the user-portrait label system
Demographic labels: gender, age, region, education and occupation
Consumption labels: spending habits, purchase intent, promotion sensitivity
Behaviour labels: time and frequency distribution, session length, and navigation paths that reveal in-app habits
Content labels: browsed content and dwell time
End of explanation
# 取最近的一条信息
tmp_1 = SL_1.groupby(by='user_id').apply(lambda x: x[x.ctime==max(x.ctime)])
tmp_2 = SL_4.groupby(by='user_id').apply(lambda x: x[x.ctime==max(x.ctime)])
user_info.columns
tmp = pd.DataFrame(user_info[['客户ID','是否离职','客户公司地址']]).merge(tmp_1,left_on='客户ID',right_on='user_id',how='left')
tmp = tmp.merge(tmp_2,left_on='客户ID',right_on='user_id',how='left')
tmp.head(5)
popu_attributes = tmp[['客户ID','是否离职','客户公司地址','省','城市', '地区', '手机运营商', '手机归属省份', '手机归属城市',
'关联身份证数量','身份证户籍城市',
'身份证是否是有效身份证', '身份证户籍省份','身份证户籍地区', '生日','年龄_y', '性别_y']]
# age-band segmentation (see the sketch after this cell's explanation)
popu_attributes.head(5)
popu_attributes.shape
Explanation: Demographic attribute statistics, drawn from SL-1 and SL-4
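The age-band segmentation flagged in the comment above is left unimplemented; a possible sketch with pd.cut on 年龄_y — the band edges are an assumption:
popu_attributes = popu_attributes.copy()            # avoid writing into a slice of tmp
age_bins = [0, 25, 35, 45, 55, 120]
age_labels = ['<=25', '26-35', '36-45', '46-55', '55+']
popu_attributes['age_band'] = pd.cut(popu_attributes['年龄_y'], bins=age_bins, labels=age_labels)
popu_attributes['age_band'].value_counts()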
End of explanation
# user_info.客户公司地址
#social attributes
Explanation: User social attributes — family members are usually obtained from family-plan numbers or the contact list, neither of which is provided here; home address and employer data, plus whether the user is a student or a teacher (via geographic linkage), indicate occupational identity
End of explanation
# 消费情况
tmp_1 = SL_3.groupby(by = 'user_id',group_keys = False).apply(lambda x: x[x.ctime==max(x.ctime)])
tmp_1.head()
# 手机品牌 操作系统
tmp_2 = BQS_1.groupby(by='用户ID',group_keys = False).apply(lambda x: x[x.ctime==max(x.ctime)])
tmp_2.dtypes
tmp = pd.DataFrame(user_info['客户ID']).merge(tmp_1,left_on='客户ID',right_on='user_id',how='left')
#tmp = tmp.merge(tmp_2,left_on='客户ID',right_on='用户ID',how='left')
tmp.head(5)
tmp = tmp.merge(tmp_2,left_on='客户ID',right_on='用户ID',how='left')
tmp.head(5)
tmp.shape
consum_attributes = tmp[['客户ID','线上消费分期出现次数', '线下消费分期出现次数', '信用卡代还出现次数',
'小额快速贷出现次数', '线上现金贷出现次数', '线下现金贷出现次数', '其他',
'操作系统','手机运营商家']]
Explanation: User consumption features. Sub-category: consumption category preference (no e-commerce data available), so for now we can only make a rough judgement from the phone model and the number of loans
End of explanation
# 芝麻分,客户来源
tmp_1 = user_info[['客户ID','芝麻分','来源渠道']]
value_attributes = tmp_1
value_attributes.shape
Explanation: User value attributes — credit-value segmentation based on 芝麻分 (Zhima credit score) and 来源渠道 (acquisition channel)
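A possible segmentation sketch (added; the tier boundaries are an assumption, loosely following common Zhima-score bands):
value_attributes = value_attributes.copy()
value_attributes['zhima_tier'] = pd.cut(
    value_attributes['芝麻分'],
    bins=[0, 550, 600, 650, 700, 950],
    labels=['poor', 'fair', 'good', 'very good', 'excellent'])
value_attributes.groupby('来源渠道')['zhima_tier'].value_counts().head(10)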
End of explanation
lifecycl_attributes = user_info[['客户ID','入司时长(月)','从业时间']]
lifecycl_attributes.head()
Explanation: User life cycle — with no asset or consumption information, clustering or RFM segmentation is not feasible, so 入司时长(月) (tenure in months) and 从业时间 (years in the industry) are used instead
End of explanation
# source
# 取最近的一条信息
tmp_1 = SL_2.groupby(by='user_id').apply(lambda x: x[x.ctime==max(x.ctime)])
tmp_1.head()
tmp_1.user_id.unique().shape
tmp_1.user_id.shape
tmp_2 = BQS_2.groupby(by='用户Id').apply(lambda x: x[x.ctime==max(x.ctime)])
tmp_2.head()
tmp_2 = tmp_2.drop_duplicates()
tmp_3 = SZ.groupby(by='user_id').apply(lambda x: x[x.ctime==max(x.ctime)])
tmp_3 = tmp_3[['user_id','实名']]
tmp_3.head()
# 客户授信和评价
tmp_4 = user_info[['客户ID','客户授信状态','标签']]
tmp_4.head()
# # 绑定不同手机个数和机构数 # 重新构造
# tmp_5 = SL_5.groupby(by='user_id').apply(lambda x: x[x.ctime == max(x.ctime)])
# tmp_5 = tmp_5.drop_duplicates()
# tmp_5.head(5)
tmp = pd.DataFrame(user_info.客户ID).merge(tmp_1, left_on='客户ID', right_on = 'user_id', how='left' )
tmp = tmp.merge(tmp_2,left_on='客户ID',right_on = '用户Id', how='left')
tmp = tmp.merge(tmp_3,left_on='客户ID',right_on = 'user_id', how='left')
tmp = tmp.merge(tmp_4,left_on='客户ID',right_on = '客户ID', how='left')
# tmp = tmp.merge(tmp_5,left_on='客户ID',right_on = 'user_id', how='left')
tmp.head()
tmp.columns
tmp.shape
risk_attributes = tmp[['客户ID','葫芦分','直接联系人', '直接黑人', '间接黑人','认识间接黑人的联系人个数', '认识间接黑人的联系人比例',
'多头借贷次数','身份证三个月关联手机号个数', '手机号六个月星网模型大小', '手机号三个月关联身份证个数',
'实名','客户授信状态', '标签']]
# ,
# '绑定其他手机号码', '此号码绑定其他姓名个数','查询此手机号的机构数']]
risk_attributes.head()
Explanation: User risk control. Sub-categories: black/grey-list user identification, real-name verification, multi-platform borrowing identification, credit-granting status, user rating, duplicate-information checks, and so on
End of explanation
#在确认数据唯一的情况下,进行左链接
tmp_1 = popu_attributes.merge(consum_attributes,left_on = '客户ID', right_on = '客户ID',how = 'left')
tmp_2 = tmp_1.merge( value_attributes ,left_on = '客户ID', right_on = '客户ID',how = 'left')
tmp_3 = tmp_2.merge(lifecycl_attributes,left_on = '客户ID', right_on = '客户ID',how = 'left')
portrait_attributes = tmp_3.merge( risk_attributes, left_on = '客户ID', right_on = '客户ID',how = 'left')
portrait_attributes.columns
portrait_attributes = portrait_attributes.rename(columns={'入司时长(月)':'入司时长'})
portrait_attributes.head()
Explanation: Merge everything into a single wide user-label table
End of explanation
with open('chuangjin_template1.htm', 'r') as f:
html_string_1 = f.read()
with open('chuangjin_template2.htm', 'r') as f:
html_string_2 = f.read()
with open('chuangjin_template3.htm', 'r') as f:
html_string_3 = f.read()
one_info = portrait_attributes.iloc[1]
portrait_attributes.columns
html_string_down = html_string_2.format(one_info.客户ID,one_info.身份证是否是有效身份证, '等待处理', one_info.性别_y, one_info.年龄_y,
one_info.生日, one_info.是否离职 , '工作城市等待处理', one_info.客户公司地址, one_info.手机归属省份,
one_info.手机归属城市, one_info.身份证户籍省份 , one_info.身份证户籍城市 , one_info.线上消费分期出现次数, one_info.线下消费分期出现次数,
one_info.信用卡代还出现次数 , one_info.小额快速贷出现次数, one_info.线上现金贷出现次数, one_info.线下现金贷出现次数,one_info.其他,
one_info.操作系统, one_info.手机运营商,one_info.芝麻分, one_info.来源渠道, one_info.入司时长,
one_info.从业时间,one_info.葫芦分, one_info.直接联系人, one_info.直接黑人,one_info.间接黑人,
one_info.认识间接黑人的联系人个数, one_info.认识间接黑人的联系人比例, one_info.多头借贷次数,one_info.身份证三个月关联手机号个数, one_info.手机号六个月星网模型大小,
one_info.手机号三个月关联身份证个数, one_info.实名, one_info.客户授信状态, one_info.标签,'暂未整合',
'暂未整合', '暂未整合'
)
# 拼接成一个
html_string = html_string_1+html_string_down+html_string_3
def save_to_file(file_name, contents):
fh = open(file_name, 'w')
fh.write(contents)
fh.close()
save_to_file('chuangjin-baogao.htm', html_string)
Explanation: Output the report document
End of explanation |
12,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interruptible optimization runs with checkpoints
Christian Schell, Mai 2018
Reformatted by Holger Nahrstaedt 2020
.. currentmodule
Step1: Simple example
We will use pretty much the same optimization problem as in the
sphx_glr_auto_examples_bayesian-optimization.py
notebook. Additionally we will instantiate the
Step2: Now let's assume this did not finish at once but took some long time
Step3: Continue the search
The previous results can then be used to continue the optimization process | Python Code:
print(__doc__)
import sys
import numpy as np
np.random.seed(777)
import os
Explanation: Interruptible optimization runs with checkpoints
Christian Schell, May 2018
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
Problem statement
Optimization runs can take a very long time and even run for multiple days.
If for some reason the process has to be interrupted results are irreversibly
lost, and the routine has to start over from the beginning.
With the help of the :class:callbacks.CheckpointSaver callback the optimizer's current state
can be saved after each iteration, allowing to restart from that point at any
time.
This is useful, for example,
if you don't know how long the process will take and cannot hog computational resources forever
if there might be system failures due to shaky infrastructure (or colleagues...)
if you want to adjust some parameters and continue with the already obtained results
End of explanation
from skopt import gp_minimize
from skopt import callbacks
from skopt.callbacks import CheckpointSaver
noise_level = 0.1
def obj_fun(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() \
* noise_level
checkpoint_saver = CheckpointSaver("./checkpoint.pkl", compress=9) # keyword arguments will be passed to `skopt.dump`
gp_minimize(obj_fun, # the function to minimize
[(-20.0, 20.0)], # the bounds on each dimension of x
x0=[-20.], # the starting point
acq_func="LCB", # the acquisition function (optional)
n_calls=10, # number of evaluations of f including at x0
n_random_starts=3, # the number of random initial points
callback=[checkpoint_saver],
# a list of callbacks including the checkpoint saver
random_state=777)
Explanation: Simple example
We will use pretty much the same optimization problem as in the
sphx_glr_auto_examples_bayesian-optimization.py
notebook. Additionally we will instantiate the :class:callbacks.CheckpointSaver
and pass it to the minimizer:
End of explanation
from skopt import load
res = load('./checkpoint.pkl')
res.fun
Explanation: Now let's assume this did not finish at once but took some long time: you
started this on Friday night, went out for the weekend and now, Monday
morning, you're eager to see the results. However, instead of the
notebook server you only see a blank page and your colleague Garry
tells you that he had had an update scheduled for Sunday noon – who
doesn't like updates?
:class:gp_minimize did not finish, and there is no res variable with the
actual results!
Restoring the last checkpoint
Luckily we employed the :class:callbacks.CheckpointSaver and can now restore the latest
result with :class:skopt.load
(see sphx_glr_auto_examples_store-and-load-results.py for more
information on that)
End of explanation
x0 = res.x_iters
y0 = res.func_vals
gp_minimize(obj_fun, # the function to minimize
[(-20.0, 20.0)], # the bounds on each dimension of x
x0=x0, # already examined values for x
y0=y0, # observed values for x0
acq_func="LCB", # the acquisition function (optional)
n_calls=10, # number of evaluations of f including at x0
n_random_starts=3, # the number of random initialization points
callback=[checkpoint_saver],
random_state=777)
Explanation: Continue the search
The previous results can then be used to continue the optimization process:
End of explanation |
12,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step11: Vertex constants
Setup up the following constants for Vertex
Step12: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify
Step13: Container (Docker) image
Next, we will set the Docker container images for training and prediction
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15
Step14: Machine Type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving.
Step16: Train a model
There are two ways you can train a custom model using a container image
Step17: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
boot_disk_type
Step18: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following
Step19: Assemble a job specification
Now assemble the complete description for the custom job specification
Step20: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step21: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary
Step22: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step23: Train the model
Now start the training of your custom training job on Vertex. Use this helper function create_custom_job, which takes the following parameter
Step24: Now get the unique identifier for the custom job you created.
Step25: Get information on a custom job
Next, use this helper function get_custom_job, which takes the following parameter
Step26: Deployment
Training the above model may take upwards of 20 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, we will need to know the location of the saved model, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/saved_model.pb'.
Step27: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step28: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
Step29: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step30: Get dataset statistics
The training script is designed to return dataset statistics you will need for serving predictions on data items that have not otherwise been preprocessed -- feature normalization, which is also referred to as rescaling. Note, that the x_test data was already preprocessed, so we don't need to do additional feature normalization if we use that data from x_test.
Instead, we set aside a copy of one data item that was not feature normalized. We will use this subsequently when doing a prediction.
Step31: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
Step32: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters
Step33: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter
Step34: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps
Step35: Now get the unique identifier for the Endpoint resource you created.
Step36: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step37: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step38: Make a online prediction request
Now do a online prediction to your deployed model.
Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
Step39: Send the prediction request
Ok, now you have a test data item. Use this helper function predict_data, which takes the parameters
Step40: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters
Step41: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: Custom training tabular regression model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_tabular_regression_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_tabular_regression_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom tabular regression model for online prediction.
Dataset
The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a prediction on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Google Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Set up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: For GPU support, TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Container (Docker) image
Next, we will set the Docker container images for training and prediction
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest
TensorFlow 2.4
gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest
XGBoost
gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1
Scikit-learn
gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest
Pytorch
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest
For the latest list, see Pre-built containers for training.
TensorFlow 1.15
gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest
XGBoost
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest
Scikit-learn
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest
For the latest list, see Pre-built containers for prediction
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Machine Type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
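For example, a high-memory machine with 16 vCPUs would be assembled as "n1-highmem" + "-" + "16" = "n1-highmem-16" (an illustrative value only; the code below keeps n1-standard with 4 vCPUs).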
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving.
End of explanation
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
Explanation: Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Prepare your custom job specification
Now that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:
worker_pool_spec : The specification of the type of machine(s) you will use for training and how many (single or distributed)
python_package_spec : The specification of the Python package to be installed with the pre-built container.
Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.
- accelerator_count: The number of accelerators.
End of explanation
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
PARAM_FILE = BUCKET_NAME + "/params.txt"
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
"--param-file=" + PARAM_FILE,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
"--param-file=" + PARAM_FILE,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_boston.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec : (optional) The disk storage specification.
python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.
Let's dive deeper now into the python package specification:
-executor_image_uri: This is the docker image which is configured for your custom training job.
-package_uris: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.
-python_module: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking trainer.task.py -- note that it was not necessary to append the .py suffix.
-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
- "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
- indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
- "--param-file=" + PARAM_FILE: The Cloud Storage location for storing feature normalization values.
End of explanation
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
Explanation: Assemble a job specification
Now assemble the complete description for the custom job specification:
display_name: The human readable name you assign to this custom job.
job_spec: The specification for the custom job.
worker_pool_specs: The specification for the machine VM instances.
base_output_directory: This tells the service the Cloud Storage location where to save the model artifacts (when variable DIRECT = False). The service will then pass the location to the training script as the environment variable AIP_MODEL_DIR, and the path will be of the form: <output_uri_prefix>/model
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note: when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for _ in range(13):
x_train[_], max = scale(x_train[_])
x_test[_], _ = scale(x_test[_])
params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary:
Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads Boston Housing dataset from TF.Keras builtin datasets
Builds a simple deep neural network model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
Saves the maximum value for each feature f.write(str(params)) to the specified parameters file.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
Explanation: Train the model
Now start the training of your custom training job on Vertex. Use this helper function create_custom_job, which takes the following parameter:
-custom_job: The specification for the custom job.
The helper function calls job client service's create_custom_job method, with the following parameters:
-parent: The Vertex location path to Dataset, Model and Endpoint resources.
-custom_job: The specification for the custom job.
You will display a handful of the fields returned in response object, with the two that are of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom training job. You save this identifier for using in subsequent steps.
response.state: The current state of the custom training job.
End of explanation
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
Explanation: Now get the unique identifier for the custom job you created.
End of explanation
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
Explanation: Get information on a custom job
Next, use this helper function get_custom_job, which takes the following parameter:
name: The Vertex fully qualified identifier for the custom job.
The helper function calls the job client service's get_custom_job method, with the following parameter:
name: The Vertex fully qualified identifier for the custom job.
If you recall, you got the Vertex fully qualified identifier for the custom job in the response.name field when you called the create_custom_job method, and saved the identifier in the variable job_id.
End of explanation
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
Explanation: Deployment
Training the above model may take upwards of 20 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting the job's create_time from its update_time. For your model, we will need to know the location of the saved model, which the Python script saved in your Cloud Storage bucket at MODEL_DIR + '/saved_model.pb'.
End of explanation
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for _ in range(13):
x_test[_] = scale(x_test[_])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).
You don't need the training data, and hence why we loaded it as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescaling) the data in each column by dividing each value by the maximum value of that column. This will replace each single value with a 32-bit floating point number between 0 and 1.
End of explanation
model.evaluate(x_test, y_test)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
# Get the rescaling values.
with tf.io.gfile.GFile(PARAM_FILE, "r") as f:
rescale = f.read()
# Convert string to floating point list
rescale = rescale.replace("[", "").replace("]", "")
rescale = [float(val) for val in rescale.split(",")]
print(rescale)
Explanation: Get dataset statistics
The training script is designed to return dataset statistics you will need for serving predictions on data items that have not otherwise been preprocessed -- feature normalization, which is also referred to as rescaling. Note that the x_test data was already preprocessed, so we don't need to do additional feature normalization if we use that data from x_test.
Instead, we set aside a copy of one data item that was not feature normalized. We will use this subsequently when doing a prediction.
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
End of explanation
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"boston-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
Explanation: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters:
display_name: A human readable name for the Endpoint service.
image_uri: The container image for the model deployment.
model_uri: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.
The helper function calls the Model client service's method upload_model, which takes the following parameters:
parent: The Vertex location root path for Dataset, Model and Endpoint resources.
model: The specification for the Vertex Model resource instance.
Let's now dive deeper into the Vertex model specification model. This is a dictionary object that consists of the following fields:
display_name: A human readable name for the Model resource.
metadata_schema_uri: Since your model was built without a Vertex Dataset resource, you will leave this blank ('').
artifact_uri: The Cloud Storage path where the model is stored in SavedModel format.
container_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
Uploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.
The helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.
End of explanation
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
Explanation: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
This helper function calls the Vertex Model client service's method get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
End of explanation
ENDPOINT_NAME = "boston_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
DEPLOYED_NAME = "boston_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
dedicated_resources: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a bit confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it only get, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
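As a hypothetical illustration (the deployed-model id below is made up), splitting traffic 90/10 between an existing deployed model and the newly uploaded one would look like:
traffic_split = {"1234567890": 90, "0": 10}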
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
test_item = x_test[0]
test_label = y_test[0]
print(test_item.shape)
Explanation: Make a online prediction request
Now do an online prediction with your deployed model.
Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
def predict_data(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: data.tolist()}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_data(test_item, endpoint_id, None)
Explanation: Send the prediction request
Ok, now you have a test data item. Use this helper function predict_data, which takes the parameters:
data: The test data item as a numpy 1D array of floating point values.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional parameters for serving.
This function uses the prediction client service and calls the predict method with the parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (data items) to predict.
parameters: Additional parameters for serving.
To pass the test data to the prediction service, you package it for transmission to the serving binary as follows:
1. Convert the data item from a 1D numpy array to a 1D Python list.
2. Convert the prediction request to a serialized Google protobuf (`json_format.ParseDict()`)
Each instance in the prediction request is a dictionary entry of the form:
{input_name: content}
input_name: the name of the input layer of the underlying model.
content: The data item as a 1D Python list.
Since the predict() service can take multiple data items (instances), you will send your single data item as a list of one data item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() service.
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:
predictions -- the predicted median value of a house in units of 1K USD.
End of explanation
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
12,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Malaysian MP Statistics
A live notebook of working examples of using Sinar's Popit API and database of Malaysian MPs.
TODO
Detailed information of Persons should probably be appended to post memberships
this would allow us to show post (Seat) information, not just details of person.
Refactor functions here into common library
Issues
Posts should check for role, eg. Member of Parliament. Speaker is a post in Dewan Rakyat,
but is not an MP.
Author
Feel free to do pull requests, or contact me on issues with the data.
Khairil Yusof khairil.yusof@sinarproject.org
Step1: Now we will load up information on the MPs holding these posts
Step2: Women MPs
Step3: Age of MPs
Step4: Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
If you're learning Python to work with data, it's worth getting used to this library, as it provides pretty much all you will need when working with data from importing and cleaning messy data, to exporting it, including working with very large data sets.
A lot of the earlier work, such as cleaning, getting unique values etc. could be done easily with built-in functions of pandas as a DataFrame.
Step5: We could have dropped duplicates from bad data
Step6: Previous example tried to massage python data structures into Pandas DataFrame which works, but isn't very pretty.
Ng Swee Meng sweester@sinarproject.org has contributed proper way of building up data structures for Pandas DataFrames in the following example | Python Code:
import requests
import json
#Dewan Rakyat MP Posts in Sinar Malaysia Popit Database
posts = []
for page in range(1,10):
dewan_rakyat_request = requests.get('http://sinar-malaysia.popit.mysociety.org/api/v0.1/search/posts?q=organization_id:53633b5a19ee29270d8a9ecf'+'&page='+str(page))
for post in (json.loads(dewan_rakyat_request.content)['result']):
posts.append(post)
Explanation: Malaysian MP Statistics
A live notebook of working examples of using Sinar's Popit API and database of Malaysian MPs.
TODO
Detailed information of Persons should probably be appended to post memberships
this would allow us to show post (Seat) information, not just details of person.
Refactor functions here into common library
Issues
Posts should check for role, eg. Member of Parliament. Speaker is a post in Dewan Rakyat,
but is not an MP.
Author
Feel free to do pull requests, or contact me on issues with the data.
Khairil Yusof khairil.yusof@sinarproject.org
End of explanation
import datetime
from dateutil import parser
current = datetime.date(2013,5,5)
def has_end_date(member):
if member.has_key('end_date') and member['end_date'] == '':
return False
elif not member.has_key('end_date'):
return False
else:
return True
def current_MP(member):
#Legislative tag term here would simply
if (parser.parse(member['start_date'])).date() > current:
if not has_end_date(member):
return True
else:
return False
def person(person_id):
#Load up information of persons from Popit database
req = requests.get('https://sinar-malaysia.popit.mysociety.org/api/v0.1/persons/' + person_id)
return json.loads(req.content)['result']
def age(str):
#calculate age based on date strings stored in Popit
born = parser.parse(str)
today = datetime.date.today()
age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
return int(age)
#Current MPs will not have end dates, and have terms after 2013-05-05
MP_ids = []
for post in posts:
for member in post['memberships']:
if current_MP(member):
MP_ids.append(member['person_id'])
#Pull down the data of current MPs from Popit Database add calculate age if there is birthdate
MPs = []
for id in MP_ids:
MPs.append(person(id))
for MP in MPs:
if MP.has_key('birth_date'):
if MP['birth_date']:
#add current age in addition to the values in Popit
MP['age'] = age(MP['birth_date'])
Explanation: Now we will load up information on the MPs holding these posts
End of explanation
WomenMPs = []
for MP in MPs:
if MP.has_key('gender') and MP['gender'] == 'Female':
WomenMPs.append(MP)
print "Number of Women MPs " + str(len(WomenMPs))
for MP in WomenMPs:
print MP['name']
Explanation: Women MPs
End of explanation
import numpy
#list of ages
ages = []
for MP in MPs:
if MP.has_key('age'):
ages.append(int(MP['age']))
print numpy.median(ages)
print numpy.max(ages)
print numpy.min(ages)
Explanation: Age of MPs
End of explanation
import pandas
pandas.DataFrame(MPs)
df = pandas.DataFrame(MPs)
print df['age'].median()
print df['age'].max()
print df['age'].min()
Explanation: Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
If you're learning Python to work with data, it's worth getting used to this library, as it provides pretty much everything you will need, from importing and cleaning messy data to exporting it, including working with very large data sets.
A lot of the earlier work, such as cleaning, getting unique values etc. could be done easily with built-in functions of pandas as a DataFrame.
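For example (illustrative one-liners, assuming the df DataFrame built from the MPs list):
df['gender'].value_counts()
df['name'].nunique()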
End of explanation
MP_source = {'name':df['name'],'birth_date':df['birth_date'],'age':df['age']}
MP_Names = pandas.DataFrame(MP_source)
MP_Names.sort('age')
%matplotlib inline
grouped = MP_Names.groupby('age')
grouped.age.count().plot(kind='bar',figsize=(15,15))
Explanation: We could have dropped duplicates from bad data:
df.drop_duplicates('id')
Parse and set birth_date column as datetime to calculate age without parsing it manually:
df['birth_date']= pandas.to_datetime(df['birth_date'])
Best of all, after cleaning up the data we can easily export it to CSV format, where it is more easily usable by regular users in spreadsheets or for plotting charts.
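A minimal sketch of that export step (the output filename is an assumption):
df.to_csv('malaysian_mps.csv', encoding='utf-8')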
End of explanation
import pandas
data = { "age": [], "birth_date": []}
data_index = { "age": [], "birth_date": []}
for entry in MPs:
if entry.has_key('age'):
data["age"].append(entry["age"])
data_index["age"].append(entry["name"])
data["birth_date"].append(entry["birth_date"])
data_index["birth_date"].append(entry["name"])
final_data = { "age": pandas.Series(data["age"], index=data_index["age"]),
"birth_date": pandas.Series(data["birth_date"], index=data_index["birth_date"])
}
mp_age_df = pandas.DataFrame(final_data)
mp_age_df.sort("age")
mp_age_df["age"].plot(kind="hist",figsize=(15,15))
Explanation: The previous example tried to massage Python data structures into a Pandas DataFrame, which works but isn't very pretty.
Ng Swee Meng sweester@sinarproject.org has contributed a proper way of building up data structures for Pandas DataFrames in the following example:
End of explanation |
12,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TP2 - Object recognition using neural networks and convolutional neural networks
M4108C/M4109C - INFOgr2D
Student 1
Step1: Your response
Step2: Your comment
Step3: On a divisé par deux le nombre d'image.
I-2. Fully-connected NNs on CIFAR-10
1) Design a fully connected NN named 'modelCifar_nn1' including 2 layers of 256 and 512 neurons with the sigmoid activation function. Train this model with 10 epochs and batch_size = 500 (remember to pre-process them before). Test the model and report the following results
Step4: Your comment
Step5: Your observation and comment
Step6: Your observation and comment
Step7: Your observation and comment
Step8: Your observation and comment
Step9: 3) Now describe your pre-processed data for training and validation
Step10: Your observation and comments
Step11: Result, observation and comment | Python Code:
from __future__ import print_function
import numpy as np
np.random.seed(7)
import keras
from keras.datasets import cifar10
# load and split data into training and test sets --> it may take some times with your own laptop
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# describe your data (use print function)
print("train size : ",x_train.shape)
print("test size : ",x_test.shape)
print("train label : ",y_train.shape)
print("test label : ",y_test.shape)
nclass = len(np.unique(y_train))
print("number of classes:",nclass)
Explanation: TP2 - Object recognition using neural networks and convolutional neural networks
M4108C/M4109C - INFOgr2D
Student 1: Antoine Gicquel
<br>
For submission: <font style="color:blue"> TP2_nom1_nom2.ipynb </font>, Due: <font style="color:blue"> 18/03/2018 </font>
Introduction
In this lab, we design and observe the performance of fully connected neural networks (NNs) as well as convolutional neural networks (CNNs) for the object recognition task. All implementations should be in Keras with Tensorflow backend. This lab includes three parts:
In the first part, we perform object recognition using NNs and CNNs on the CIFAR-10 dataset (import from Keras).
In the second part, we work on the image data which are imported from disk.
The last part includes some advanced exercices.
Read and response to each question. Use the print() function to show results in code cells and write your comments/responses using Markdown cells.
IMPORTANT: Every result should be commented!
NOTE: (max 20 pts)
- part I: 10 pts
- part II: 6 pts
- part III: 2 pts
- clarity and presentation: 2 pts
Part I. Object recognition using CIFAR-10 dataset <font color='red'> (10 pts)<font/>
I-1. The CIFAR-10 data
1) Load CIFAR dataset and describe its information (number of training/test images, image size, number of classes, class names, etc.) <font color='red'> (1 pts)<font/>
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
labels = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
for i in range(0,9):
plt.subplot(3, 3, i+1)
plt.imshow(x_train[i], cmap=plt.get_cmap('gray')); plt.axis('off')
print(labels[y_train[i][0]])
Explanation: Your response:
There are 50,000 training images of size 32x32 with 3 color channels, and 10,000 test images.
2) Display some image samples with their class labels using matplotlib.pyplot <font color='red'> (1 pts)<font/>
End of explanation
x_train = x_train[0:25000,:]
y_train = y_train[0:25000]
print("train size : ",x_train.shape)
print("train label : ",y_train.shape)
Explanation: Your comment:
The labels are printed in order (top to bottom) and correspond to the images displayed.
Here are the 9 sample images.
3) (If necessary) Reduce the number of training images (using half of them for example) for quick training and small-GPU computer
End of explanation
# pre-process your data
x_train = x_train.reshape(x_train.shape[0], 32*32*3)
x_test = x_test.reshape(x_test.shape[0], 32*32*3)
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
from keras.utils import np_utils
y_train_cat = np_utils.to_categorical(y_train, nclass)
y_test_cat = np_utils.to_categorical(y_test, nclass)
y_train_cat.shape
print("train size : ",x_train.shape)
print("test size : ",x_test.shape)
Explanation: We halved the number of training images.
I-2. Fully-connected NNs on CIFAR-10
1) Design a fully connected NN named 'modelCifar_nn1' including 2 layers of 256 and 512 neurons with the sigmoid activation function. Train this model with 10 epochs and batch_size = 500 (remember to pre-process them before). Test the model and report the following results:
- number of total parameters (explain how to compute?)
- training and testing time
- test loss and accuracy
- number of iterations to complete one epoch (explain how to compute?)
<font color='red'> (2 pts)<font/>
<br/>
Explanation:<br/>
-> one epoch = one forward pass and one backward pass of all the training examples<br/>
-> batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.<br/>
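A quick sketch of how the two requested quantities can be computed (assuming the 25,000 training images kept earlier and batch_size = 500):
- total parameters of modelCifar_nn1: each Dense layer contributes inputs*units weights + units biases, i.e. (3072*256 + 256) + (256*512 + 512) + (512*10 + 10) = 786,688 + 131,584 + 5,130 = 923,402 parameters
- iterations per epoch = number of training samples / batch_size = 25,000 / 500 = 50 iterations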
End of explanation
# Define the model
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
modelCifar_nn1 = Sequential()
modelCifar_nn1.add(Dense(256, input_shape=(3072,),activation='sigmoid'))
modelCifar_nn1.add(Dense(512, activation='sigmoid'))
modelCifar_nn1.add(Dense(10,activation='softmax')) #Last layer has nclass nodes
modelCifar_nn1.summary()
# compile and train the model
import time
# compile the model
modelCifar_nn1.compile(loss='categorical_crossentropy', optimizer =RMSprop(lr=0.001), metrics=["accuracy"])
# train the model
start_t_mod= time.time()
modelCifar_nn1.fit(x_train, y_train_cat, batch_size=500, epochs = 10)
finish_t_mod = time.time()
time = finish_t_mod - start_t_mod
print("training time :", time)
# evaluate the model
score = modelCifar_nn1.evaluate(x_test, y_test_cat)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Your comment:
Conversion of the integer labels into categorical (one-hot) vectors and rescaling of the pixel values.
End of explanation
# Define the model
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
modelCifar_nn2 = Sequential()
modelCifar_nn2.add(Dense(256, input_shape=(3072,),activation='relu'))
modelCifar_nn2.add(Dense(512, activation='relu'))
modelCifar_nn2.add(Dense(10,activation='softmax')) #Last layer has nclass nodes
modelCifar_nn2.summary()
# compile and train the model
import time
# compile the model
modelCifar_nn2.compile(loss = 'categorical_crossentropy', optimizer = RMSprop(lr=0.001), metrics = ["accuracy"])
# train the model
start_t_mod= time.time()
modelCifar_nn2.fit(x_train, y_train_cat, batch_size = 500, epochs = 10)
finish_t_mod = time.time()
time = finish_t_mod - start_t_mod
print("training time :", time)
# evaluate the model
score = modelCifar_nn2.evaluate(x_test, y_test_cat)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Your observation and comment:
The accuracy is about 43% with the sigmoid model after 10 epochs.
2) Design the NN model named modelCifar_nn2 by replacing the sigmoid activation with the ReLu activation. Train and test this model. Compare to the first one. <font color='red'> (1 pts)<font/>
End of explanation
# reload and pre-process your data
(x2_train, y2_train), (x2_test, y2_test) = cifar10.load_data()
#x2_train = x_train[0:25000,:]
#y2_train = y_train[0:25000]
x2_train = x2_train.astype('float32')
x2_test = x2_test.astype('float32')
x2_train = x2_train / 255.0
x2_test = x2_test / 255.0
# one hot encode outputs
y2_train = np_utils.to_categorical(y2_train)  # encode the freshly reloaded labels, not the earlier subset
y2_test = np_utils.to_categorical(y2_test)
print("train 2 size : ",x2_train.shape)
print("test 2 size : ",x2_test.shape)
print("train 2 label : ",y2_train.shape)
print("test 2 label : ",y2_test.shape)
# Define the model
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers import Flatten
from keras.constraints import maxnorm
modelCifar_cnn1 = Sequential()
modelCifar_cnn1.add(Conv2D(16, (3, 3), input_shape=(32, 32, 3), padding='same', activation='relu', kernel_constraint=maxnorm(y2_test.shape[1])))
modelCifar_cnn1.add(MaxPooling2D(pool_size=(2, 2)))
modelCifar_cnn1.add(Dropout(0.2))
modelCifar_cnn1.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(y2_test.shape[1])))
modelCifar_cnn1.add(MaxPooling2D(pool_size=(2, 2)))
modelCifar_cnn1.add(Flatten())
modelCifar_cnn1.add(Dense(128, activation='relu', kernel_constraint=maxnorm(y2_test.shape[1])))
modelCifar_cnn1.add(Dropout(0.5))
modelCifar_cnn1.add(Dense(10, activation='softmax'))
# compile and train the model
import time
from keras.optimizers import SGD
# compile the model
#modelCifar_cnn1.compile(loss='categorical_crossentropy', optimizer =RMSprop(lr=0.001), metrics=["accuracy"])
#modelCifar_cnn1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
epochs = 10
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
modelCifar_cnn1.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['accuracy'])
#modelCifar_cnn1.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# train the model
start_t_mod= time.time()
modelCifar_cnn1.fit(x2_train, y2_train, validation_data=(x2_test, y2_test), epochs=epochs, batch_size=500)
finish_t_mod = time.time()
train_time = finish_t_mod - start_t_mod  # separate name so the 'time' module is not shadowed
print("training time :", train_time)
# evaluate the model
score = modelCifar_cnn1.evaluate(x2_test, y2_test)  # y2_test is already one-hot encoded above
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Your observation and comment: The accuracy is 20% with the sigmoid model and 10 epochs.
I-2. CNNs on CIFAR-10
1) Now design a CNN named modelCifar_cnn1 consisting of 2 convolutional layers + one fully-connected layer as follows:
- Conv_1: 16 filters of size 3x3, no padding, no stride, activation Relu
- maxPool_1: size 2x2
- Conv_2: 32 filters of size 3x3, no padding, no stride, activation Relu
- maxPool_2: size 2x2
- fc layer (Dense) 128 nodes
- [Do not forget Flatten() and final output dense layer with 'softmax' activation]
Reload and preprocess the data. Train this model with 10 epochs and batch_size = 500. Test the model and report the following results:
- number of total parameters (explain how to compute?)
- training and testing time
- test loss and accuracy
<font color='red'> (2 pts)<font/>
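As a pointer for the parameter count (a sketch only — the exact total depends on the padding actually used, so verify with Keras):
# Conv2D parameters = (kernel_h * kernel_w * input_channels + 1) * n_filters
conv1_params = (3 * 3 * 3 + 1) * 16    # 448
conv2_params = (3 * 3 * 16 + 1) * 32   # 4,640
# Dense parameters = (inputs + 1) * units; the flattened size depends on padding/pooling,
# so the simplest check is modelCifar_cnn1.count_params() or modelCifar_cnn1.summary()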
End of explanation
# Define the model
# modelCifar_cnn2 = Sequential()
Explanation: Your observation and comment:
2) Now modify the modelCifar_cnn1 by changing the filter size of 2 convolutional layers to 5x5. The new model is called modelCifar_cnn2. Train and test the model. Compare to the first CNN. <font color='red'> (1 pts)<font/>
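A minimal sketch of such a model (mirroring modelCifar_cnn1's structure with 5x5 kernels; dropout and kernel constraints can be added back as in cnn1):
modelCifar_cnn2 = Sequential()
modelCifar_cnn2.add(Conv2D(16, (5, 5), input_shape=(32, 32, 3), padding='same', activation='relu'))
modelCifar_cnn2.add(MaxPooling2D(pool_size=(2, 2)))
modelCifar_cnn2.add(Conv2D(32, (5, 5), padding='same', activation='relu'))
modelCifar_cnn2.add(MaxPooling2D(pool_size=(2, 2)))
modelCifar_cnn2.add(Flatten())
modelCifar_cnn2.add(Dense(128, activation='relu'))
modelCifar_cnn2.add(Dense(10, activation='softmax'))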
End of explanation
from keras.preprocessing.image import ImageDataGenerator
batchSize = 100
datagen = ImageDataGenerator(rescale=1./255)
train_datagen = datagen.flow_from_directory(
'dataTP2/train', # this is your target directory which includes the training images
target_size = (50, 50), # all images will be resized to 50x50 pixels for fast computation
batch_size = batchSize,
class_mode = 'categorical')
validation_datagen = datagen.flow_from_directory(
'dataTP2/validation', # this is your target directory which includes the validation images
target_size = (50, 50), # all images will be resized to 50x50 pixels for fast computation
batch_size = batchSize,
class_mode = 'categorical')
Explanation: Your observation and comment:
*3) Compare the two CNNs with the two NNs in section I-1 in terms of accuracy, loss, number of parameters, calculation time, ect. * <font color='red'> (2 pts)<font/>
Fill the following table for comparison:
| Models | Number of parameters | Training time | Accuracy |
| ---------------|:---------------------:|:--------------:|:--------:|
| modelCifar_nn1 | | | |
| modelCifar_nn2 | | | |
| modelCifar_cnn1 | | | |
| modelCifar_cnn2 | | | |
Your observation and comment:
Part II - Cat and Dog classification <font color='red'> (6 pts)<font/>
In this part, we design and train CNNs on our data (import from disk). We will work on a small dataset including only 2 classes (cat and dog). Each one has 1000 images for training and 200 for validation.
You can download the data from:
(https://drive.google.com/open?id=15cQfeAuDY1CRuOduF5LZwWZ4koL6Dti9)
1) Describe the downloaded data: number of training and validation images, number of classes, class names? Do the images have the same size? <font color='red'> (1 pts)<font/>
Your response:
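A quick way to produce these numbers straight from disk (a sketch; it simply lists whatever class sub-folders exist under each split):
import os
for split in ['train', 'validation']:
    split_dir = os.path.join('dataTP2', split)
    for cls in sorted(os.listdir(split_dir)):
        print(split, cls, len(os.listdir(os.path.join(split_dir, cls))))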
2) Show some cat and dog images from the train set. Comment. <font color='red'> (1 pts)<font/>
Now we import the ImageDataGenerator module of Keras. This module can be used to pre-process the images and to perform data augmentation. We use 'flow_from_directory()' to generate batches of image data (and their labels) directly from our images in their respective folders (from disk).
End of explanation
# Define the model
# modelPart2_cnn1 = Sequential()
# train with .fit_generator
# modelPart2_cnn1.fit_generator(...)
# Define the model
# modelPart2_cnn2 = Sequential()
# train with .fit_generator
# modelPart2_cnn2.fit_generator(...)
Explanation: 3) Now describe your pre-processed data for training and validation: number of training and validation images, number of classes, class names? Do the images have the same size? <font color='red'> (1 pts)<font/>
Your response:
4) Redefine, train and validate the 2 CNNs in Part I (namely modelPart2_cnn1, modelPart2_cnn2) on the new data using model.fit_generator instead of model.fit. Observe and compare the results. <font color='red'> (3 pts)<font/>
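A possible training call (a sketch, assuming the generators defined above, 2000 training and 400 validation images, and models built with input_shape=(50, 50, 3) to match target_size):
# modelPart2_cnn1.fit_generator(
#     train_datagen,
#     steps_per_epoch=2000 // batchSize,
#     epochs=10,
#     validation_data=validation_datagen,
#     validation_steps=400 // batchSize)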
End of explanation
# Define new model
# modelCifar_cnn3 = Sequential()
# train and test
Explanation: Your observation and comments:
Part III - Advances <font color='red'> (2 pts)<font/>
In this part, you are free to improve your CNN performance using Data augmentation, Dropout, batch normalization, etc. Define at least 2 more CNNs to improve the classification performance of the CIFAR-10 dataset based on the first CNN (modelCifar_cnn1). That means you are not allowed to add more layers, change the number of filters or filter size, etc. Only the use of Data augmentation, Dropout, batch normalization is allowed. To use these techniques, further reading is required.
For each one, you are required to define the model, train, test and report the results.
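For instance, data augmentation alone can be added through ImageDataGenerator (a sketch, assuming the x2_train/y2_train arrays prepared earlier and a modelCifar_cnn3 built like modelCifar_cnn1):
aug = ImageDataGenerator(rotation_range=15,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         horizontal_flip=True)
# modelCifar_cnn3.fit_generator(aug.flow(x2_train, y2_train, batch_size=500),
#                               steps_per_epoch=len(x2_train) // 500,
#                               epochs=10,
#                               validation_data=(x2_test, y2_test))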
End of explanation
# Define new model
# modelCifar_cnn4 = Sequential()
# train and test
Explanation: Result, observation and comment:
End of explanation |
12,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: An Exploratory Visualization of the Dataset
Number of Samples in Each Category
The categories with minimum/maximum number of samples are marked with yellow/red color correspondingly.
Step3: Random Image from Each Category
Output a sample image from each category. Note, that images will be transformed before they are passed to neural network.
Step4: Step 2
Step7: Prepare Input Images
Step8: Model Architecture
Step9: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Features and Labels
Step10: Training Pipeline
Step11: Model Evaluation
Step12: Train the Model
Step13: Evaluate Trained Model Using Test Samples
Step14: Step 3
Step15: Predict the Sign Type for Each Image
Step16: Analyze Performance
Step17: Top 5 Softmax Probabilities For Each Image Found on the Web | Python Code:
# Load pickled data
import pickle
import pandas as pd
# Data's location
training_file = "traffic-sign-data/train.p"
validation_file = "traffic-sign-data/valid.p"
testing_file = "traffic-sign-data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
# features and labels
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# Sign id<->name mapping
sign_names = pd.read_csv('signnames.csv').to_dict(orient='index')
sign_names = { key : val['SignName'] for key, val in sign_names.items() }
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
Author: Sergey Morozov
In this notebook, a traffic sign classifier is implemented. German Traffic Sign Dataset is used to train the model. There is a write-up where different stages of the implementation are described including analysis of the pros and cons of the chosen approaches and suggestions for further improvements.
Step 0: Load The Data
End of explanation
import numpy as np
# Number of training examples
n_train = len(X_train)
# Number of testing examples.
n_test = len(X_test)
# Number of validation examples.
n_valid = len(X_valid)
# What's the shape of an traffic sign image?
image_shape = X_train.shape[1:]
# How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Number of validation examples =", n_valid)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES.
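These fields can be inspected directly from the loaded dictionaries (a quick sketch):
print(train.keys())
print(train['features'].shape, train['labels'].shape)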
A Basic Summary of the Dataset
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
plt.rcdefaults()
fig, ax = plt.subplots()
samples_per_category = [len(np.where(y_train==cat_id)[0]) for cat_id in sign_names.keys()]
category_names = tuple([val + " [ id:{id} ]".format(id=key) for key,val in sign_names.items()])
min_cnt = min(samples_per_category)
max_cnt = max(samples_per_category)
y_pos = np.arange(len(category_names))
rects = ax.barh(y_pos,
samples_per_category,
align='center',
color=['green' if val != min_cnt and val != max_cnt \
else 'yellow' if val == min_cnt \
else 'red' for val in samples_per_category])
# setting labels for each bar
for i in range(0,len(rects)):
ax.text(int(rects[i].get_width()),
int(rects[i].get_y()+rects[i].get_height()/2.0),
samples_per_category[i],
fontproperties=fm.FontProperties(size=5))
ax.set_yticks(y_pos)
ax.set_yticklabels(category_names,fontproperties=fm.FontProperties(size=5))
ax.invert_yaxis()
ax.set_title('Samples per Category')
plt.show()
Explanation: An Exploratory Visualization of the Dataset
Number of Samples in Each Category
The categories with minimum/maximum number of samples are marked with yellow/red color correspondingly.
End of explanation
import random
import numpy as np
import matplotlib.pyplot as plt
import math
# Visualizations will be shown in the notebook.
%matplotlib inline
h_or_w = image_shape[0]
fig = plt.figure(figsize=(h_or_w,h_or_w))
for i in range(0, n_classes):
samples = np.where(y_train==i)[0]
index = random.randint(0, len(samples) - 1)
image = X_train[samples[index]]
ax = fig.add_subplot(math.ceil(n_classes/5), 5, i+1)
ax.set_title(sign_names[i])
ax.set_ylabel("id: {id}".format(id=i))
plt.imshow(image)
plt.show()
Explanation: Random Image from Each Category
Output a sample image from each category. Note, that images will be transformed before they are passed to neural network.
End of explanation
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. The LeNet-5 CNN architecture is used here with minor modifications: dropout parameter added to the first fully connected layer.
Pre-process the Data Set (normalization, grayscale, etc.)
Shuffle Data
End of explanation
import cv2
def prepare_image(image_set):
    """Transform initial set of images so that they are ready to be fed to neural network.
    (1) normalize image
    (2) convert RGB image to gray scale
    """
# initialize empty image set for prepared images
new_shape = image_shape[0:2] + (1,)
prep_image_set = np.empty(shape=(len(image_set),) + new_shape, dtype=int)
for ind in range(0, len(image_set)):
# normalize
norm_img = cv2.normalize(image_set[ind], np.zeros(image_shape[0:2]), 0, 255, cv2.NORM_MINMAX)
# grayscale
gray_img = cv2.cvtColor(norm_img, cv2.COLOR_RGB2GRAY)
# set new image to the corresponding position
prep_image_set[ind] = np.reshape(gray_img, new_shape)
return prep_image_set
def equalize_number_of_samples(image_set, image_labels):
    """Make number of samples in each category equal.
    The data set has different number of samples for each category.
    This function will transform the data set in a way that each category
    will contain the number of samples equal to maximum samples per category
    from the initial set. This will provide an equal probability to meet
    traffic sign of each category during the training process.
    """
num = max([len(np.where(image_labels==cat_id)[0]) for cat_id in sign_names.keys()])
equalized_image_set = np.empty(shape=(num * n_classes,) + image_set.shape[1:], dtype=int)
equalized_image_labels = np.empty(shape=(num * n_classes,), dtype=int)
j = 0
for cat_id in sign_names.keys():
cat_inds = np.where(y_train==cat_id)[0]
cat_inds_len = len(cat_inds)
for i in range(0, num):
equalized_image_set[j] = image_set[cat_inds[i % cat_inds_len]]
equalized_image_labels[j] = image_labels[cat_inds[i % cat_inds_len]]
j += 1
# at this stage data is definitely not randomly shuffled, so shuffle it
return shuffle(equalized_image_set, equalized_image_labels)
X_train_prep = prepare_image(X_train)
X_test_prep = prepare_image(X_test)
X_valid_prep = prepare_image(X_valid)
X_train_prep, y_train_prep = equalize_number_of_samples(X_train_prep, y_train)
# we do not need to transform labes for validation and test sets
y_test_prep = y_test
y_valid_prep = y_valid
image_shape_prep = X_train_prep[0].shape
Explanation: Prepare Input Images
End of explanation
# LeNet-5 architecture is used.
import tensorflow as tf
from tensorflow.contrib.layers import flatten
def LeNet(x, channels, classes, keep_prob, mu=0, sigma=0.01):
# Arguments used for tf.truncated_normal, randomly defines variables
# for the weights and biases for each layer
# Layer 1: Convolutional. Input = 32x32xchannels. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, channels, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Layer 1: Activation.
conv1 = tf.nn.relu(conv1)
# Layer 1: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Layer 2: Activation.
conv2 = tf.nn.relu(conv2)
# Layer 2: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
fc0 = tf.nn.dropout(fc0, keep_prob=keep_prob)
# Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# Layer 3: Activation.
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Layer 4: Activation.
fc2 = tf.nn.relu(fc2)
# Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: Model Architecture
End of explanation
# x is a placeholder for a batch of input images
x = tf.placeholder(tf.float32, (None,) + image_shape_prep)
# y is a placeholder for a batch of output labels
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
Explanation: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Features and Labels
End of explanation
# hyperparameters of the training process
RATE = 0.0008
EPOCHS = 30
BATCH_SIZE = 128
KEEP_PROB = 0.7
STDDEV = 0.01
keep_prob = tf.placeholder(tf.float32)
logits = LeNet(x, image_shape_prep[-1], n_classes, keep_prob, sigma=STDDEV)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = RATE)
training_operation = optimizer.minimize(loss_operation)
Explanation: Training Pipeline
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Model Evaluation
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train_prep)
print("Training...")
print()
for i in range(EPOCHS):
X_train_prep, y_train_prep = shuffle(X_train_prep, y_train_prep)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_prep[offset:end], y_train_prep[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: KEEP_PROB})
train_accuracy = evaluate(X_train_prep, y_train_prep)
validation_accuracy = evaluate(X_valid_prep, y_valid_prep)
print("EPOCH {} ...".format(i+1))
print("Train Accuracy = {:.3f}".format(train_accuracy))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './model.ckpt')
print("Model saved")
Explanation: Train the Model
End of explanation
with tf.Session() as sess:
saver.restore(sess, './model.ckpt')
test_accuracy = evaluate(X_test_prep, y_test_prep)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate Trained Model Using Test Samples
End of explanation
import os
import cv2
import matplotlib.image as mpimg
img_paths = os.listdir("traffic-sign-images")
images = list()
labels = list()
# read images and resize
for img_path in img_paths:
# read image from file
img = mpimg.imread(os.path.join("traffic-sign-images", img_path))
img = cv2.resize(img, image_shape[0:2], interpolation=cv2.INTER_CUBIC)
images.append(img)
# prefix of each image name is a number of its category
labels.append(int(img_path[0:img_path.find('-')]))
images = np.array(images)
labels = np.array(labels)
# output the resized images
h_or_w = image_shape[0]
fig = plt.figure(figsize=(h_or_w,h_or_w))
for i in range(0, len(images)):
ax = fig.add_subplot(1, len(images), i+1)
ax.set_title(sign_names[labels[i]])
ax.set_ylabel("id: {id}".format(id=labels[i]))
plt.imshow(images[i])
plt.show()
Explanation: Step 3: Test a Model on New Images
It is time to apply the trained model to the German trafic sign images that were obtained from the Internet.
Load and Output the Images
End of explanation
# preprocess images first
images_prep = prepare_image(images)
labels_prep = labels
# then make a prediction
with tf.Session() as sess:
saver.restore(sess, './model.ckpt')
sign_ids = sess.run(tf.argmax(logits, 1), feed_dict={x: images_prep, y: labels_prep, keep_prob: 1})
# output the results in the table
print('-' * 93)
print("| {p:^43} | {a:^43} |".format(p='PREDICTED', a='ACTUAL'))
print('-' * 93)
for i in range(len(sign_ids)):
print('| {p:^2} {strp:^40} | {a:^2} {stra:^40} |'.format(
p=sign_ids[i], strp=sign_names[sign_ids[i]], a=labels[i], stra=sign_names[labels[i]]))
print('-' * 93)
Explanation: Predict the Sign Type for Each Image
End of explanation
# run evaluation on the new images
with tf.Session() as sess:
saver.restore(sess, './model.ckpt')
test_accuracy = evaluate(images_prep, labels_prep)
print("Accuracy = {:.3f}".format(test_accuracy))
Explanation: Analyze Performance
End of explanation
# Print out the top five softmax probabilities for the predictions on
# the German traffic sign images found on the web.
with tf.Session() as sess:
saver.restore(sess, './model.ckpt')
top_k = sess.run(tf.nn.top_k(tf.nn.softmax(logits), k=5),
feed_dict={x: images_prep, y: labels_prep, keep_prob: 1})
print(top_k)
plt.rcdefaults()
# show histogram of top 5 softmax probabilities for each image
h_or_w = image_shape[0]
fig = plt.figure()
for i in range(0, len(images)):
ax = fig.add_subplot(len(images), 1, i+1)
probabilities = top_k.values[i]
y_pos = np.arange(len(probabilities))
ax.set_ylabel("actual id: {id}".format(id=labels[i]), fontproperties=fm.FontProperties(size=5))
rects = ax.barh(y_pos,
probabilities,
align='center',
color='blue')
# setting labels for each bar
for j in range(0,len(rects)):
ax.text(int(rects[j].get_width()),
int(rects[j].get_y()+rects[j].get_height()/2.0),
probabilities[j],
fontproperties=fm.FontProperties(size=5), color='red')
ax.set_yticks(y_pos)
ax.set_yticklabels(top_k.indices[i], fontproperties=fm.FontProperties(size=5))
xticks = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
ax.set_xticks(xticks)
ax.set_xticklabels(xticks, fontproperties=fm.FontProperties(size=5))
ax.invert_yaxis()
plt.tight_layout()
plt.show()
Explanation: Top 5 Softmax Probabilities For Each Image Found on the Web
End of explanation |
12,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting
In this notebook, I'll develop a function to plot subjects and their labels.
Step1: Displaying radio images
Radio images look pretty terrible, so let's run a filter over them to make them a little easier to see. I'll use skimage and try a few different ones.
Let's get an example and look at the basic output.
Step2: It's hard to make out any features. Now, let's run some filters on it.
Step3: Square root looks good, so let's blitz that over some random images and see how it looks.
Step4: Plotting IR objects
This is an extremely unpleasant operation
Step5: Displaying classifications
The simplest thing we can do is to just highlight the host galaxies, so let's load up the Norris et al. classifications and have a look.
Step6: What about displaying classifications from my classifier?
Step7: Plotting a committee
If we have multiple classifiers, how should we output their predictions?
Step8: These classifiers have really low diversity because of the way I divided up the data, but this should work fine.
Step9: Bringing it all together
We want to plot classifications, RGZ labels, and Norris labels in the same row. | Python Code:
from astropy.coordinates import SkyCoord
import astropy.io.fits
import astropy.wcs
import h5py
import matplotlib.pyplot as plt
from matplotlib.pyplot import cm
import numpy
import skimage.exposure
import sklearn.neighbors
import sklearn.pipeline
import sklearn.preprocessing
CROWDASTRO_H5_PATH = 'data/crowdastro.h5'
PATCH_DIAMETER = 200
FITS_CONVENTION = 1
ARCMIN = 1 / 60
IMAGE_SIZE = 200 * 200
NORRIS_DAT_PATH = 'data/norris_2006_atlas_classifications_ra_dec_only.dat'
TRAINING_H5_PATH = 'data/training.h5'
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
N_ASTRO = 5 if f_h5.attrs['ir_survey'] == 'wise' else 6
%matplotlib inline
Explanation: Plotting
In this notebook, I'll develop a function to plot subjects and their labels.
End of explanation
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
image = f_h5['/atlas/cdfs/numeric'][250, 2 : 2 + PATCH_DIAMETER ** 2].reshape((PATCH_DIAMETER, PATCH_DIAMETER))
plt.imshow(image, cmap='gray')
plt.show()
Explanation: Displaying radio images
Radio images look pretty terrible, so let's run a filter over them to make them a little easier to see. I'll use skimage and try a few different ones.
Let's get an example and look at the basic output.
End of explanation
fig = plt.figure(figsize=(18, 27))
def subplot_imshow_hist(i, fig, im, title):
ax = fig.add_subplot(6, 3, i)
ax.imshow(im, cmap='gray')
ax.set_title(title)
ax.axis('off')
ax = fig.add_subplot(6, 3, i + 3)
ax.hist(im.ravel(), bins=256, histtype='step', color='black')
ax.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
subplot_imshow_hist(1, fig, image, 'Default')
subplot_imshow_hist(2, fig, skimage.exposure.equalize_adapthist(image, clip_limit=0.01), 'Adaptive equalisation')
subplot_imshow_hist(3, fig, skimage.exposure.equalize_hist(image), 'Histogram equalisation')
subplot_imshow_hist(7, fig, skimage.exposure.rescale_intensity(image, in_range=tuple(numpy.percentile(image, (0.75, 99.25)))),
'Constant stretching 0.75 - 99.25')
subplot_imshow_hist(8, fig, skimage.exposure.rescale_intensity(image, in_range=tuple(numpy.percentile(image, (1, 99)))),
'Constant stretching 1 - 99')
subplot_imshow_hist(9, fig, skimage.exposure.rescale_intensity(image, in_range=tuple(numpy.percentile(image, (2, 98)))),
'Constant stretching 2 - 98')
subplot_imshow_hist(13, fig, numpy.sqrt(image - image.min()), 'Square root')
subplot_imshow_hist(14, fig, numpy.log(image - image.min() + 1e-5), 'Logarithm + 1e-5')
subplot_imshow_hist(15, fig, numpy.log(image + 1), 'Logarithm + 1')
Explanation: It's hard to make out any features. Now, let's run some filters on it.
End of explanation
fig = plt.figure(figsize=(18, 25))
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
indices = numpy.arange(f_h5['/atlas/cdfs/numeric'].shape[0])
numpy.random.seed(10000)
numpy.random.shuffle(indices)
for j, i in enumerate(indices[:3]):
image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape((PATCH_DIAMETER, PATCH_DIAMETER))
subplot_imshow_hist(j + 1, fig, numpy.sqrt(image - image.min()), str(i))
Explanation: Square root looks good, so let's blitz that over some random images and see how it looks.
End of explanation
from crowdastro.config import config
with astropy.io.fits.open(config['data_sources']['atlas_image'],
ignore_blank=True) as atlas_image:
wcs = astropy.wcs.WCS(atlas_image[0].header).dropaxis(3).dropaxis(2)
def ra_dec_to_pixels(subject_coords, coords):
offset, = wcs.all_world2pix([subject_coords], FITS_CONVENTION)
# The coords are of the middle of the subject.
coords = wcs.all_world2pix(coords, FITS_CONVENTION)
coords -= offset
coords[:, 0] /= config['surveys']['atlas']['mosaic_scale_x'] * 424 / 200
coords[:, 1] /= config['surveys']['atlas']['mosaic_scale_y'] * 424 / 200
coords += [40, 40]
return coords
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
i = 296
image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape(
(PATCH_DIAMETER, PATCH_DIAMETER))[60:140, 60:140]
radio_coords = f_h5['/atlas/cdfs/numeric'][i, :2]
nearby = f_h5['/atlas/cdfs/numeric'][i, 2 + PATCH_DIAMETER ** 2:] < ARCMIN
ir_coords = f_h5['/swire/cdfs/numeric'][nearby, :2]
ir_coords = ra_dec_to_pixels(radio_coords, ir_coords)
plt.imshow(numpy.sqrt(image - image.min()), cmap='gray')
plt.scatter(ir_coords[:, 0], ir_coords[:, 1])
Explanation: Plotting IR objects
This is an extremely unpleasant operation: We have to find the pixel coordinates of each IR location, which are all specified in RA/DEC.
End of explanation
# Load labels.
with h5py.File(TRAINING_H5_PATH, 'r') as training_h5:
crowdsourced_labels = training_h5['labels'].value
with h5py.File(CROWDASTRO_H5_PATH, 'r') as crowdastro_h5:
ir_names = crowdastro_h5['/swire/cdfs/string'].value
ir_positions = crowdastro_h5['/swire/cdfs/numeric'].value[:, :2]
ir_tree = sklearn.neighbors.KDTree(ir_positions)
with open(NORRIS_DAT_PATH, 'r') as norris_dat:
norris_coords = [r.strip().split('|') for r in norris_dat]
norris_labels = numpy.zeros((len(ir_positions)))
for ra, dec in norris_coords:
# Find a neighbour.
skycoord = SkyCoord(ra=ra, dec=dec, unit=('hourangle', 'deg'))
ra = skycoord.ra.degree
dec = skycoord.dec.degree
((dist,),), ((ir,),) = ir_tree.query([(ra, dec)])
if dist < 0.1:
norris_labels[ir] = 1
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
i = 250
image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape(
(PATCH_DIAMETER, PATCH_DIAMETER))[60:140, 60:140]
radio_coords = f_h5['/atlas/cdfs/numeric'][i, :2]
nearby = f_h5['/atlas/cdfs/numeric'][i, 2 + PATCH_DIAMETER ** 2:] < ARCMIN
ir_coords = f_h5['/swire/cdfs/numeric'][nearby, :2]
ir_coords = ra_dec_to_pixels(radio_coords, ir_coords)
plt.imshow(numpy.sqrt(image - image.min()), cmap='gray')
plt.scatter(ir_coords[:, 0], ir_coords[:, 1])
labels = norris_labels[nearby].astype(bool)
nearby_hosts = ir_coords[labels]
plt.scatter(nearby_hosts[:, 0], nearby_hosts[:, 1], c='red')
Explanation: Displaying classifications
The simplest thing we can do is to just highlight the host galaxies, so let's load up the Norris et al. classifications and have a look.
End of explanation
from crowdastro.classifier import RGZClassifier
from sklearn.ensemble import RandomForestClassifier
with h5py.File(TRAINING_H5_PATH, 'r') as f_h5:
classifier = RGZClassifier(f_h5['features'].value, N_ASTRO)
classifier.train(numpy.arange(f_h5['features'].shape[0]), norris_labels)
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
i = 250
image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape(
(PATCH_DIAMETER, PATCH_DIAMETER))[60:140, 60:140]
radio_coords = f_h5['/atlas/cdfs/numeric'][i, :2]
nearby = f_h5['/atlas/cdfs/numeric'][i, 2 + PATCH_DIAMETER ** 2:] < ARCMIN
ir_coords = f_h5['/swire/cdfs/numeric'][nearby, :2]
ir_coords = ra_dec_to_pixels(radio_coords, ir_coords)
vec = f_h5['/atlas/cdfs/numeric'][i, :]
probs = classifier.predict_probabilities(vec)[nearby]
nearby_norris = ir_coords[norris_labels[nearby].astype('bool')]
nearby_rgz = ir_coords[crowdsourced_labels[nearby].astype('bool')]
plt.figure(figsize=(15, 15))
base_size = 200
plt.imshow(numpy.sqrt(image - image.min()), cmap='gray')
plt.scatter(ir_coords[:, 0], ir_coords[:, 1], s=probs * base_size, c=probs, marker='o', cmap='cool')
plt.scatter(nearby_norris[:, 0], nearby_norris[:, 1], s=base_size, c='green', marker='*')
plt.axis('off')
# plt.scatter(nearby_rgz[:, 0], nearby_rgz[:, 1], s=50, c='cyan', marker='x', alpha=0.5)
plt.xlim((0, 80))
plt.ylim((0, 80))
Explanation: What about displaying classifications from my classifier?
End of explanation
with h5py.File(TRAINING_H5_PATH, 'r') as f_h5:
classifiers = [RGZClassifier(f_h5['features'], N_ASTRO) for _ in range(10)]
for classifier in classifiers:
subset = numpy.arange(f_h5['features'].shape[0])
numpy.random.shuffle(subset)
subset = subset[:len(subset) // 50]
subset = sorted(subset)
classifier.train(list(subset), norris_labels[subset])
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
i = 250
image = f_h5['/atlas/cdfs/numeric'][i, 2 : 2 + PATCH_DIAMETER ** 2].reshape(
(PATCH_DIAMETER, PATCH_DIAMETER))[60:140, 60:140]
radio_coords = f_h5['/atlas/cdfs/numeric'][i, :2]
nearby = f_h5['/atlas/cdfs/numeric'][i, 2 + PATCH_DIAMETER ** 2:] < ARCMIN
ir_coords = f_h5['/swire/cdfs/numeric'][nearby, :2]
ir_coords = ra_dec_to_pixels(radio_coords, ir_coords)
vec = f_h5['/atlas/cdfs/numeric'][i, :]
probs = [classifier.predict_probabilities(vec)[nearby] for classifier in classifiers]
# Set all but the top n predictions to zero.
n = 1
for probs_ in probs:
top_n = sorted(probs_, reverse=True)[:n]
for j, prob in enumerate(probs_):
if prob not in top_n:
probs_[j] = 0
plt.figure(figsize=(10, 10))
base_size = 200
plt.imshow(numpy.sqrt(image - image.min()), cmap='gray')
colours = cm.rainbow(numpy.linspace(0, 1, 10))
for colour, probs_ in zip(colours, probs):
plt.scatter(ir_coords[:, 0] + numpy.random.normal(size=ir_coords.shape[0], scale=0.5),
ir_coords[:, 1] + numpy.random.normal(size=ir_coords.shape[0], scale=0.5),
s=probs_ * base_size, marker='x', c=colour, alpha=1)
plt.axis('off')
plt.xlim((0, 80))
plt.ylim((0, 80))
Explanation: Plotting a committee
If we have multiple classifiers, how should we output their predictions?
End of explanation
def plot_points_on_background(points, background, noise=False, base_size=200):
plt.imshow(background, cmap='gray')
colours = cm.rainbow(numpy.linspace(0, 1, len(points)))
for colour, (x, y) in zip(colours, points):
if noise:
x += numpy.random.normal(scale=0.5)
y += numpy.random.normal(scale=0.5)
plt.scatter(x, y, marker='o', c=colour, s=base_size)
plt.axis('off')
plt.xlim((0, background.shape[0]))
plt.ylim((0, background.shape[1]))
def plot_classifications(atlas_vector, ir_matrix, labels, base_size=200):
image = atlas_vector[2 : 2 + PATCH_DIAMETER ** 2].reshape((PATCH_DIAMETER, PATCH_DIAMETER)
)[60:140, 60:140]
radio_coords = atlas_vector[:2]
nearby = atlas_vector[2 + PATCH_DIAMETER ** 2:] < ARCMIN
labels = labels[nearby]
ir_coords = ir_matrix[nearby, :2][labels.astype(bool)]
ir_coords = ra_dec_to_pixels(radio_coords, ir_coords)
plot_points_on_background(ir_coords, image, base_size=base_size)
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
i = 250
atlas_vector = f_h5['/atlas/cdfs/numeric'][i, :]
ir_coords = f_h5['/swire/cdfs/numeric']
plot_classifications(atlas_vector, ir_coords, norris_labels)
Explanation: These classifiers have really low diversity because of the way I divided up the data, but this should work fine.
End of explanation
def plot_classifications_row(atlas_vector, ir_matrix, classifier_labels, rgz_labels, norris_labels, base_size=200):
plt.subplot(1, 3, 1)
plt.title('Classifier')
plot_classifications(atlas_vector, ir_matrix, classifier_labels, base_size=base_size)
plt.subplot(1, 3, 2)
plt.title('RGZ')
plot_classifications(atlas_vector, ir_matrix, rgz_labels, base_size=base_size)
plt.subplot(1, 3, 3)
plt.title('Norris')
plot_classifications(atlas_vector, ir_matrix, norris_labels, base_size=base_size)
with h5py.File(TRAINING_H5_PATH, 'r') as f_h5:
classifier = RGZClassifier(f_h5['features'].value, N_ASTRO)
classifier.train(numpy.arange(f_h5['features'].shape[0]), norris_labels)
with h5py.File(CROWDASTRO_H5_PATH, 'r') as f_h5:
i = 250
vec = f_h5['/atlas/cdfs/numeric'][i, :]
mat = f_h5['/swire/cdfs/numeric']
probs = classifier.predict_probabilities(vec)
labels = numpy.zeros(probs.shape)
labels[probs.argmax()] = 1
plt.figure(figsize=(20, 10))
plot_classifications_row(vec, mat, labels, crowdsourced_labels, norris_labels, base_size=200)
Explanation: Bringing it all together
We want to plot classifications, RGZ labels, and Norris labels in the same row.
End of explanation |
12,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: DWD
Source ID: MPI-ESM-1-2-HR
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
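BOOLEAN properties are set with a Python boolean (True or False) rather than a quoted string; the commented example below is illustrative only:
# EXAMPLE (illustrative only - confirm against the actual model top):
# DOC.set_value(False)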
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
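For a single-valued ENUM (cardinality 1.1) the value must be one of the listed choices, quoted exactly as shown; the commented example below picks one choice purely for illustration:
# EXAMPLE (illustrative choice - confirm against the actual orography treatment):
# DOC.set_value("present day")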
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
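Optional properties (cardinality 0.1) may be left unset entirely; if a value is recorded it must still be one of the listed choices, as in the commented, illustrative-only call below:
# EXAMPLE (optional property - omit if not applicable; choice shown is illustrative):
# DOC.set_value("pole rotation")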
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
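FLOAT properties take an unquoted numeric value in the stated units (Hz here); the commented example below uses 94 GHz, a frequency commonly used for CloudSat-type radar simulators, purely as an illustrative assumption rather than the documented setting:
# EXAMPLE (illustrative assumption - replace with the simulator's actual frequency in Hz):
# DOC.set_value(94.0e9)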
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
12,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Studio
Step2: Solution with only Number
Step3: Solution provided in class | Python Code:
days_of_week = [
# 0 1 2
'Sunday', 'Monday', 'Tuesday',
# 3 4 5
'Wednesday', 'Thursday', 'Friday',
# 6
'Saturday',
]
# Gather user input
# Need to use the `int` call so that it'll correctly be
# an integer for mathematical operations
leaving_day = int(input('What day are you leaving? '))
length_of_vacation = int(input('How long will you be gone? '))
# Get the day we will hit, it may be larger than 7
final_day = length_of_vacation + leaving_day
# Get the index for the day of the week
# By using the modulo operator we can take advantage of
# knowing the remainder of any numbers.
# For example if the day is 2 and our stay is 5
# Then we will have 7, which means we leave on a Tuesday
# and get back on a Sunday.
final_day_index = final_day % len(days_of_week)
print(days_of_week[final_day_index], final_day_index)
Explanation: Studio: Holiday
It is possible to name the days 0 through 6, where day 0 is Sunday and day 6 is Saturday. If you go on a wonderful holiday leaving on day number 3 (a Wednesday) and you return home after 10 nights, you arrive home on day number 6 (a Saturday).
Write a general version of the program which asks for the starting day number and the length of your stay, and tells you the number of the day of the week you will return on.
What to think about
How many days are in a week?
What happens when we go past the last day in the week?
Why start at 0 instead of 1?
What mathematical operations could be useful?
Solution with day
End of explanation
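# Quick illustrative check of the worked example in the prompt (not part of
# either solution): leaving on day 3 (Wednesday) and staying 10 nights wraps
# around the 7-day week to day 6 (Saturday).
print((3 + 10) % 7)                    # 6
print(days_of_week[(3 + 10) % 7])      # 'Saturday'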
# Studio: Holiday - alternative solution using only the day number
# get input: departure day (0-6) and duration
departure_day = input("Which day are you leaving on? (0=Sun, 1=Mon, etc)")
departure_day = int(departure_day)
duration = input("How many days will you be gone?")
duration = int(duration)
# calculate return day and respond
return_day = (departure_day + duration) % 7
print("You will return on day", return_day)
Explanation: Solution with only Number
End of explanation
day = int(input("What day will you leave?"))
travelDays = int(input("How many days will you be gone?"))
newDay = day + travelDays
newDay = newDay % 7
print("You will return on", newDay)
Explanation: Solution provided in class
End of explanation |
12,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
Compute the derivatives for the Lorenz system at yvec(t).
# YOUR CODE HERE
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y - x)
dy = x*(rho - z) - y
dz = x*y - beta*z
return np.array([dx, dy, dz])
print(lorentz_derivs(np.array([0.0, 1.0, 0.0]), 1, 1, 1, 1))
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
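# Optional sanity check (not part of the exercise): at the non-trivial fixed
# point x = y = sqrt(beta*(rho - 1)), z = rho - 1 of the Lorenz equations, all
# three derivatives should vanish.
sigma_c, rho_c, beta_c = 10.0, 28.0, 8.0/3.0
x_fp = np.sqrt(beta_c*(rho_c - 1))
print(lorentz_derivs(np.array([x_fp, x_fp, rho_c - 1]), 0, sigma_c, rho_c, beta_c))  # ~[0, 0, 0]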
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
# YOUR CODE HERE
t = np.linspace(0, max_time, int(250*max_time))  # 250 points per time unit, as the docstring specifies
soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta), atol=1e-9, rtol=1e-8)
return np.array(soln), np.array(t)
print(solve_lorentz(np.array([0.0, 1.0, 0.0]), 2, 1, 1, 1))
assert True # leave this to grade solve_lorenz
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
# YOUR CODE HERE
plt.figure(figsize = (15,8))
np.random.seed(1)
k= []
for i in range(N):
data = (np.random.random(3)-0.5)*30
k.append(solve_lorentz(data, max_time, sigma, rho, beta))
for j in k:
x = [p[0] for p in j[0]]
z = [p[2] for p in j[0]]
color = plt.cm.hot((x[0] + z[0])/60+0.5)
plt.scatter(x, z, color = color)
plt.xlabel('$x(t)$')
plt.ylabel('$z(t)$')
plt.title('Lorentz System')
# print(plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0))
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
# YOUR CODE HERE
interact(plot_lorentz, max_time = [1,10], N = [1,50], sigma=[0.0,50.0], rho=[0.0,50.0], beta=fixed(8/3));
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation |
12,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step3: Clone and build tensorflow_cloud
To use the latest version of the tensorflow_cloud, we will clone and build the repo. The resulting whl file is both used in the client side as well as in construction of a docker image for remote execution.
Step4: Restart the Kernel
We will automatically restart your kernel so the notebook has access to the packages you installed.
Step5: Import libraries and define constants
Step6: Create a docker file with tensorflow_cloud
In the next step we create a base docker file with the latest wheel file to use for remote training. You may use any base image. However, DLVM base images come pre-installed with most needed packages.
Step8: Tutorial 1 - Functional model
In this sample we will demonstrate using numpy.array as input data by creating a basic model and submitting it for remote training.
Define model building function
Step9: Prepare Data
Step10: Run the model locally for validation
Step11: Submit model and dataset for remote training
Step12: Retrieve the trained model
Once the training is complete you can access the trained model at remote_folder/output
Step13: Tutorial 2 - Sequential Models and Datasets
In this sample we will demonstrate using datasets by creating a basic model and submitting it for remote training.
Define model building function
Step14: Prepare Data
Step15: Run the model locally for validation
Step16: Submit model and dataset for remote training
Step17: Retrieve the trained model
Once the training is complete you can access the trained model at remote_folder/output | Python Code:
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth application-default login --quiet
! gcloud auth login --quiet
Explanation: <table align="left">
<td>
<a href="https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps://github.com/tensorflow/cloud/blob/master/examples/cloud_fit.ipynb">
<img src="https://www.gstatic.com/images/branding/product/1x/google_cloud_48dp.png" alt="AI Platform Notebooks"> Run in AI Platform Notebooks
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/tensorflow/cloud/blob/master/examples/cloud_fit.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/tensorflow/cloud/blob/master/examples/cloud_fit.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">View on GitHub
</a>
</td>
</table>
Overview
Following is a quick introduction to cloud_fit. cloud_fit enables training on Google Cloud AI Platform in the same manner as model.fit().
In this notebook, we will start by installing libraries required, then proceed with two samples showing how to use numpy.array and tf.data.dataset with cloud_fit
What are the components of cloud_fit()?
cloud_fit has two main components as follows:
client.py: serializes the provided data and model along with typical model.fit() parameters and triggers an AI Platform training job
``` python
def cloud_fit(model,
remote_dir: Text,
region: Text = None,
project_id: Text = None,
image_uri: Text = None,
distribution_strategy: Text = DEFAULT_DISTRIBUTION_STRATEGY,
job_spec: Dict[str, Any] = None,
job_id: Text = None,
**fit_kwargs) -> Text:
Facilitates remote execution of in memory Models and Datasets on AI Platform.
Args:
model: A compiled Keras Model.
remote_dir: Google Cloud Storage path for temporary assets and AI Platform
training output. Will overwrite value in job_spec.
region: Target region for running the AI Platform Training job.
project_id: Project id where the training should be deployed to.
image_uri: base image to use for AI Platform Training
distribution_strategy: Specifies the distribution strategy for remote
execution when a jobspec is provided. Accepted values are strategy names
as specified by 'tf.distribute.<strategy>.name'.
job_spec: AI Platform training job_spec, will take precedence over all other
provided values except for remote_dir. If none is provided a default
cluster spec and distribution strategy will be used.
job_id: A name to use for the AI Platform Training job (mixed-case letters,
numbers, and underscores only, starting with a letter).
**fit_kwargs: Args to pass to model.fit() including training and eval data.
Only keyword arguments are supported. Callback functions will be
serialized as is.
Returns:
AI Platform job ID
Raises:
RuntimeError: If executing in graph mode, eager execution is required for
cloud_fit.
NotImplementedError: Tensorflow v1.x is not supported.
```
remote.py: A job that takes in a remote_dir as a parameter, loads the model and data from this location, and executes the training with the stored parameters.
```python
def run(remote_dir: Text, distribution_strategy_text: Text):
deserializes Model and Dataset and runs them.
Args:
remote_dir: Temporary cloud storage folder that contains model and Dataset
graph. This folder is also used for job output.
distribution_strategy_text: Specifies the distribution strategy for remote
execution when a jobspec is provided. Accepted values are strategy names
as specified by 'tf.distribute.<strategy>.name'.
```
Costs
This tutorial uses billable components of Google Cloud:
AI Platform Training
Cloud Storage
Learn about AI Platform Training pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AI Platform APIs
If running locally on your own machine, you will need to install the Google Cloud SDK.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Authenticate your Google Cloud account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip these steps.
End of explanation
!git clone https://github.com/tensorflow/cloud.git
!cd cloud/src/python && python3 setup.py -q bdist_wheel
!pip install -U cloud/src/python/dist/tensorflow_cloud-*.whl --quiet
Explanation: Clone and build tensorflow_cloud
To use the latest version of tensorflow_cloud, we will clone and build the repo. The resulting whl file is used both on the client side and in the construction of a docker image for remote execution.
End of explanation
# Restart the kernel after pip installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
We will automatically restart your kernel so the notebook has access to the packages you installed.
End of explanation
import os
import uuid
import numpy as np
import tensorflow as tf
from tensorflow_cloud.tuner import cloud_fit_client as client
# Setup and imports
REMOTE_DIR = '[gcs-bucket-for-temporary-files]' #@param {type:"string"}
REGION = 'us-central1' #@param {type:"string"}
PROJECT_ID = '[your-project-id]' #@param {type:"string"}
DOCKER_IMAGE_NAME = '[name-for-docker-image]' #@param {type:"string"}
! gcloud config set project $PROJECT_ID
IMAGE_URI = f'gcr.io/{PROJECT_ID}/{DOCKER_IMAGE_NAME}:latest' #@param {type:"string"}
Explanation: Import libraries and define constants
End of explanation
%%file Dockerfile
# Using DLVM base image
FROM gcr.io/deeplearning-platform-release/tf2-cpu
WORKDIR /root
# Path configuration
ENV PATH $PATH:/root/tools/google-cloud-sdk/bin
# Make sure gsutil will use the default service account
RUN echo '[GoogleCompute]\nservice_account = default' > /etc/boto.cfg
# Copy and install tensorflow_cloud wheel file
ADD cloud/src/python/dist/tensorflow_cloud-*.whl /tmp/
RUN pip3 install --upgrade /tmp/tensorflow_cloud-*.whl --quiet
# Sets up the entry point to invoke cloud_fit.
ENTRYPOINT ["python3","-m","tensorflow_cloud.tuner.cloud_fit_remote"]
!docker build -t {IMAGE_URI} -f Dockerfile . -q && docker push {IMAGE_URI}
Explanation: Create a docker file with tensorflow_cloud
In the next step we create a base docker file with the latest wheel file to use for remote training. You may use any base image. However, DLVM base images come pre-installed with most needed packages.
End of explanation
Simple model to compute y = wx + 1, with w trainable.
inp = tf.keras.layers.Input(shape=(1,), dtype=tf.float32)
times_w = tf.keras.layers.Dense(
units=1,
kernel_initializer=tf.keras.initializers.Constant([[0.5]]),
kernel_regularizer=tf.keras.regularizers.l2(0.01),
use_bias=False)
plus_1 = tf.keras.layers.Dense(
units=1,
kernel_initializer=tf.keras.initializers.Constant([[1.0]]),
bias_initializer=tf.keras.initializers.Constant([1.0]),
trainable=False)
outp = plus_1(times_w(inp))
simple_model = tf.keras.Model(inp, outp)
simple_model.compile(tf.keras.optimizers.SGD(0.002),
"mean_squared_error", run_eagerly=True)
Explanation: Tutorial 1 - Functional model
In this sample we will demonstrate using numpy.array as input data by creating a basic model and submitting it for remote training.
Define model building function
End of explanation
# Creating sample data
x = [[9.], [10.], [11.]] * 10
y = [[xi[0]/2. + 6] for xi in x]
Explanation: Prepare Data
End of explanation
# Verify the model by training locally for one step.
simple_model.fit(np.array(x), np.array(y), batch_size=len(x), epochs=1)
Explanation: Run the model locally for validation
End of explanation
# Create a unique remote sub folder path for assets and model training output.
SIMPLE_REMOTE_DIR = os.path.join(REMOTE_DIR, str(uuid.uuid4()))
print('your remote folder is %s' % (SIMPLE_REMOTE_DIR))
# Using default configuration with two workers dividing the dataset between the two.
simple_model_job_id = client.cloud_fit(model=simple_model, remote_dir = SIMPLE_REMOTE_DIR, region =REGION , image_uri=IMAGE_URI, x=np.array(x), y=np.array(y), epochs=100, steps_per_epoch=len(x)/2,verbose=2)
!gcloud ai-platform jobs describe projects/{PROJECT_ID}/jobs/{simple_model_job_id}
Explanation: Submit model and dataset for remote training
End of explanation
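# Optional (not in the original tutorial): stream the training logs from the
# notebook while the job runs. This assumes your gcloud SDK supports the
# `ai-platform jobs stream-logs` command; it blocks until the job finishes.
! gcloud ai-platform jobs stream-logs {simple_model_job_id}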
# Load the trained model from gcs bucket
trained_simple_model = tf.keras.models.load_model(os.path.join(SIMPLE_REMOTE_DIR, 'output'))
# Test that the saved model loads and works properly
trained_simple_model.evaluate(x,y)
Explanation: Retrieve the trained model
Once the training is complete you can access the trained model at remote_folder/output
End of explanation
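# Optional sanity check (not in the original tutorial): the restored model can
# be used directly for inference on a few new inputs.
print(trained_simple_model.predict(np.array([[9.], [10.], [11.]])))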
# create a model
fashion_mnist_model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
fashion_mnist_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Tutorial 2 - Sequential Models and Datasets
In this sample we will demonstrate using datasets by creating a basic model and submitting it for remote training.
Define model building function
End of explanation
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset = dataset.batch(32)
Explanation: Prepare Data
End of explanation
# Verify the model by training locally for one step. This is not necessary prior to cloud_fit(), however it is recommended.
fashion_mnist_model.fit(dataset, epochs=1)
Explanation: Run the model locally for validation
End of explanation
# Create a unique remote sub folder path for assets and model training output.
FASHION_REMOTE_DIR = os.path.join(REMOTE_DIR, str(uuid.uuid4()))
print('your remote folder is %s' % (FASHION_REMOTE_DIR))
fashion_mnist_model_job_id = client.cloud_fit(model=fashion_mnist_model, remote_dir = FASHION_REMOTE_DIR,region =REGION , image_uri=IMAGE_URI, x=dataset,epochs=10, steps_per_epoch=15,verbose=2)
!gcloud ai-platform jobs describe projects/{PROJECT_ID}/jobs/{fashion_mnist_model_job_id}
Explanation: Submit model and dataset for remote training
End of explanation
# Load the trained model from gcs bucket
trained_fashion_mnist_model = tf.keras.models.load_model(os.path.join(FASHION_REMOTE_DIR, 'output'))
# Test that the saved model loads and works properly
test_images, test_labels = test
test_images = test_images/255
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
test_dataset = test_dataset.batch(32)
trained_fashion_mnist_model.evaluate(test_dataset)
Explanation: Retrieve the trained model
Once the training is complete you can access the trained model at remote_folder/output
End of explanation |
12,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wine Selection
Framing
I want to buy a fine wine but I have no idea about wine selection. I'm not good at wine tasting.
I will use the data and understand what goes into making fine wine
Step1: Wine Category
Let's create a new column 'category' which signifies the category of wine - High (1) or Low (0)
Wine with quality greater than 5 (i.e., 6 or higher) is considered High quality; the rest are Low quality
Step2: This is the frequency count for each category
Step3: Visual Exploration
Let's see how the columns are related
To start, lets take 2 variables at a time to explore data
Correlation
Step4: Alcohol vs Category
Step5: Exercise
Step6: Time to build a predictive model
Let's build a model that can predict the category of wine, given information about alcohol content and volatile acidity
Building a predictive model involves training the model with historical data known as training data. Once we have the model trained, the model can predict labels (in this case, the category of wine) for the given features (test data)
We have 1600 rows of the wine data, lets split this data into 80
Step7: It’s a bird… it’s a plane… it… depends on your classifier’s threshold
-- Sancho McCann
Step8: Let's add more features - volatile acidity, sulphates, alcohol to predict the category
2 variable model
Step9: Accuracy Metrics
AUC
ROC
Misclassification Rate
Confusion Matrix
Precision & Recall
Confusion Matrix
Calculate True Positive Rate
TPR = TP / (TP+FN)
Calculate False Positive Rate
FPR = FP / (FP+TN)
Precise & Recall
AUC-ROC for the model | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (13,8)
df = pd.read_csv("./winequality-red.csv")
df.head()
df.shape
Explanation: Wine Selection
Framing
I want to buy a fine wine but I have no idea about wine selection. I'm not good at wine tasting.
I will use the data and understand what goes into making fine wine
End of explanation
#df.loc[df.b > 0, 'd'] = 1
df.loc[df.quality > 5, 'category'] = 1
df.loc[df.quality <= 5, 'category'] = 0
Explanation: Wine Category
Let's create a new column 'category' which signifies the category of wine - High (1) or Low (0)
Wine with quality greater than 5 (i.e., 6 or higher) is considered High quality; the rest are Low quality
End of explanation
df.category.value_counts()
df.head()
Explanation: This is the frequency count for each category
End of explanation
df.corr()
from pandas.tools.plotting import scatter_matrix
scatter_matrix(df, figsize=(15,15), diagonal='kde')
Explanation: Visual Exploration
Let's see how the columns are related
To start, let's take 2 variables at a time to explore the data
Correlation
End of explanation
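# Complementary view (illustrative sketch, not in the original notebook): a
# heatmap of the correlation matrix, using the seaborn import from above.
plt.figure(figsize=(10, 8))
sns.heatmap(df.corr(), annot=True, fmt=".2f", cmap="coolwarm")
plt.title("Correlation matrix of the wine features")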
df.plot(x="alcohol", y="category", kind="scatter")
Explanation: Alcohol vs Category
End of explanation
#df.plot(x="alcohol", y="volatile acidity", kind="scatter", c="category")
ax = df[df.category == 1].plot(x="alcohol", y="volatile acidity", kind="scatter", color="red", label="HIGH", s=100, alpha=0.5)
df[df.category == 0].plot(x="alcohol", y="volatile acidity", kind="scatter", color="green", label="LOW", s=100, alpha=0.5, ax=ax)
pd.set_option("precision",3)
Explanation: Exercise: Volatile Acidity vs Category
3 variable visualization
Let's add one more dimension to get a better sense of what is correlated
Alcohol vs Volatile Acidity vs Category
End of explanation
df.shape
df_train = df.iloc[:1280,]
df_test = df.iloc[1280:,]
X_train = df_train["volatile acidity"]
y_train = df_train["category"]
X_test = df_test["volatile acidity"]
y_test = df_test["category"]
X_train = X_train.values.reshape(-1, 1)   # use .values so the reshape works with current pandas
X_test = X_test.values.reshape(-1, 1)
from sklearn.linear_model import LogisticRegression
logistic_model = LogisticRegression()
logistic_model.fit(X_train, y_train)
sns.lmplot(data=df, x="alcohol", y="category", logistic=True)
Explanation: Time to build a predictive model
Let's build a model that can predict the category of wine, given information about alcohol content and volatile acidity
Building a predictive model involves training the model with historical data known as training data. Once we have the model trained, the model can predict labels (in this case, the category of wine) for the given features (test data)
We have 1600 rows of the wine data; let's split this data into an 80:20 ratio of training:testing data
Why do we need to do this?
We can compare the predicted label with the actual label.
By doing this, we can measure how accurate our model is.
End of explanation
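# Note: the positional slicing above keeps the rows in file order. An
# alternative sketch (hypothetical variable names, not used below) shuffles
# before splitting with scikit-learn's train_test_split.
from sklearn.model_selection import train_test_split
X_tr, X_te, y_tr, y_te = train_test_split(
    df[["volatile acidity"]], df["category"], test_size=0.2, random_state=42)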
predicted = logistic_model.predict(X_test)
df_compare = pd.DataFrame()
df_compare["actual"] = y_test
df_compare["predicted"] = predicted
df_compare["volatile acidity"] = df_test["volatile acidity"]
ax=df_compare.plot(x="volatile acidity", y="actual", kind="scatter", color="blue", label="actual")
df_compare.plot(x="volatile acidity", y="predicted", kind="scatter", color="red", label="predicted", ax=ax)
Explanation: It’s a bird… it’s a plane… it… depends on your classifier’s threshold
-- Sancho McCann
End of explanation
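# To make the classifier's threshold explicit (illustrative, not in the
# original notebook): work with predicted probabilities instead of the default
# 0.5 cutoff used by predict().
probs = logistic_model.predict_proba(X_test)[:, 1]        # P(category == 1)
custom_predicted = (probs > 0.3).astype(int)              # hypothetical looser threshold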
df_train = df.iloc[:1280,]
df_test = df.iloc[1280:,]
X_train = df_train[["sulphates", "alcohol"]]
y_train = df_train["category"]
X_test = df_test[["sulphates", "alcohol"]]
y_test = df_test["category"]
logistic_model = LogisticRegression()
logistic_model.fit(X_train, y_train)
predicted = logistic_model.predict(X_test)
df_compare = pd.DataFrame()
df_compare["actual"] = y_test
df_compare["predicted"] = predicted
df_compare["sulphates"] = df_test["sulphates"]
df_compare["alcohol"] = df_test["alcohol"]
df_compare.head()
ax = df_compare[df_compare.actual == 1].plot(x="alcohol", y="sulphates", kind="scatter", color="red", label="HIGH", s=100, alpha=0.5)
df_compare[df_compare.actual == 0].plot(x="alcohol", y="sulphates", kind="scatter", color="green", label="LOW", s=100, alpha=0.5, ax=ax)
ax = df_compare[df_compare.predicted == 1].plot(x="alcohol", y="sulphates", kind="scatter", color="red", label="HIGH", s=100, alpha=0.5)
df_compare[df_compare.predicted == 0].plot(x="alcohol", y="sulphates", kind="scatter", color="green", label="LOW", s=100, alpha=0.5, ax=ax)
Explanation: Let's add more features - sulphates and alcohol - to predict the category
2 variable model
End of explanation
from sklearn import metrics
#ols_auc = metrics.roc_auc_score(df_compare.actual, df_compare.predicted)
fpr, tpr, thresholds = metrics.roc_curve(df_compare.actual, df_compare.predicted)
plt.plot(fpr, tpr)
plt.plot([0,1],[0,1])
Explanation: Accuracy Metrics
AUC
ROC
Misclassification Rate
Confusion Matrix
Precision & Recall
Confusion Matrix
Calculate True Positive Rate
TPR = TP / (TP+FN)
Calculate False Positive Rate
FPR = FP / (FP+TN)
Precision & Recall
AUC-ROC for the model
End of explanation |
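# A minimal sketch (not in the original notebook) of the metrics listed above,
# computed with scikit-learn for the two-feature model's predictions.
print(metrics.confusion_matrix(df_compare.actual, df_compare.predicted))
print("precision:", metrics.precision_score(df_compare.actual, df_compare.predicted))
print("recall:   ", metrics.recall_score(df_compare.actual, df_compare.predicted))
print("accuracy: ", metrics.accuracy_score(df_compare.actual, df_compare.predicted))
print("AUC:      ", metrics.roc_auc_score(df_compare.actual, df_compare.predicted))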
12,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to the Lomb-Scargle Periodogram
Version 0.2
By AA Miller (Northwester/CIERA)
15 Sep 2021
Today we examine the detection of periodic signals in noisy, irregular data (the standard for ground-based astronomical surveys).
This lecture is strongly influenced by Understanding the Lomb-Scarge Periodogram, by Jake VanderPlas, a former DSFP lecturer (VanderPlas 2017). Beyond that, the original papers by Lomb 1976 and Scargle 1982 are also worth a read.
There are many, many papers on the use of the Lomb-Scargle periodogram (and other period detection methods). A recent study led by former DSFP lecturer, Matthew Graham, conducted a systematic analysis of many of the most popular tools used to search for periodic signals on actual astronomical data (Graham et al. 2013).$^\dagger$
$^\dagger$Somewhat to my (our?) dismay, they found that none of the solutions work really well across all use cases.
Problem 1) Helper Functions
We need to simulate and plot the same types of data again and again. To start this lecture we will create a few helper functions to minimize repetitive commands (e.g., phase-folding a light curve).
Problem 1a
Create a function, gen_periodic_data, that creates simulated data (including noise) over a grid of user supplied positions
Step1: Problem 1b
Generate a noise-free signal with $A = 2$ and $p = \pi$ over a regular grid between 0 and 10. Plot the results (and make sure gen_periodic_data behaves as you would expect).
Step2: Problem 1c
Create a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).
Include an optional argument, y_unc, to include uncertainties on the y values, when available.
Step3: Problem 1d
Plot the phase folded data generated in 1b.
Does you plot match your expectations?
Step4: Problem 2) A Brief Review of Fourier Analysis
In astronomical time series, we crave the detection of periodic signals because they can often provide fundamental insight into the sources we are studying (e.g., masses in a binary, pulsation timescales in stars, etc).
The standard$^\dagger$ choice for most astronomers to identify such signals is the Lomb-Scargle (LS) periodogram (Lomb 1976; Scargle 1982).
$^\dagger$Standard does not mean best, fastest, or even correct depending on your specific application.
At the heart of understanding any periodic signals is Fourier analysis. Thus, to understand how to interpret the LS periodogram, we first need to consider Fourier transforms.
Note - the following discussion is not complete. See Lecture I from this session for a more thorough review.
Given a continuous signal, $g(t)$ the Fourier transform of that signal is defined as
Step5: The common Fourier pairs are especially useful in light of the convolution theorem. Fourier transforms convert convolutions into point-wise products. We define a convolution as
Step6: Fourier transforms are all well and good, but ultimately we desire a measure of periodicity in actual observations of astrophysical sources, which cannot be (a) continuous, or (b) infinite. So, we cannot calculate a Fourier transform.
Nyquist Frequency
The first thing to understand with real world observations is the Nyquist frequency limit. If observations are obtained in a uniformly spaced manner at a rate of $f_0 = 1/T$ one can only recover the frequncy information if the signal is band-limited between frequencies $\pm f_0/2$. Put another way, the highest frequency that can be detected in such data is $f_0/2$.
This result can be (somewhat) intuited by looking at simulated data.
Problem 2a
Generate and plot a periodic signal with $f = f_\mathrm{Ny} = 1/2$ on a grid from 0 to 10, comprising of 10 even samples (i.e., 0, 1, 2, 3, ..., 10). Overplot the underlying signal in addition to the observations.
Step7: Sampling a signal directly at the Nyquist frequency results in a lack of any variability. But does this just mean that $f_\mathrm{Ny}$ is special? What happens at $f > f_\mathrm{Ny}$?
Problem 2b
As above, generate and plot a periodic signal with $f = 0.7$ on an even grid from 0 to 10. Overplot the underlying signal in addition to the observations.
Step8: From the plot the signal is clearly variable (unlike when $f = f_\mathrm{Ny}$). However, there are fewer than 2 observations per cycle.
Problem 2c
Overplot a source with $f = 2.7$ on the same data shown in 2b.
Step9: The observations are identical! Here is what you need to remember about the Nyquist frequency
Step10: Problem 3b
Write a function to minimize the $\chi^2$ given everything but $A_f$ and $\phi_f$.
Hint - minimize within the scipy package is helpful.
Step11: Problem 3c
Write a function, ls_periodogram, to calculate the LS periodogram for observations $y$, $\sigma_y$, $t$ over a frequency grid f_grid.
Step12: Problem 3d
Generate a periodic signal with 100 observations taken at random intervals over a time period of 10 days. Use an input period of 5.25, amplitude of 7.4, and variance of the noise = 0.8. Then compute and plot the periodogram for the simulated data. Do you recover the simulated period?
Hint - set the minimum frequency in the grid to $1/T$ where $T$ is the duration of the observations. Set the maximum frequency to 10, and use an equally spaced grid with 50 points.
Step13: Problem 3e
For the same data, include 1000 points in f_grid and calculate and plot the periodogram.
Now do you recover the correct period?
Step14: Problem 3f
Plot the phase-folded data at the newly found "best" fit period.
Step15: Congratulations
You did it! You just developed the software necessary to find periodic signals in sparsely sampled, noisy data.
You are ready to conquer LSST.
But wait!
There should be a few things that are bothering you.
First and foremost, why did we use a grid with 50 points and then increase that to 1000 points for the previous simulation?
There are many important ramifications following the choice of an evaluation grid for the LS periodogram. When selecting the grid upon which to evaluate $f$ one must determine both the limits for the grid and the spacing within the grid.
The minimum frequency is straightforward
Step16: Code from PracticalLombScargle by Jake Van der Plas
The functions below implement plotting routines developed by Jake to illustrate some properties of Fourier transforms.
This code is distributed under a BSD-3 licence, which is repeated below | Python Code:
import numpy as np
import matplotlib.pyplot as plt
def gen_periodic_data(x, period=1, amplitude=1, phase=0, noise=0):
'''Generate periodic data given the function inputs
y = A*sin(2*pi*x/p - phase) + noise
Parameters
----------
x : array-like
input values to evaluate the array
period : float (default=1)
period of the periodic signal
amplitude : float (default=1)
amplitude of the periodic signal
phase : float (default=0)
phase offset of the periodic signal
noise : float (default=0)
variance of the noise term added to the periodic signal
Returns
-------
y : array-like
Periodic signal evaluated at all points x
'''
y = amplitude*np.sin(2*np.pi*x/period - phase)
# Gaussian noise with variance = noise (standard deviation = sqrt(noise))
dy = np.random.normal(0, np.sqrt(noise), size=len(x))
return y + dy
Explanation: Introduction to the Lomb-Scargle Periodogram
Version 0.2
By AA Miller (Northwestern/CIERA)
15 Sep 2021
Today we examine the detection of periodic signals in noisy, irregular data (the standard for ground-based astronomical surveys).
This lecture is strongly influenced by Understanding the Lomb-Scargle Periodogram, by Jake VanderPlas, a former DSFP lecturer (VanderPlas 2017). Beyond that, the original papers by Lomb 1976 and Scargle 1982 are also worth a read.
There are many, many papers on the use of the Lomb-Scargle periodogram (and other period detection methods). A recent study led by former DSFP lecturer, Matthew Graham, conducted a systematic analysis of many of the most popular tools used to search for periodic signals on actual astronomical data (Graham et al. 2013).$^\dagger$
$^\dagger$Somewhat to my (our?) dismay, they found that none of the solutions work really well across all use cases.
Problem 1) Helper Functions
We need to simulate and plot the same types of data again and again. To start this lecture we will create a few helper functions to minimize repetitive commands (e.g., phase-folding a light curve).
Problem 1a
Create a function, gen_periodic_data, that creates simulated data (including noise) over a grid of user supplied positions:
$$ y = A\,sin\left(\frac{2 \pi x}{P} - \phi\right) + \sigma_y$$
where $A, P, \phi$ are inputs to the function. gen_periodic_data should include Gaussian noise, $\sigma_y$, for each output $y_i$.
End of explanation
x = np.linspace(0, 10, 500)   # the grid density (500 points) is an arbitrary choice
y = gen_periodic_data(x, period=np.pi, amplitude=2)   # noise defaults to 0
fig, ax = plt.subplots()
ax.scatter(x,y, edgecolors='0.2', linewidths=0.5)
ax.set_xlabel('x')
ax.set_ylabel('y')
fig.tight_layout()
Explanation: Problem 1b
Generate a noise-free signal with $A = 2$ and $p = \pi$ over a regular grid between 0 and 10. Plot the results (and make sure gen_periodic_data behaves as you would expect).
End of explanation
def phase_plot(x, y, period, y_unc = 0.0):
'''Create phase-folded plot of input data x, y
Parameters
----------
x : array-like
data values along abscissa
y : array-like
data values along ordinate
period : float
period to fold the data
y_unc : array-like
uncertainty of the
'''
phases = (x/period) % 1
if type(y_unc) == float:
y_unc = np.zeros_like(x)
plot_order = np.argsort(phases)
fig, ax = plt.subplots()
ax.errorbar(phases[plot_order], y[plot_order], y_unc[plot_order],
fmt='o', mec="0.2", mew=0.1)
ax.set_xlabel("phase")
ax.set_ylabel("signal")
fig.tight_layout()
Explanation: Problem 1c
Create a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).
Include an optional argument, y_unc, to include uncertainties on the y values, when available.
End of explanation
phase_plot(x, y, np.pi)   # fold the Problem 1b data on its true period, p = pi
Explanation: Problem 1d
Plot the phase folded data generated in 1b.
Does your plot match your expectations?
End of explanation
fourier_pairs_plot()
Explanation: Problem 2) A Brief Review of Fourier Analysis
In astronomical time series, we crave the detection of periodic signals because they can often provide fundamental insight into the sources we are studying (e.g., masses in a binary, pulsation timescales in stars, etc).
The standard$^\dagger$ choice for most astronomers to identify such signals is the Lomb-Scargle (LS) periodogram (Lomb 1976; Scargle 1982).
$^\dagger$Standard does not mean best, fastest, or even correct depending on your specific application.
At the heart of understanding any periodic signals is Fourier analysis. Thus, to understand how to interpret the LS periodogram, we first need to consider Fourier transforms.
Note - the following discussion is not complete. See Lecture I from this session for a more thorough review.
Given a continuous signal, $g(t)$ the Fourier transform of that signal is defined as:
$$\hat{\mathrm{g}}(f) = \int_{-\infty}^{\infty} g(t) \,e^{-2\pi i f t} \,dt,$$
where $i$ is an imaginary number.
The inverse of this equation is defined as:
$$ g(t) = \int_{-\infty}^{\infty} \hat{\mathrm{g}}(f) \,e^{2\pi i f t} \,df.$$
For convenience, we will use the Fourier transform operator $\mathcal{F}$, from which the previous equations reduce to:
$$\mathcal{F}(g) = \hat g$$
$$\mathcal{F}^{-1}(\hat{g}) = g$$
There are many useful properties of the Fourier transform including that the Fourier transform is a linear operator. Additionally, a time shift in the signal imparts a phase shift in the transform.
Perhaps most importantly for our present purposes, however, is that the squared amplitude of the resulting transform allows us to get rid of the imaginary component and measure the power spectral density or power spectrum:
$$ \mathcal{P}_g = \left|\mathcal{F}(g)\right|^2.$$
The power spectrum is a real-valued function that quantifies the contribution of each frequency $f$ to the total signal in $g$. The power spectrum thus provides a way to identify the dominant frequency in any given signal.
Next we consider some common Fourier pairs, that will prove helpful in our interpretation of the LS periodogram.
End of explanation
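# A quick numerical illustration (not from the original notebook): the squared
# magnitude of the FFT of a pure, evenly sampled sinusoid peaks at the input
# frequency.
t_demo = np.arange(0, 100, 0.1)                  # dt = 0.1, 1000 samples
sig_demo = np.sin(2*np.pi*0.5*t_demo)            # true frequency f = 0.5
power_demo = np.abs(np.fft.rfft(sig_demo))**2
freq_demo = np.fft.rfftfreq(len(t_demo), d=0.1)
print(freq_demo[np.argmax(power_demo)])          # ~0.5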
fourier_pairs_plot()
Explanation: The common Fourier pairs are especially useful in light of the convolution theorem. Fourier transforms convert convolutions into point-wise products. We define a convolution as:
$$ [f \ast g] (t) = \int_{-\infty}^{\infty} f(\tau) \,g(t - \tau) \,d\tau,$$
where $\ast$ is the convolution symbol.
From the convolution theorem:
$$ \mathcal{F}\{f \ast g\} = \mathcal{F}(f) \mathcal{F}(g) $$
Furthermore, the Fourier transform of a product is equal to the convolution of the Fourier transforms:
$$ \mathcal{F}\{f \cdot g\} = \mathcal{F}(f) \ast \mathcal{F}(g) $$
This property will be very important for understanding the Lomb-Scargle periodogram.
End of explanation
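# Quick numerical check of the convolution theorem (illustrative, not from the
# original notebook): the FFT of a circular convolution equals the point-wise
# product of the individual FFTs.
rng = np.random.default_rng(42)
f_demo = rng.normal(size=64)
g_demo = rng.normal(size=64)
circ_conv = np.array([np.sum(f_demo*np.roll(g_demo[::-1], k+1)) for k in range(64)])
print(np.allclose(np.fft.fft(circ_conv), np.fft.fft(f_demo)*np.fft.fft(g_demo)))  # True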
x = np.linspace(0, 10, 11)                        # samples at t = 0, 1, ..., 10
y = gen_periodic_data(x, period=2)                # f = f_Ny = 1/2  ->  period = 2
x_signal = np.linspace(0, 10, 1000)               # dense grid for the underlying signal
y_signal = gen_periodic_data(x_signal, period=2)
fig, ax = plt.subplots(figsize=(8,4))
ax.scatter(x,y)
ax.plot(x_signal, y_signal)
ax.set_xlabel('x')
ax.set_ylabel('y')
fig.tight_layout()
Explanation: Fourier transforms are all well and good, but ultimately we desire a measure of periodicity in actual observations of astrophysical sources, which cannot be (a) continuous, or (b) infinite. So, we cannot calculate a Fourier transform.
Nyquist Frequency
The first thing to understand with real world observations is the Nyquist frequency limit. If observations are obtained in a uniformly spaced manner at a rate of $f_0 = 1/T$ one can only recover the frequency information if the signal is band-limited between frequencies $\pm f_0/2$. Put another way, the highest frequency that can be detected in such data is $f_0/2$.
This result can be (somewhat) intuited by looking at simulated data.
Problem 2a
Generate and plot a periodic signal with $f = f_\mathrm{Ny} = 1/2$ on a grid from 0 to 10, comprising 10 even samples (i.e., 0, 1, 2, 3, ..., 10). Overplot the underlying signal in addition to the observations.
End of explanation
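# Illustrative check (not part of the original notebook): sampling a sinusoid
# exactly at f = f_Ny = 1/2 with Delta t = 1 gives sin(pi*t), which is
# identically zero at every integer sample time, hence the flat observations
# in the Problem 2a plot above.
print(np.allclose(np.sin(np.pi*np.arange(0, 11)), 0))   # True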
x = np.linspace(0, 10, 11)
y = gen_periodic_data(x, period=1/0.7)            # f = 0.7
x_signal = np.linspace(0, 10, 1000)
y_signal = gen_periodic_data(x_signal, period=1/0.7)
fig, ax = plt.subplots(figsize=(8,4))
ax.scatter(x,y)
ax.plot(x_signal, y_signal)
ax.set_xlabel('x')
ax.set_ylabel('y')
fig.tight_layout()
Explanation: Sampling a signal directly at the Nyquist frequency results in a lack of any variability. But does this just mean that $f_\mathrm{Ny}$ is special? What happens at $f > f_\mathrm{Ny}$?
Problem 2b
As above, generate and plot a periodic signal with $f = 0.7$ on an even grid from 0 to 10. Overplot the underlying signal in addition to the observations.
End of explanation
x = np.linspace(0, 10, 11)
y = gen_periodic_data(x, period=1/0.7)            # f = 0.7, as in Problem 2b
x_signal = np.linspace(0, 10, 1000)
y_signal = gen_periodic_data(x_signal, period=1/0.7)
fig, ax = plt.subplots(figsize=(8,4))
ax.scatter(x,y)
ax.plot(x_signal, y_signal)
y_high = gen_periodic_data(x, period=1/2.7)              # f = 2.7
y_signal_high = gen_periodic_data(x_signal, period=1/2.7)
ax.scatter(x,y_high)
ax.plot(x_signal, y_signal_high)
ax.set_xlabel('x')
ax.set_ylabel('y')
fig.tight_layout()
Explanation: From the plot the signal is clearly variable (unlike when $f = f_\mathrm{Ny}$). However, there are fewer than 2 observations per cycle.
Problem 2c
Overplot a source with $f = 2.7$ on the same data shown in 2b.
End of explanation
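# The two sets of observations plotted above really are numerically identical
# (illustrative check): for Delta t = 1 (f_Ny = 0.5), f = 2.7 = 0.7 + 2*2*f_Ny,
# so the two sinusoids agree at every integer sample time.
t_int = np.arange(0, 11)
print(np.allclose(np.sin(2*np.pi*0.7*t_int), np.sin(2*np.pi*2.7*t_int)))  # True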
def chi2(theta, y, y_unc, x, f):
a = theta[0]
phi = theta[1]
model = a*np.sin(2*np.pi*f*(x - phi))
return np.sum(((y - model)/y_unc)**2)
Explanation: The observations are identical! Here is what you need to remember about the Nyquist frequency:
If you are going to obtain observations at regular intervals, and there is a specific signal you wish to detect, then be sure to sample the data such that $f_\mathrm{Ny} > f_\mathrm{signal}$.
For all $f > f_\mathrm{Ny}$, $f$ will be aliased with $f \pm 2n f_\mathrm{Ny}$ signals, where $n$ is an integer. Practically speaking, this means it does not make sense to search for signals with $f > f_\mathrm{Ny}$.
Finally, (and this is something that is often wrong in the literature) there is no Nyquist limit for unevenly sampled data (see VanderPlas 2017 for further details). Thus, for (virtually all) ground-based observing one need not worry about the Nyquist limit.
Staying on the topic of non-continuous observations, I present without derivation the discrete Fourier transform:
$$ \hat g_\mathrm{obs}(f) = \sum_{n = 0}^N g_n\,e^{-2\pi i f n\Delta t}$$
where $g_n = g(n\Delta t)$, and $\Delta t$ is the sampling interval. Our discussion of the Nyquist frequency tells us that we cannot detect frequencies $f > 1/2\Delta T$. Thus, the relevant frequencies to search for power given $\Delta t$ are between 0 and $f_\mathrm{Ny}$, which we can sample on a grid $\Delta f = 1/(N \Delta t)$.
From there:
$$\hat g_k = \sum_{n = 0}^N g_n\,e^{-2\pi i k n / N}$$
where $\hat g_k = \hat g_\mathrm{obs} (k\Delta f)$. This is the discrete Fourier transform.
I said a full derivation will not be provided, and that is true. To understand how we went from a continuous integral to the summation above, recall that regular observations of a continuous signal over a finite interval are equivalent to multiplying the continuous signal by a Dirac comb function and a window function. The delta functions from the Dirac comb function collapse the integral to a sum, while the window function limits that sum from $0$ to $N$.
From the discrete Fourier transform we can then calculate the periodogram (an estimator of the power spectrum):
$$\mathcal{P}(f) = \frac{1}{N}\left|\sum_{n=1}^{N} g_n\,e^{-2\pi i f n\Delta t}\right|^2$$
which is also called the classical periodogram or the Schuster periodogram (Schuster 1898).
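For concreteness, here is a brief numpy sketch of the classical periodogram for evenly sampled data (an illustrative addition, not from the original notebook); np.fft.rfft evaluates exactly the discrete sum above:
def schuster_periodogram(g, delta_t):
    # classical (Schuster) periodogram: |DFT|^2 / N for frequencies 0 <= f_k <= f_Ny
    N = len(g)
    f = np.arange(N//2 + 1)/(N*delta_t)
    return f, np.abs(np.fft.rfft(g))**2/N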
Problem 3) The LS Periodogram
Ultimately, we care about applications where the data are not perfectly uniformly sampled (even Kepler data is not uniformly sampled).
We can re-write the classical periodogram as:
$$\mathcal{P}(f) = \frac{1}{N}\left|\sum_{n=1}^{N} g_n\,e^{-2\pi i f t_n}\right|^2$$
where $t_n$ corresponds to the observation times. Irregular sampling removes a lot of the nice statistical properties of the discrete Fourier transform. Scargle (1982) was able to address these issues via a generalized form of the periodogram.
[Full disclosure - I'm just skipping the derivation in this case]
Instead, I will simplify things slightly by using the fact that Scargle's modified periodogram is identical to the result one obtains by fitting a sinusoid model to the data at each frequency $f$ and constructing a "periodogram" from the corresponding $\chi^2$ values at each frequency $f$ [this was considered in great detail by Lomb (1976)].
Note - to this day I find this particular identity remarkable.
Thus, using the model:
$$y(t;f) = A_f \sin(2\pi f(t - \phi_f))$$
we can calculate the $\chi^2$ for every frequency $f$:
$$\chi^2 = \sum_n (y_n - y(t_n; f))^2$$
The "best" model for a given frequency requires the selection of $A_f$ and $\phi_f$ that minimizes $\chi^2$, which we will call $\hat \chi^2$. Scargle (1982) then showed that the Lomb-Scargle periodogram can be written
$$\mathcal{P}_\mathrm{LS}(f) = \frac{1}{2}\left[ \hat \chi^2_0 - \hat \chi^2(f) \right]$$
where $\hat \chi^2_0$ is the value for a non-varying reference model.
This realization further enables the inclusion of observational uncertainty in the periodogram, via a familiar adjustment to the $\chi^2$ value:
$$\chi^2 = \sum_n \left(\frac{y_n - y(t_n; f)}{\sigma_n}\right)^2$$
where $\sigma_n$ is the uncertainty on the each measurement, $y_n$.
Now we will construct a Lomb-Scargle periodogram.
Problem 3a
Write a function, chi2, to calculate the $\chi^2$ given $f$, $A_f$, and $\phi$, for observations $y_n$ with uncertainty $\sigma_{y,n}$ taken at times $t_n$.
Hint - store $A_f$ and $\phi$ in a single variable theta, where a = theta[0] and phi = theta[1] (for later)
End of explanation
from scipy.optimize import minimize
def min_chi2(theta, y, y_unc, x, f):
    res = minimize(chi2, theta, args=(y, y_unc, x, f))
return res.fun
Explanation: Problem 3b
Write a function to minimize the $\chi^2$ given everything but $A_f$ and $\phi_f$.
Hint - minimize within the scipy package is helpful.
End of explanation
def ls_periodogram(y, y_unc, x, f_grid):
psd = np.empty_like(f_grid)
chi2_0 = np.sum(((y - np.mean(y))/y_unc)**2)
for f_num, f in enumerate(f_grid):
psd[f_num] = 0.5*(chi2_0 - min_chi2([0,0], y, y_unc, x, f))
return psd
Explanation: Problem 3c
Write a function, ls_periodogram, to calculate the LS periodogram for observations $y$, $\sigma_y$, $t$ over a frequency grid f_grid.
End of explanation
np.random.seed(185)
# calculate the periodogram
x = 10*np.random.rand(100)
y = gen_periodic_data(x, period=5.25, amplitude=7.4, noise=0.8)
y_unc = np.ones_like(x)*np.sqrt(0.8)
f_grid = np.linspace(0.1, 10, 50)   # f_min = 1/T with T = 10 d, f_max = 10, 50 points (per the hint)
psd_ls = ls_periodogram(y, y_unc, x, f_grid)
# plot the periodogram
fig, ax = plt.subplots()
ax.plot(1/f_grid, psd_ls)
ax.set_ylabel('P')
ax.set_xlabel('Period')
fig.tight_layout()
Explanation: Problem 3d
Generate a periodic signal with 100 observations taken at random intervals over a time period of 10 days. Use an input period of 5.25, amplitude of 7.4, and variance of the noise = 0.8. Then compute and plot the periodogram for the simulated data. Do you recover the simulated period?
Hint - set the minimum frequency in the grid to $1/T$ where $T$ is the duration of the observations. Set the maximum frequency to 10, and use an equally spaced grid with 50 points.
End of explanation
# calculate the periodogram
f_grid = np.linspace(0.1, 10, 1000)   # same limits as before, now with 1000 grid points
psd_ls = ls_periodogram(y, y_unc, x, f_grid)
# plot the periodogram
fig,ax = plt.subplots()
ax.plot(1/f_grid, psd_ls)
ax.set_ylabel('P')
ax.set_xlabel('Period')
fig.tight_layout()
print("The best fit period is: {:.4f}".format(1/f_grid[np.argmax(psd_ls)]))
Explanation: Problem 3e
For the same data, include 1000 points in f_grid and calculate and plot the periodogram.
Now do you recover the correct period?
End of explanation
phase_plot( # complete
Explanation: Problem 3f
Plot the phase-folded data at the newly found "best" fit period.
End of explanation
f_min = 1/(10*365.25)       # one cycle over the ~10 yr survey duration (units of 1/day)
f_max = 24                  # shortest recoverable period of 1 hr = 1/24 d
delta_f = 1/(5*10*365.25)   # delta_f = 1/(n0*T) with n0 = 5
f_grid = np.arange(f_min, f_max, delta_f)
print("{:d} grid points are needed to sample the periodogram".format(len(f_grid)))
Explanation: Congratulations
You did it! You just developed the software necessary to find periodic signals in sparsely sampled, noisy data.
You are ready to conquer LSST.
But wait!
There should be a few things that are bothering you.
First and foremost, why did we use a grid with 50 points and then increase that to 1000 points for the previous simulation?
There are many important ramifications following the choice of an evaluation grid for the LS periodogram. When selecting the grid upon which to evaluate $f$ one must determine both the limits for the grid and the spacing within the grid.
The minimum frequency is straightforward: $f_\mathrm{min} = 1/T$ corresponds to a signal that experiences 1 cycle in the span of the data. Computationally, $f_\mathrm{min} = 0$ does not add much time.
The maximum frequency is straightforward (if you have evenly spaced data): $f_\mathrm{Ny}$.
What if the data are not evenly spaced (a situation for which we said $f_\mathrm{Ny}$ does not exist)?
There are many ad-hoc methods in the literature, such as $f_\mathrm{max} = 1/<\Delta T>$, where $<\Delta T>$ is the mean separation of consecutive observations. Again - this is not correct.
VanderPlas (2017) discusses this in detail.
My useful practical advice is to set $f_\mathrm{max}$ to the maximum frequency that you might expect to see in the data (for example, with the exception of a few extreme white dwarf systems, essentially no stars show periodicity at $< 1\,\mathrm{hr}$).
Of course, we still haven't decided what grid to adopt. If we use too few points, we will not resolve the peak in the periodogram. Alternatively, if we include too many points in the grid we will waste a lot of computation.
Fortunately, we can determine $\Delta f$ based on the window function (i.e., duration of the observations). The Fourier transform of a window function of length $T$ produces a sinc signal with width $\sim 1/T$. Thus, we need $\Delta f$ to sample $\sim 1/T$, which means $\Delta f = 1/(n_0 T)$, where $n_0$ is a constant, and 5 is a good choice for $n_0$.
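Putting those rules together, a small helper along these lines (a sketch I am adding for illustration) builds the evaluation grid directly from the observation times:
def freq_grid(t, f_max, n0=5):
    # grid from f_min = 1/T to f_max with spacing delta_f = 1/(n0*T)
    T = t.max() - t.min()
    return np.arange(1/T, f_max, 1/(n0*T))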
Problem 3g
Calculate the optimal frequency grid for Rubin light curves. Assume time is measured in days, a survey duration of 10 years and that the observations cannot recover periods less than 1 hr.
What is the size of this frequency grid?
End of explanation
def fourier_pairs_plot():
fig, ax = plt.subplots(4, 2, figsize=(10, 6))
fig.subplots_adjust(left=0.04, right=0.98, bottom=0.02, top=0.95,
hspace=0.3, wspace=0.2)
x = np.linspace(-5, 5, 1000)
for axi in ax.flat:
axi.xaxis.set_major_formatter(plt.NullFormatter())
axi.yaxis.set_major_formatter(plt.NullFormatter())
# draw center line
axi.axvline(0, linestyle='dotted', color='gray')
axi.axhline(0, linestyle='dotted', color='gray')
style_re = dict(linestyle='solid', color='k', linewidth=2)
style_im = dict(linestyle='solid', color='gray', linewidth=2)
text_style = dict(size=14, color='gray')
# sine -> delta
ax[0, 0].plot(x, np.cos(x),**style_re)
ax[0, 0].set(xlim=(-5, 5), ylim=(-1.2, 1.2))
ax[0, 0].annotate('', (-np.pi, 0), (np.pi, 0),
arrowprops=dict(arrowstyle='|-|', color='gray'))
ax[0, 0].text(0, 0, '$1/f_0$', ha='center', va='bottom', **text_style)
ax[0, 0].set_title('Sinusoid')
ax[0, 1].plot([-5, 2, 2, 2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].plot([-5, -2, -2, -2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[0, 1].annotate('', (0, 0.4), (2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].annotate('', (0, 0.4), (-2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].text(1, 0.45, '$+f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].text(-1, 0.45, '$-f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].set_title('Delta Functions')
# gaussian -> gaussian
ax[1, 0].plot(x, np.exp(-(2 * x) ** 2), **style_re)
ax[1, 0].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 0].annotate('', (0, 0.35), (0.6, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 0].text(0, 0.4, '$\sigma$', ha='center', va='bottom', **text_style)
ax[1, 0].set_title('Gaussian')
ax[1, 1].plot(x, np.exp(-(x / 2) ** 2), **style_re)
ax[1, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 1].annotate('', (0, 0.35), (2, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 1].text(0, 0.4, '$(2\pi\sigma)^{-1}$', ha='center', va='bottom', **text_style)
ax[1, 1].set_title('Gaussian')
# top hat -> sinc
ax[2, 0].plot([-2, -1, -1, 1, 1, 2], [0, 0, 1, 1, 0, 0], **style_re)
ax[2, 0].set(xlim=(-2, 2), ylim=(-0.3, 1.2))
ax[2, 0].annotate('', (-1, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 0].text(0.0, 0.5, '$T$', ha='center', va='bottom', **text_style)
ax[2, 0].set_title('Top Hat')
ax[2, 1].plot(x, np.sinc(x), **style_re)
ax[2, 1].set(xlim=(-5, 5), ylim=(-0.3, 1.2))
ax[2, 1].annotate('', (-1, 0), (1, 0), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 1].text(0.0, 0.0, '$2/T$', ha='center', va='bottom', **text_style)
ax[2, 1].set_title('Sinc')
# comb -> comb
ax[3, 0].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 0].set(xlim=(-5.5, 5.5), ylim=(-0.2, 1.2))
ax[3, 0].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 0].text(0.5, 0.6, '$T$', ha='center', va='bottom', **text_style)
ax[3, 0].set_title('Dirac Comb')
ax[3, 1].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 1].set(xlim=(-2.5, 2.5), ylim=(-0.2, 1.2));
ax[3, 1].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 1].text(0.5, 0.6, '$1/T$', ha='center', va='bottom', **text_style)
ax[3, 1].set_title('Dirac Comb')
for i, letter in enumerate('abcd'):
ax[i, 0].set_ylabel('({0})'.format(letter), rotation=0)
# Draw arrows between pairs of axes
for i in range(4):
left = ax[i, 0].bbox.transformed(fig.transFigure.inverted()).bounds
right = ax[i, 1].bbox.transformed(fig.transFigure.inverted()).bounds
x = 0.5 * (left[0] + left[2] + right[0])
y = left[1] + 0.5 * left[3]
fig.text(x, y, r'$\Longleftrightarrow$',
ha='center', va='center', size=30)
Explanation: Code from PracticalLombScargle by Jake Van der Plas
The functions below implement plotting routines developed by Jake to illustrate some properties of Fourier transforms.
This code is distributed under a BSD-3 licence, which is repeated below:
Copyright (c) 2015, Jake Vanderplas
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
Neither the name of PracticalLombScargle nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
End of explanation |
12,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Example
Step5: Problem data
this algorithm has the same flavor as the thing I'd like to do, but actually converges very slowly
will take a very long time to converge anything other than the smallest examples
don't worry if convergence plots look flat when dealing with 100s of rows
Step6: I'll fix the test data to something large enough so that each iteration's computational task is significant
just 10 iterations of the algorithm (along with the setup factorizations) in serial takes about a second on my laptop
Step7: parallel map
Step12: Dask Solution
I create a few weird functions to have pretty names in dask graphs
Step13: Visualize
the setup step involving the matrix factorizations
Step14: visualize one iteration
Step15: The setup step along with 3 iterations gives the following dask graph. (Which I'm showing mostly because it was satisfying to make.)
Step16: Reuse dask graph
Obviously, it's not efficient to make a huge dask graph, especially if I'll be doing thousands of iterations.
I really just want to create the dask graph for computing $x^{k+1}$ from $x^k$ and re-apply it at every iteration.
Is it more efficient to create that dask graph once and reuse it? Maybe that's a premature optimization... I'll do it anyway for fun.
Step17: iterative projection algorithm thoughts
I don't see any performance gain in using the threaded scheduler, but I don't see what I'm doing wrong here
I don't see any difference in runtime switching between dask.set_options(get=dask.threaded.get) and dask.set_options(get=dask.async.get_sync); not sure if it's actually changing the scheduler, but I haven't looked into it closely
random Dask thoughts
would be nice to be able to rename a Value and change the data that it points to
at least for visualizing, I wanted more control over intermediate value names and the ability to enumerate those names with subscripts, which lead to my kludgy functions above. probably not a high-priority item for you tho...
would be nice if dask.visualize also worked on Dask graph dictionaries, so I don't have to remember dask.dot.dot_graph
Dask attempt 2
I tried a different method for using the threaded scheduler, but got similar results
Step18: Runtime error
As I was experimenting and switching schedulers and between my first and second dask attempts, I would very often get the following "can't start new thread" error
I would also occasionally get an "TypeError | Python Code:
import numpy as np
from scipy.linalg import cho_factor, cho_solve
%matplotlib inline
import matplotlib.pyplot as plt
def factor(A,b):
    """Return Cholesky factorization data to project onto Ax=b."""
AAt = A.dot(A.T)
chol = cho_factor(AAt, overwrite_a=True)
c = cho_solve(chol, b, overwrite_b=False)
c = A.T.dot(c)
proj_data = dict(A=A, b=b, chol=chol,c=c)
return proj_data
def proj(proj_data, x0):
    """Use Cholesky factorization data to project onto Ax=b."""
A, chol, c = (proj_data[k] for k in 'A chol c'.split())
x = A.dot(x0)
x = cho_solve(chol, x, overwrite_b=True)
x = A.T.dot(x)
x = x0 - x + c
return x
def average(*vals):
    """Come to a consensus."""
return np.mean(vals, axis=0)
def make_data(k, rows, seed=0):
    """Make some random test data."""
# each of k chunks gets 'rows' rows of the full matrix
n = rows*k
np.random.seed(seed)
Ahat = np.random.randn(n,n)
bhat = np.random.randn(n)
x_true = np.linalg.solve(Ahat,bhat)
x0 = np.random.randn(n)
As = []
bs = []
for i in range(k):
s = slice(i*rows,(i+1)*rows)
As += [Ahat[s,:]]
bs += [bhat[s]]
return As, bs, x_true, x0
Explanation: Example: $\hat{A}x=\hat{b}$
want to solve $\hat{A}x=\hat{b}$
$\hat{A}$ is so big that we can't work on it on a single machine
split $\hat{A}$ into groups of rows
$$
\hat{A}x=\hat{b}
$$
$$
\begin{bmatrix}
-\ A_1 - \
-\ A_2 - \
-\ A_3 -
\end{bmatrix}x =
\begin{bmatrix}
b_1 \
b_2 \
b_3
\end{bmatrix}
$$
$\hat{A}x=\hat{b}$ equivalent to set intersection problem
$$
x \in \lbrace z \mid A_1z = b_1 \rbrace \cap \lbrace z \mid A_2z = b_2 \rbrace \cap \lbrace z \mid A_3z = b_3 \rbrace
$$
i.e., find $x$ in the intersection of subspaces
on each machine, easy to project onto its subspace:
$$
\begin{array}{ll}
\mbox{minimize} & \|x - x_0 \|_2^2 \
\mbox{subject to} & A_i x = b_i
\end{array}
$$
"easy" because it's just linear algebra; involves a matrix factorization that can be reused at each iteration
let $\mbox{proj}_i(x_0)$ be the projection of $x_0$ onto the subspace
$$\lbrace z \mid A_iz = b_i \rbrace$$
Projection
projection of $x_0$ onto $\lbrace x \mid Ax = b\rbrace$ given by
$$
\mbox{proj}(x_0) = \left(I - A^T (AA^T)^{-1}A\right)x_0 + A^T (AA^T)^{-1}b = \left(I - A^T (AA^T)^{-1}A\right)x_0 + c
$$
One-time computation
compute $AA^T$ and form its Cholesky factorization once to reuse at each iteration
compute $c = A^T (AA^T)^{-1}b$ to reuse
Iteration
$z_i^{k+1} = \mbox{proj}_i(x^k)$
$\bar{x}^{k+1} = \frac{1}{N} \sum_{i=1}^N z_i^{k+1}$
the projection operation involves using the Cholesky factorization of $AA^T$ to compute $A^T (AA^T)^{-1}Ax^k$
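A quick sanity check (my addition, using the factor and proj helpers defined above): the projected point should land exactly on the constraint set {z | Az = b}.
A_chk = np.random.randn(3, 8)
b_chk = np.random.randn(3)
x_chk = proj(factor(A_chk, b_chk), np.random.randn(8))
print(np.allclose(A_chk.dot(x_chk), b_chk))   # expect True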
End of explanation
As, bs, x_true, x0 = make_data(4, 2, seed=0)
proj_data = list(map(factor, As, bs))
x = x0
r = []
for i in range(1000):
z = (proj(d,x) for d in proj_data)
x = average(*z)
r.append(np.linalg.norm(x_true-x))
plt.semilogy(r)
As, bs, x_true, x0 = make_data(4, 100, seed=0)
proj_data = list(map(factor, As, bs))
x = x0
r = []
for i in range(1000):
z = (proj(d,x) for d in proj_data)
x = average(*z)
r.append(np.linalg.norm(x_true-x))
plt.semilogy(r)
Explanation: Problem data
this algorithm has the same flavor as the thing I'd like to do, but actually converges very slowly
will take a very long time to converge for anything other than the smallest examples
don't worry if convergence plots look flat when dealing with 100s of rows
End of explanation
As, bs, x_true, x0 = make_data(4, 1000, seed=0)
%%time
proj_data = list(map(factor, As, bs))
x = x0
r = []
for i in range(10):
z = (proj(d,x) for d in proj_data)
x = average(*z)
r.append(np.linalg.norm(x_true-x))
Explanation: I'll fix the test data to something large enough so that each iteration's computational task is significant
just 10 iterations of the algorithm (along with the setup factorizations) in serial takes about a second on my laptop
End of explanation
As, bs, x_true, x0 = make_data(4, 3000, seed=0)
proj_data = list(map(factor, As, bs))
%%timeit -n1 -r50
a= list(map(lambda d: proj(d, x0), proj_data))
import concurrent.futures
from multiprocessing.pool import ThreadPool
ex = concurrent.futures.ThreadPoolExecutor(2)
pool = ThreadPool(2)
%timeit -n1 -r50 list(ex.map(lambda d: proj(d, x0), proj_data))
%timeit -n1 -r50 list(pool.map(lambda d: proj(d, x0), proj_data))
242/322.0
Explanation: parallel map
End of explanation
import dask
from dask import do, value, compute, visualize, get
from dask.imperative import Value
from dask.dot import dot_graph
from itertools import repeat
def enum_values(vals, name=None):
    """Create values with a name and a subscript."""
if not name:
raise ValueError('Need a name.')
return [value(v,name+'_%d'%i) for i,v in enumerate(vals)]
def rename(value, name):
    """Rename a Value."""
d = dict(value.dask)
d[name] = d[value.key]
del d[value.key]
return Value(name, [d])
def enum_map(func, *args, name=None):
    """Map `func` over `args` to create `Value`s with a name and a subscript."""
if not name:
raise ValueError('Need a name.')
values = (do(func)(*a) for a in zip(*args))
return [rename(v, name+'_%d'%i) for i, v in enumerate(values)]
def step(proj_data, xk, k=None):
    """One step of the projection iteration."""
if k is None:
sufx = '^k+1'
else:
sufx = '^%d'%k
z = enum_map(proj, proj_data, repeat(xk), name='z'+sufx)
xkk = do(average)(*z)
xkk = rename(xkk, 'x'+sufx)
return xkk
Explanation: Dask Solution
I create a few weird functions to have pretty names in dask graphs
End of explanation
lAs = enum_values(As, 'A')
lbs = enum_values(bs, 'b')
proj_data = enum_map(factor, lAs, lbs, name='proj_data')
visualize(*proj_data)
Explanation: Visualize
the setup step involving the matrix factorizations
End of explanation
pd_val = [pd.compute() for pd in proj_data]
xk = value(x0,'x^k')
xkk = step(pd_val, xk)
xkk.visualize()
Explanation: visualize one iteration
End of explanation
x = value(x0,'x^0')
for k in range(3):
x = step(proj_data, x, k+1)
x.visualize()
Explanation: The setup step along with 3 iterations gives the following dask graph. (Which I'm showing mostly because it was satisfying to make.)
End of explanation
proj_data = enum_map(factor, As, bs, name='proj_data')
proj_data = compute(*proj_data)
x = value(0,'x^k')
x = step(proj_data, x)
dsk_step = x.dask
dot_graph(dsk_step)
dask.set_options(get=dask.threaded.get) # multiple threads
#dask.set_options(get=dask.async.get_sync) # single thread
%%time
# do one-time computation of factorizations
proj_data = enum_map(factor, As, bs, name='proj_data')
# realize the computations, so they aren't recomputed at each iteration
proj_data = compute(*proj_data)
# get dask graph for reuse
x = value(x0,'x^k')
x = step(proj_data, x)
dsk_step = x.dask
K = 100
r = []
for k in range(K):
dsk_step['x^k'] = get(dsk_step, 'x^k+1')
r.append(np.linalg.norm(x_true-dsk_step['x^k']))
%%time
# serial execution
proj_data = list(map(factor, As, bs))
x = x0
K = 100
r = []
for i in range(K):
z = (proj(d,x) for d in proj_data)
x = average(*z)
r.append(np.linalg.norm(x_true-x))
Explanation: Reuse dask graph
Obviously, it's not efficient to make a huge dask graph, especially if I'll be doing thousands of iterations.
I really just want to create the dask graph for computing $x^{k+1}$ from $x^k$ and re-apply it at every iteration.
Is it more efficient to create that dask graph once and reuse it? Maybe that's a premature optimization... I'll do it anyway for fun.
End of explanation
%%time
# do one-time computation of factorizations
proj_data = enum_map(factor, As, bs, name='proj_data')
# realize the computations, so they aren't recomputed at each iteration
proj_data = compute(*proj_data, get=dask.threaded.get, num_workers=2)
# get dask graph for reuse
x = value(x0,'x^k')
x = step(proj_data, x)
dsk_step = x.dask
K = 100
r = []
for k in range(K):
dsk_step['x^k'] = dask.threaded.get(dsk_step, 'x^k+1', num_workers=2)
r.append(np.linalg.norm(x_true-dsk_step['x^k']))
Explanation: iterative projection algorithm thoughts
I don't see any performance gain in using the threaded scheduler, but I don't see what I'm doing wrong here
I don't see any difference in runtime switching between dask.set_options(get=dask.threaded.get) and dask.set_options(get=dask.async.get_sync); not sure if it's actually changing the scheduler, but I haven't looked into it closely
random Dask thoughts
would be nice to be able to rename a Value and change the data that it points to
at least for visualizing, I wanted more control over intermediate value names and the ability to enumerate those names with subscripts, which led to my kludgy functions above. probably not a high-priority item for you tho...
would be nice if dask.visualize also worked on Dask graph dictionaries, so I don't have to remember dask.dot.dot_graph
Dask attempt 2
I tried a different method for using the threaded scheduler, but got similar results
End of explanation
%%time
# do one-time computation of factorizations
proj_data = enum_map(factor, As, bs, name='proj_data')
# realize the computations, so they aren't recomputed at each iteration
proj_data = compute(*proj_data)
# get dask graph for reuse
x = value(x0,'x^k')
x = step(proj_data, x)
dsk_step = x.dask
K = 100
r = []
for k in range(K):
dsk_step['x^k'] = get(dsk_step, 'x^k+1', num_workers=2)
r.append(np.linalg.norm(x_true-dsk_step['x^k']))
%%time
# do one-time computation of factorizations
proj_data = enum_map(factor, As, bs, name='proj_data')
# realize the computations, so they aren't recomputed at each iteration
proj_data = compute(*proj_data)
# get dask graph for reuse
x = value(x0,'x^k')
x = step(proj_data, x)
dsk_step = x.dask
K = 100
r = []
for k in range(K):
dsk_step['x^k'] = get(dsk_step, 'x^k+1', num_workers=2)
r.append(np.linalg.norm(x_true-dsk_step['x^k']))
np.__config__.show()
Explanation: Runtime error
As I was experimenting and switching schedulers and between my first and second dask attempts, I would very often get the following "can't start new thread" error
I would also occasionally get a "TypeError: get_async() got multiple values for argument 'num_workers'" even though I had thought I'd set dask.set_options(get=dask.threaded.get)
End of explanation |
12,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
verify pyEMU null space projection with the freyberg problem
Step1: instaniate pyemu object and drop prior info. Then reorder the jacobian and save as binary. This is needed because the pest utilities require strict order between the control file and jacobian
Step2: Draw some vectors from the prior and write the vectors to par files
Step3: Run pnulpar
Step4: Now for pyemu | Python Code:
%matplotlib inline
import os
import shutil
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
Explanation: verify pyEMU null space projection with the freyberg problem
End of explanation
mc = pyemu.MonteCarlo(jco="freyberg.jcb",verbose=False,forecasts=[])
mc.drop_prior_information()
jco_ord = mc.jco.get(mc.pst.obs_names,mc.pst.par_names)
ord_base = "freyberg_ord"
jco_ord.to_binary(ord_base + ".jco")
mc.pst.control_data.parsaverun = ' '
mc.pst.write(ord_base+".pst")
Explanation: instantiate pyemu object and drop prior info. Then reorder the jacobian and save as binary. This is needed because the pest utilities require strict order between the control file and jacobian
End of explanation
# setup the dirs to hold all this stuff
par_dir = "prior_par_draws"
proj_dir = "proj_par_draws"
parfile_base = os.path.join(par_dir,"draw_")
projparfile_base = os.path.join(proj_dir,"draw_")
if os.path.exists(par_dir):
shutil.rmtree(par_dir)
os.mkdir(par_dir)
if os.path.exists(proj_dir):
shutil.rmtree(proj_dir)
os.mkdir(proj_dir)
mc = pyemu.MonteCarlo(jco=ord_base+".jco")
# make some draws
mc.draw(10)
#for i in range(10):
# mc.parensemble.iloc[i,:] = i+1
#write them to files
mc.parensemble.index = [str(i+1) for i in range(mc.parensemble.shape[0])]
mc.parensemble.to_parfiles(parfile_base)
mc.parensemble.shape
Explanation: Draw some vectors from the prior and write the vectors to par files
End of explanation
exe = os.path.join("pnulpar.exe")
args = [ord_base+".pst","y","1","y","pnulpar_qhalfx.mat",parfile_base,projparfile_base]
in_file = os.path.join("misc","pnulpar.in")
with open(in_file,'w') as f:
f.write('\n'.join(args)+'\n')
os.system(exe + ' <'+in_file)
pnul_en = pyemu.ParameterEnsemble(mc.pst)
parfiles =[os.path.join(proj_dir,f) for f in os.listdir(proj_dir) if f.endswith(".par")]
pnul_en.read_parfiles(parfiles)
pnul_en.loc[:,"fname"] = pnul_en.index
pnul_en.index = pnul_en.fname.apply(lambda x:str(int(x.split('.')[0].split('_')[-1])))
f = pnul_en.pop("fname")
pnul_en.sort_index(axis=1,inplace=True)
pnul_en.sort_index(axis=0,inplace=True)
pnul_en
Explanation: Run pnulpar
End of explanation
print(mc.parensemble.istransformed)
mc.parensemble._transform()
en = mc.project_parensemble(nsing=1,inplace=False)
print(mc.parensemble.istransformed)
#en._back_transform()
en.sort_index(axis=1,inplace=True)
en.sort_index(axis=0,inplace=True)
en
#pnul_en.sort(inplace=True)
#en.sort(inplace=True)
diff = 100.0 * np.abs(pnul_en - en) / en
#diff[diff<1.0] = np.NaN
dmax = diff.max(axis=0)
dmax.sort_index(ascending=False,inplace=True)
dmax.plot(figsize=(10,10))
diff
en.loc[:,"wf6_2"]
pnul_en.loc[:,"wf6_2"]
Explanation: Now for pyemu
End of explanation |
12,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2. Gender Detection
Figuring out genders from names
We're going to use 3 different methods, all of which use a similar philosophy. Essentially, each of these services have build databases from datasets where genders are known or can be identified. For example, national census data and social media profiles.
GenderDetector can be run locally, but only provides "male", "female" or "unknown", and has a limited number of names in the database.
genderize.io and Gender API are web services that allow us to query names and return genders
Each of these services provides a "probability" that the gender is correct (so if "Jamie" shows up 80 times in their data as a female name, and 20 times as a male name, they'll say it's "female" with a probability of 0.8)
They also tell us how certain we can be of that gender by telling us how many times that name shows up (in the above example, the count would be 100. This is useful because some names might only have 1 or 2 entries, in which case a 100% probability of being male would be less reliable than a name that has 1000 entries.
The web APIs have superior data, but the problem is that they are services that require you to pay if you make more than a certain number of queries in a short period of time. The owners of both services have generously provided me with enough queries to do this research for free.
Getting names to query
First, we'll take the names from our pubmed queries and collapse them into sets. We don't really need to query the
name "John" a thousand times - once will do. I'm going to loop through the csv we wrote out in the last section and pull the fourth column, which contains our author name.
Step1: Then we'll convert the list to a set, which is an unordered array of unique values (so it removes duplicates)
Step2: Here's a function that does the same thing.
Step3: The set.union() function will merge 2 sets into a single set, so we'll do this with our other datasets.
Step4: Getting genders from names
GenderDetector
First up - GenderDetector. The usage is pretty straighforward
Step5: Output datasets
Step6: Genderize.io
This one is a bit more complicated, since we have to make a call to the web api, and then parse the json that's returned. Happily, someone already wrote a python package to do most of the work. We can query 10 names at a time rather than each one individually, and we'll get back a list of dictionaries, one for each query
Step7: Gender-API
This is a similar service, but I didn't find a python package for it. Thankfully, it's pretty easy too. The following code is for python2, but you can find the python3 code on the website. The value that gets returned comes in the form of a dictionary as well
import os
os.chdir("../data/pubdata")
names = []
with open("comp.csv") as infile:
for line in infile:
names.append(line.split(",")[5])
Explanation: 2. Gender Detection
Figuring out genders from names
We're going to use 3 different methods, all of which use a similar philosophy. Essentially, each of these services have build databases from datasets where genders are known or can be identified. For example, national census data and social media profiles.
GenderDetector can be run locally, but only provides "male", "female" or "unknown", and has a limited number of names in the database.
genderize.io and Gender API are web services that allow us to query names and return genders
Each of these services provides a "probability" that the gender is correct (so if "Jamie" shows up 80 times in their data as a female name, and 20 times as a male name, they'll say it's "female" with a probability of 0.8)
They also tell us how certain we can be of that gender by telling us how many times that name shows up (in the above example, the count would be 100. This is useful because some names might only have 1 or 2 entries, in which case a 100% probability of being male would be less reliable than a name that has 1000 entries.
The web APIs have superior data, but the problem is that they are services that require you to pay if you make more than a certain number of queries in a short period of time. The owners of both services have generously provided me with enough queries to do this research for free.
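To make the probability/count bookkeeping concrete, here is a tiny illustrative sketch (my addition, with made-up numbers) of how such a service turns raw tallies for one name into a gender, a probability and a count:
tally = {"female": 80, "male": 20}            # hypothetical tallies for one name
count = sum(tally.values())
gender = max(tally, key=tally.get)
probability = tally[gender] / float(count)    # 0.8 for this toy example
print(gender, probability, count)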
Getting names to query
First, we'll take the names from our pubmed queries and collapse them into sets. We don't really need to query the
name "John" a thousand times - once will do. I'm going to loop through the csv we wrote out in the last section and pull the fourth column, which contains our author name.
End of explanation
print(len(names))
names = set(names)
print(len(names))
Explanation: Then we'll convert the list to a set, which is an unordered array of unique values (so it removes duplicates)
End of explanation
def get_unique_names(csv_file):
names = []
with open(csv_file) as infile:
for line in infile:
names.append(line.split(",")[5])
return set(names)
Explanation: Here's a function that does the same thing.
End of explanation
names = names.union(get_unique_names("bio.csv"))
all_names = names   # keep the combined set under the name used in the rest of the notebook
print(len(all_names))
Explanation: The set.union() function will merge 2 sets into a single set, so we'll do this with our other datasets.
End of explanation
from gender_detector import GenderDetector
detector = GenderDetector('us')
print(detector.guess("kevin"))
print(detector.guess("melanie"))
print(detector.guess("ajasja"))
gender_dict = {}
counter = 0
for name in names:
try:
gender = detector.guess(name)
gender_dict[name] = gender
except:
print(name)
print(len(gender_dict))
print(sum([1 for x in gender_dict if gender_dict[x] == 'unknown']))
print(sum([1 for x in gender_dict if gender_dict[x] != 'unknown']))
Explanation: Getting genders from names
GenderDetector
First up - GenderDetector. The usage is pretty straighforward:
End of explanation
import json
with open("GenderDetector_genders.json", "w+") as outfile:
outfile.write(json.dumps(gender_dict, indent=4))
Explanation: Output datasets
End of explanation
from api_keys import genderize_key
from genderize import Genderize
all_names = list(all_names)
genderize = Genderize(
user_agent='Kevin_Bonham',
api_key=genderize_key)
genderize_dict = {}
for i in range(0, len(all_names), 10):
query = all_names[i:i+10]
genders = genderize.get(query)
for gender in genders:
n = gender["name"]
g = gender["gender"]
if g != None:
p = gender["probability"]
c = gender["count"]
else:
p = None
c = 0
genderize_dict[n] = {"gender":g, "probability":p, "count": c}
with open("genderize_genders.json", "w+") as outfile:
outfile.write(json.dumps(genderize_dict, indent=4))
print(len(genderize_dict))
print(sum([1 for x in genderize_dict if genderize_dict[x]["gender"] == 'unknown']))
print(sum([1 for x in genderize_dict if genderize_dict[x]["gender"] != 'unknown']))
Explanation: Genderize.io
This one is a bit more complicated, since we have to make a call to the web api, and then parse the json that's returned. Happily, someone already wrote a python package to do most of the work. We can query 10 names at a time rather than each one individually, and we'll get back a list of dictionaries, one for each query:
[{u'count': 1037, u'gender': u'male', u'name': u'James', u'probability': 0.99},
{u'count': 234, u'gender': u'female', u'name': u'Eva', u'probability': 1.0},
{u'gender': None, u'name': u'Thunderhorse'}]
I will turn that into a dictionary of dictionaries, where the name is the key, and the other elements are stored under them. Eg:
{
u'James':{
u'count': 1037,
u'gender': u'male',
u'probability': 0.99
},
u'Eva':{
u'count': 234,
u'gender': u'female',
u'probability': 1.0
},
u'Thunderhorse':{
u'count: 0,
u'gender': None,
u'probability': None
}
}
Note:
I've got an API key stored in a separate file called api_keys.py (that I'm not putting on git because you can't have my queries!) that looks like this:
genderize_key = "s0m3numb3rsandl3tt3rs"
genderAPI_key = "0th3rnumb3rsandl3tt3rs"
You can get a key from both services for free, but you'll be limited in the number of queries you can make. Just make a similar file, or add them in below in place of the proper variables.
End of explanation
from api_keys import genderAPI_key
import urllib2
genderAPI_dict = {}
counter = 0
for i in range(counter, len(all_names), 20):
names = all_names[i:i+20]
query = ";".join(names)
data = json.load(urllib2.urlopen("https://gender-api.com/get?key={}&name={}".format(genderAPI_key, query)))
for r in data['result']:
n = r["name"]
g = r["gender"]
if g != u"unknown":
p = float(r["accuracy"]) / 100
c = r["samples"]
else:
p = None
c = 0
genderAPI_dict[n] = {"gender":g, "probability":p, "count": c}
with open("../data/pubs/genderAPI_genders.json", "w+") as outfile:
outfile.write(json.dumps(genderAPI_dict, indent=4))
Explanation: Gender-API
This is a similar service, but I didn't find a python package for it. Thankfully, it's pretty easy too. The following code is for python2, but you can find the python3 code on the website. The vaule that gets returned comes in the form of a dictionary as well:
{u'accuracy': 99,
u'duration': u'26ms',
u'gender': u'male',
u'name': u'markus',
u'samples': 26354}
Which I'll convert to the same keys and value types used from genderize above (eg. "probability" instead of "accuracy", "count" instead of "samples", and 0.99 instead of 99),
End of explanation |
12,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trace Analysis Examples
Tasks Latencies
This notebook shows the features provided for task latency profiling. It will be necessary to collect the following events
Step1: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
Step2: Workload Configuration and Execution
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
Step3: Parse Trace and Profiling Data
Step4: Trace visualization
Step5: Latency Analysis
Latency DataFrames
Step6: Latency Plots
Step7: Activations Analysis
Activations DataFrames
Step8: Activations Plots
Step9: Runtimes Analysis
Runtimes DataFrames
Step10: Runtimes Plots | Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%matplotlib inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
# Support for workload generation
from wlgen import RTA, Ramp
# Support for trace analysis
from trace import Trace
# Support for plotting
import numpy
import pandas as pd
import matplotlib.pyplot as plt
import trappy
Explanation: Trace Analysis Examples
Tasks Latencies
This notebook shows the features provided for task latency profiling. It will be necessary to collect the following events:
Details on idle states profiling are given in Latency DataFrames and Latency Plots below.
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
"password" : 'juno',
# Folder where all the results will be collected
"results_dir" : "TraceAnalysis_TaskLatencies",
# Define devlib modules to load
"exclude_modules" : [ 'hwmon' ],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_load_avg_cpu",
"sched_load_avg_task",
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
"rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
Explanation: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
def experiment(te):
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# FTrace the execution of this workload
te.ftrace.start()
rtapp.run(out_dir=te.res_dir)
te.ftrace.stop()
# Collect and keep track of the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
# Collect and keep track of the Kernel Functions performance data
stats_file = os.path.join(te.res_dir, 'trace.stats')
te.ftrace.get_stats(stats_file)
# Dump platform descriptor
te.platform_dump(te.res_dir)
experiment(te)
Explanation: Workload Configuration and Execution
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
End of explanation
# Base folder where tests folder are located
res_dir = te.res_dir
logging.info('Content of the output folder %s', res_dir)
!tree {res_dir}
with open(os.path.join(res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
trace_file = os.path.join(res_dir, 'trace.dat')
trace = Trace(platform, trace_file, events=my_conf['ftrace']['events'])
Explanation: Parse Trace and Profiling Data
End of explanation
trappy.plotter.plot_trace(trace.ftrace)
Explanation: Trace visualization
End of explanation
print trace.data_frame.latency_df.__doc__
# Report full set of task status informations available from the trace
trace.data_frame.latency_df('ramp').head()
# Report information on sched_switch events
df = trace.data_frame.trace_event('sched_switch')
df.head()
print trace.data_frame.latency_wakeup_df.__doc__
# Report WAKEUP events and their duration
trace.data_frame.latency_wakeup_df('ramp').head()
print trace.data_frame.latency_preemption_df.__doc__
# Report PREEMPTION events and their duration
trace.data_frame.latency_preemption_df('ramp').head()
Explanation: Latency Analysis
Latency DataFrames
End of explanation
print trace.analysis.latency.plotLatency.__doc__
# Plot latency events for a specified task
latency_stats_df = trace.analysis.latency.plotLatency('ramp')
# Plot statistics on task latencies
latency_stats_df.T
print trace.analysis.latency.plotLatencyBands.__doc__
# Plot latency events for a specified task
trace.analysis.latency.plotLatencyBands('ramp')
# Zoom into a spefific time frame
trace.setXTimeRange(4.28,4.29)
trace.analysis.latency.plotLatencyBands('ramp')
Explanation: Latency Plots
End of explanation
print trace.data_frame.activations_df.__doc__
# Report the sequence of activations intervals:
# Time: wakeup time
# activation_internal: time interval wrt previous wakeup
trace.data_frame.activations_df('ramp').head()
Explanation: Activations Analysis
Activations DataFrames
End of explanation
print trace.analysis.latency.plotActivations.__doc__
# Plot activation internvals for a specified task
activations_df = trace.analysis.latency.plotActivations('ramp', threshold_ms=120)
# Plot statistics on task activation intervals
activations_df.T
Explanation: Activations Plots
End of explanation
print trace.data_frame.runtimes_df.__doc__
# Report the sequence of running times:
# Time: task block time (i.e. sleep or exit)
# running_time: cumulative running times since last wakeup event
trace.data_frame.runtimes_df('ramp').head()
Explanation: Runtimes Analysis
Runtimes DataFrames
End of explanation
print trace.analysis.latency.plotRuntimes.__doc__
# Plot activation internvals for a specified task
runtimes_df = trace.analysis.latency.plotRuntimes('ramp', threshold_ms=120)
# Plot statistics on task running times
runtimes_df.T
Explanation: Runtimes Plots
End of explanation |
12,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic Survival Analysis
Step1: The next step is to explore the dataset
Step2: We can see that Passenger ID, Name and Cabin have little value to the analysis, so we drop these columns off the dataset
Step3: Data cleaning
Step4: Someone's family size would be equal to their number of spouses/siblings and parents/children on the ship, plus themselves
Step5: Now we would extract the survived dataset for future analysis
Step6: Questions
Step7: We can see that male adults are the initial largest type of people on the ship, followed by female adults and child.
Now looking into the survival rate
Step8: Comparing to the initial number of people of each type, we can see that children have more than 50% survival rate, female adults have an impressive survival rate around 75%, while male adults have a small survival rate of around 16% comparing to their intial numbers. So we can see that there was an inherent "women and children first" code when it came to saving people on the ship.
Step9: The histogram of the age distribution of the survival group also confirms that younger people had a higher advantage in survival comparing to older ages.
Socio-economic classes
Step10: Approximately 55% of the passengers belonged to the third class, while the rest of the ship belong to the first and second classes. Now we'll see if the first and second class passengers also paid a premimum when it comes to safety?
Step11: The survival rate of the first class passengers was more than 60%, while the survival rate of the third class ones was merely around 25%. So we can see that there was a bias on weathiness and soci-economic statuses, even in life-threatning situations.
Now, what if we factor in both passenger classes and types (male, female or children), which would have more weight in survival rate?
Step12: We can see the women and children of the first class had a significantly impressive survival rate (more than 90% and 80% respectively), when the women and children of the third class had a much lower survival rate (more than 45% and around 40% respectively). However, the women and children from the third class did have a higher survival rate than the men from higher classes. Men from the first class had a survival rate of around 35%, which was actually below the overall survival rate of 38.38%. Men from the second and third classes suffered very low survival rates, which was around 8 % and around 12 % respectively comparing to their initial numbers.
Family size
Step13: We can see that the majority of the ship traveled by themselves, followed by families of 2 or 3. The families that had more than 3 members made up a small part of the ship. Now look into the survival statistics | Python Code:
# Import the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
# Read the csv file
titanic = pd.read_csv("titanic-data.csv")
Explanation: Titanic Survival Analysis:
First steps:
First, we need to import all the libraries needed for the analysis and load the data file:
End of explanation
titanic.shape
titanic.columns
titanic
Explanation: The next step is to explore the dataset:
End of explanation
titanic = titanic.drop(['PassengerId','Name','Ticket', 'Cabin', 'Embarked'], axis=1)
titanic['Survived'].describe()
Explanation: We can see that Passenger ID, Name and Cabin have little value to the analysis, so we drop these columns off the dataset:
End of explanation
titanic['Age'].describe()
average_age = titanic["Age"].mean()
std_age = titanic["Age"].std()
count_nan_age = titanic["Age"].isnull().sum()
# generate random numbers between (mean - std) & (mean + std)
rand = np.random.randint(average_age - std_age, average_age + std_age, size = count_nan_age)
# Fill NaNs in Age with the random values generated above
titanic['Age'][np.isnan(titanic["Age"])] = rand
titanic['Age'].describe()
sns.distplot(titanic['Age'])
plt.ylabel('Distribution')
plt.title("The distribution of ages of Titanic passengers")
plt.show()
Explanation: Data cleaning:
We can see that the Age column has a lot of NAs. We need to fill in the blanks with random values drawn from within one standard deviation of the mean age.
End of explanation
# Family size
titanic['Family_size'] = titanic['SibSp'] + titanic['Parch'] + 1
Explanation: Someone's family size would be equal to their number of spouses/siblings and parents/children on the ship, plus themselves:
End of explanation
survived = titanic[titanic['Survived'] == 1]
Explanation: Now we would extract the survived dataset for future analysis:
End of explanation
def passenger_type(person):
if person['Age'] <= 16:
return "child"
elif person['Sex'] == "female":
return "female_adult"
else:
return "male_adult"
titanic['Type'] = titanic.apply(passenger_type, axis = 1)
titanic
titanic['Type'].value_counts()
sns.set(style="darkgrid")
ax = sns.countplot(x="Type", data = titanic)
plt.title("Number of passengers sorted by type")
plt.show()
Explanation: Questions:
According to Wikipedia, "Women and children first" is a code of conduct dating from 1860, whereby the lives of women and children were to be saved first in a life-threatening situation, typically abandoning ship, when survival resources such as lifeboats were limited. The wiki page actually gives some insights and statistics on the survival rate of the Titanic; however, in this analysis, I would reconfirm them, and attempt to find out which other factors that determine the survival rate in the Titanic tragedy.
The questions I am going to answer in this analysis are:
Was there really a "Women and children first" rule on the Titanic?
Did other factors such as wealth/classes and family sizes affect someone's chance of survival?
Women and children first?
Assuming people are neutral on the gender of a kid, I would split the passengers into 3 types:
End of explanation
survived = titanic[titanic['Survived'] == 1]
non_survived = titanic[titanic['Survived'] == 0]
survived['Type'].value_counts()
non_survived['Type'].value_counts()
sns.set(style="darkgrid")
ax = sns.countplot(x="Survived", hue = "Type", data = titanic)
plt.title("Numbers of survivals and non-survivals, sorted by type")
plt.show()
Explanation: We can see that male adults are the initial largest type of people on the ship, followed by female adults and child.
Now looking into the survival rate:
End of explanation
sns.distplot(survived['Age'])
plt.ylabel("Distribution")
plt.title("The distribution of ages of Titanic survivals")
plt.show()
Explanation: Compared to the initial number of people of each type, we can see that children have a survival rate of more than 50%, female adults have an impressive survival rate of around 75%, while male adults have a small survival rate of around 16% relative to their initial numbers. So we can see that there was an inherent "women and children first" code when it came to saving people on the ship.
End of explanation
titanic['Pclass'].value_counts()
sns.set(style="darkgrid")
ax = sns.countplot(x = "Pclass", data = titanic)
plt.xlabel("Passenger classes")
plt.title("Number of passengers, sorted by passenger classes")
plt.show()
Explanation: The histogram of the age distribution of the survival group also confirms that younger people had a higher advantage in survival comparing to older ages.
Socio-economic classes:
We can assume that someone's class on the Titanic represented their socio-economic status. Also, we would assume that the fares have a direct correlation with the classes; so we only need to examine one of them.
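A quick way to check that assumption (my addition, using the titanic DataFrame already loaded above) is to look at the relationship between fare and class directly:
# a negative correlation is expected: first class (Pclass = 1) pays the highest fares
print(titanic[['Fare', 'Pclass']].corr())
print(titanic.groupby('Pclass')['Fare'].mean())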
End of explanation
survived['Pclass'].value_counts()
sns.set(style="darkgrid")
ax = sns.countplot(x = "Pclass", hue = "Survived", data = titanic)
plt.xlabel("Passenger classes")
plt.title("Number of survivals and non-survivals, sorted by passenger classes")
plt.show()
Explanation: Approximately 55% of the passengers belonged to the third class, while the rest of the ship belonged to the first and second classes. Now we'll see whether the premium the first and second class passengers paid also bought them better odds when it came to safety.
End of explanation
titanic.groupby(['Pclass', 'Type']).Type.count()
sns.set(style="darkgrid")
ax = sns.countplot(x = "Pclass", hue = "Type", data = titanic)
plt.title("Number of people per type, sorted by passenger classes")
plt.show()
titanic.groupby(['Pclass', 'Type']).agg({'Survived': 'sum'})
sns.set(style="darkgrid")
ax = sns.countplot(x = "Pclass", hue = "Type", data = survived)
plt.title("Number of survivals and non-survivals per type, sorted by passenger classes")
plt.show()
Explanation: The survival rate of the first class passengers was more than 60%, while the survival rate of the third class ones was merely around 25%. So we can see that there was a bias towards wealth and socio-economic status, even in life-threatening situations.
Now, what if we factor in both passenger classes and types (male, female or children), which would have more weight in survival rate?
End of explanation
titanic['Family_size'].value_counts()
Explanation: We can see the women and children of the first class had a remarkably high survival rate (more than 90% and 80% respectively), while the women and children of the third class had a much lower survival rate (more than 45% and around 40% respectively). However, the women and children from the third class did have a higher survival rate than the men from the higher classes. Men from the first class had a survival rate of around 35%, which was actually below the overall survival rate of 38.38%. Men from the second and third classes suffered very low survival rates, around 8% and 12% respectively compared to their initial numbers.
Family size:
Did people have a higher chance of survival if they traveled with family rather than traveling alone? We'll find out.
End of explanation
survived['Family_size'].value_counts()
sns.boxplot(x="Survived", y="Family_size", data=titanic)
plt.title("The distribution of family sizes of non-survivals and survivals")
plt.show()
sns.kdeplot(survived['Family_size'], shade=True)
plt.ylabel("Distribution")
plt.xlabel("Family size")
plt.title("The distribution of family sizes of non-survivals and survivals")
plt.show()
Explanation: We can see that the majority of the passengers traveled by themselves, followed by families of 2 or 3. The families with more than 3 members made up a small part of the ship. Now let's look into the survival statistics:
End of explanation |
12,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is a script (a screenplay of sorts) for introducing basic features of Jupyter, Python, Pandas and matplotlib, to get a feel for working with these libraries. The example is therefore chosen so that we work through typical tasks of a data analysis. In terms of content, however, this analysis is not representative, since it merely presents simple statistics about a Git repository.
Jupyter
First of all, let's take a closer look at Jupyter. This is Jupyter, the interactive notebook environment for programming. Here we see a cell in which we can enter Python code. Let's simply type in the string "Hello World" and execute the cell with the key combination Ctrl + Enter.
Step1: The result is immediately visible below the cell. Now let's create another cell. This works by pressing the ESC key followed by the letter b. Alternatively, at the end of a notebook we can execute a cell with Shift + Enter and immediately create a new cell.
Here we immediately see an important peculiarity of Jupyter
Step2: We can explore the further functionality of a library by looking at the methods and attributes of a class or an object. To do this, in our string example we type text. and use Jupyter's built-in autocompletion via the Tab key to see which methods the object we are currently using offers. If we then move down with the arrow key or, for example, type the first letters of upper, press Enter and finally Shift + Tab, the signature of the corresponding functionality and an excerpt of its help documentation appear. Pressing Shift + Tab twice shows the full help. By calling upper() on our text variable we can have our text written in upper case.
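A minimal snippet matching this description (added here for illustration):
text = "Hello World"
text.upper()    # -> 'HELLO WORLD'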
Step3: The interactive source code documentation also helps us find out which arguments we can pass to a method in addition to the normal parameters.
Step4: Git-Historienanalyse
In diesem Notebook wollen wir uns die Entwicklungsgeschichte des Spring Framework Beispielprojekts "Spring PetClinic" anhand der Historie des dazugehörigen Git-Repositories ein wenig genauer ansehen.
Das GitHub-Repository https
Step5: Ob das Importieren des Moduls auch wirklich funktioniert hat, können wir prüfen, in dem wir mit dem pd-Modul arbeiten. Dazu hängen wir an die pd-Variable den ? Operator an und führen die Zelle aus. Es erscheint die Dokumentation des Moduls im unteren Bereich des Notebooks. Diesen Bereich können wir durchlesen und mit der Taste ESC auch wieder verschwinden lassen.
Step6: Danach lesen wir die oben beschriebene CSV-Datei timestamp_author.csv ein und speichern das Ergebnis in der Variable git_log. Neben dem Dateinamen müssen wir zusätzlich über das Argument names noch eine Liste an Namen für die Kopfzeile mitgeben, da unsere Git-Log-Datei keine entsprechende Kopfzeile besitzt.
Wir haben nun die Daten in einen DataFrame (so etwas ähnliches wie ein programmierbares Excel-Arbeitsblatt) geladen, der in unserem Fall aus zwei Series (in etwa Spalten) besteht. Auf den DataFrame können wir nun Operationen ausführen. Z. B. können wir uns mittels head() die fünf ersten Einträge anzeigen lassen.
Step7: Als nächstes rufen wir info() auf den DataFrame auf, um einige Eckdaten über die eingelesenen Daten zu erhalten.
Step8: Den Zugriff auf die einzelnen Series können wir mittels der Schreibeweise [<spaltenname>] oder (in den meisten Fällen) per direkter Nutzung des Namens der Series erreichen.
Step9: Auch auf einer Series selbst können wir verschiedene Operationen ausführen. Z. B. können wir mit value_counts() die in einer Series enthaltenen Werte zählen und nach gleichzeitig nach ihrer Häufigkeit sortieren lassen. Das Ergebnis ist wieder eine Series, diesmal aber mit den zusammengezählten und sortieren Werten. Auf diese Series können wir zusätzlich ein head(10) aufrufen. So erhalten wir eine schnelle Möglichkeit, die TOP-10-Werte einer Series anzeigen zu lassen. Das Ergebnis können wir dann in einer Variable top10 festhalten und ausgeben lassen, in dem wir die Variable in die nächste Zellenzeile schreiben.
Step10: Plotten/Visualisierung
Als nächstes wollen wir das Ergebnis visualisieren bzw. plotten. Um die das Plot-Ergebnis der intern verwendeten Plotting-Bibliothek matplotlib direkt im Notebook anzuzeigen, müssen wir Jupyter dies mit dem Magic-Kommando
%matplotlib inline
vor dem Aufruf der plot() Methode mitteilen.
Standardmäßig wird beim Aufruf von plot() auf einen DataFrame oder einer Series ein Liniendiagramm erstellt.
Step11: Das macht hier wenig Sinn, weshalb wir mittels einer Untermethode von plot namens bar() ein Balkendiagramm erzeugen lassen.
Step12: Für diese Daten bietet sich auch eine Visualisierung als Tortendiagramm an. Hierfür rufen wir statt bar() die Methode pie() auf.
Step13: Das Diagramm sieht hier jedoch nicht sehr schön aus.
Mit den optionalen Styling-Parametern können wir erreichen, dass wir eine schönere Grafik angezeigt bekommen. Wir verwenden dazu
* figsize=[7,7] als Größenangabe
* title="Top 10 Autoren" als Titel
* labels=None, um die überflüssige Beschriftung nicht anzuzeigen.
Step14: Extraktion von Informationen
Nun widmen wir uns den Zeitstempelangaben. Wir wollen anhand dieser ungefähr herausfinden, wo die meisten Entwickler wohnen. Dazu extrahieren wir die Informationen über den Zeitstempel in timestamp in eine neue Spalte/Series mittels der split()-Funktion, welche uns von den str / String-Funktionen einer Series bereitgestellt wird. Die split()-Funktion benötigt als erstes, optionales Argument das Trennzeichen (standardmäßig ist dies das Leerzeichen) zur Trennung des Strings. Zusätzlich geben wir mit expand=True mit, dass wir als Rückgabewert einen DataFrame haben möchten. Damit können wir mit Hilfe der Selektion einer beliebigen Spalte den für uns interessanten Teil des DataFrames auswählen (in unserem Fall [5]).
Step15: Analog zu den TOP 10 Autoren können wir nun die TOP 10 Zeitzonen ausgeben lassen.
Step16: Arbeiten mit Datumsangaben
Bevor wir in die Welt der Zeitreihenverarbeitung einsteigen können, müssen wir unsere Spalte mit den Datumsangabe zuerst in den passenden Datentyp umwandeln. Zurzeit ist unsere Spalte timestamp noch ein String, also von textueller Natur. Wir können dies sehen, in dem wir uns mittels der Helferfunktion type(<object>) den ersten Eintrag der timestamp-Spalte anzeigen lassen
Step17: Pandas konvertiert standardmäßig automatisch die Zeitzonen, damit wir uns um nichts mehr kümmern müssen. In unserem Fall ist das aber schlecht, da wir die jeweilige lokale Zeit eines Commits erhalten wollen. Daher schneiden wir kurzerhand die Angabe über die Zeitzone ab. Mittels der str-Funktion und dem passenden Selektor [
Step18: Beim Umwandeln von Datentypen hilft uns Pandas natürlich ebenfalls. Die Funktion pd.to_datetime nimmt als ersten Parameter eine Series mit Datumsangaben entgegen und wandelt diese um. Als Rückgabewert erhalten wir entsprechend eine Series vom Datentype Timestamp. Die Umwandlung funktioniert für die meisten textuellen Datumsangaben auch meistens automagisch, da Pandas mit unterschiedlichesten Datumsformaten umgehen kann.
Step19: Ob die Umwandlung erfolgreich war, können wir mit einem nochmaligen Aufruf von type() auf den ersten Wert unserer umgewandelten Spalte timestamp_local überprüfen.
Step20: Nun haben wir einen neuen Datentyp Timestamp in der timestamp_local erhalten, der uns Berechnungen mit Zeitangaben erheblich vereinfacht. Z. B. können wir nun mittels eines einfachen >=-Vergleichs herausfinden, welche Commits nach dem 01.02.2018 stattgefunden haben.
Step21: Wir können nun auch auf einzelne Bestandteile der Datumsangaben zugreifen. Dazu verwenden wir das dt-Objekt ("datetime") und können auf dessen Eigenschaften wie etwa hour zurückgreifen.
Step22: Zusammen mit der bereits oben vorgestellten value_counts()-Methode können wir nun wieder Werte zählen lassen. Wichtig ist hier jedoch, dass wir zusätzlich den Parameter sort=False setzen, um die sortierung nach Mengenangaben zu vermeiden.
Step23: Das Ergebnis können wir entsprechend mittels eines Balkendiagramms ausgeben und erhalten so eine Übersicht, zu welcher Tageszeit Quellcode committet wird.
Step24: Wir beschriften nun zusätzlich die Grafik. Dazu speichern wir uns das Rückgabeobjekt der bar()-Funktion in der Variable ax. Hierbei handelt es sich um ein Axes-Objekt der darunterliegenden Plotting-Bibliothek matplotlib, durch das wir zusätzliche Eigenschaften des Plots beliebig anpassen können. Wir setzen hier
den Titel über set_title(<titelname>)
die Beschriftung der X-Achse mit set_xlabel(<x_achsenname>) und
die Beschriftung der Y-Achse mit set_ylabel<y_achsenname>)
Als Ergebnis erhalten wir nun ein ausagekräftiges, beschriftetes Balkendiagramm.
Step25: Wir können auch nach Wochentagen auswerten. Dazu verwenden wir das weekday-Attribut auf dem DateTime-Attribut dt. Wie üblich, lassen wir hier die Werte über value_counts zählen, lassen die Werte aber nicht der Größe nach sortieren.
Step26: Das Ergebnis in commits_je_wochentag lassen wir als ein Balkendiagramm mittels plot.bar() ausgeben.
Step27: Commit-Verlauf
Nachfolgend wollen wir den Verlauf aller Commits über die letzten Jahre aufzeichnen lassen. Dazu setzen wir die timestamp Spalte als Index mittels set_index(<spaltenname>). Zudem selektieren wir lediglich die author-Spalte mittels [<spaltenname>]. Dadurch arbeiten wir fortlaufend auf einer reinen Series statt eines DataFrame. Randnotiz
Step28: Über die resample(<zeiteinheit>)-Funktion des DataFrames können wir nun Werte nach bestimmten Zeiteinheiten gruppieren wie z. B. nach Tage (D), Monate (M), Quartale (Q) oder Jahre (A). Wir verwenden hier ein resample("D") für tageweises zählen. Zudem geben wir noch an, wie die Einzelwerte pro Zeiteinheit zusammengeführt werden sollen. Hierzu wählen wir die count()-Funktion, um die Anzahl der Commits für jeden einzelnen Tag zu zählen.
Step29: Um den Commit-Verlauf über die Jahre hinweg aufzuzeigen, bilden wir die kumulative Summe über alle Tageseinträge mittels cumsum(). Damit werden alle Werte nacheinander aufsummiert.
Step30: Das Ergebnis plotten wir nun als Liniendiagramm und erhalten somit die Anzahl der Commits über die Jahre hinweg aufgezeichnet. | Python Code:
"Hello World"
Explanation: Dieses Notebook ist ein Skript (Drehbuch) zur Vorstellung grundlegender Funktionen von Jupyter, Python, Pandas und matplotlib, um ein Gefühl für die Arbeit mit den Biblotheken zu bekommen. Daher ist das gewählte Beispiel so gewählt, dass wir typische Aufgaben während einer Datenanalyse bearbeiten. Inhaltlich ist diese Analyse allerdings nicht repräsentativ, da sie lediglich einfach Statistiken über ein Git-Repository darstellt.
Jupyter
Zuallerst sehen wir uns Jupyter genauer an. Das hier ist Jupyter, die interaktive Notebook-Umgebung zum Programmieren. Wir sehen hier eine Zelle, in der wir Python-Code eingeben können. Geben wir einfach einmal einen String namens "Hello World" ein. Mit der Tastenkombination Strg + Enter.
End of explanation
"Hello World"
text = "Hello World!"
text[0]
text[-1]
text[2:5]
text[:-1]
Explanation: Das Ergebnis ist sofort unter der Zelle sichtbar. Legen wir nun eine weitere Zelle an. Dies funktioniert mit dem Drücken der Taste ESC und einem darauffolgendem Buchstaben b. Alternativ können wir am Ende eines Notebooks eine Zelle mit Shift + Enter ausführen und gleich eine neue Zelle erstellen.
Hier sehen wir gleich eine wichtige Eigenheit von Jupyter: Die Unterscheidung zwischen Befehlsmodus (erreichbar über Taste Esc) und dem Eingabemodus (erreichbar über die Taste Enter). Im Befehlsmodus ist die Umrahmung der aktuellen Zelle blau. Im Eingabemodus wird die Umrahmung grün. Gehen wir in den Befehlsmodus und drücken m. Dies ändert den Zelltyp zu einer Markdown-Zelle. Markdown ist eine einfache Markup-Sprache, mit der Text geschrieben und formatiert werden kann. Damit lassen sich unsere durchgeführten Schritte direkt mit dokumentieren.
Python
Sehen wir uns ein paar grundlegende Python-Programmierkonstrukte an, die wir später in der Arbeit mit Pandas benötigen.
End of explanation
text.upper
Explanation: Die weitere Funktionalität einer Bibliothek können wir erkunden, indem wir die Methoden und Attribute einer Klasse oder eines Objekts ansehen. Dazu schreiben wir in unserem String-Beispiel text. und nutzen die integrierte Autovervollständigung von Jupyter mittels der Tabulatortaste Tab, um zu sehen, welche Methoden uns aktuell verwendetes Objekt bietet. Gehen wir dann mit der Pfeiltaste unten oder drücken z. B. die ersten Buchstaben von upper, drücken Enter und schließend Shift+ Tab, dann erscheint die Signatur des entsprechenden Funktionalität und der Ausschnitt der Hilfedokumentation. Bei zweimaligem Drücken von Shift + Tab erscheint die Hilfe vollständig. Mit dem Aufruf von upper() auf unsere text-Variable können wir unseren Text in Großbuchstaben schreiben lassen.
End of explanation
text.split(maxsplit=2, sep=" ")
Explanation: Die interaktive Quellcode-Dokumentation hilft uns auch herauszufinden, welche Argumente wir in einer Methode zusätzlich zu normale Übergabeparametern hinzufügen können.m
End of explanation
import pandas as pd
Explanation: Git-Historienanalyse
In diesem Notebook wollen wir uns die Entwicklungsgeschichte des Spring Framework Beispielprojekts "Spring PetClinic" anhand der Historie des dazugehörigen Git-Repositories ein wenig genauer ansehen.
Das GitHub-Repository https://github.com/spring-projects/spring-petclinic wurde dafür über den Befehl
https://github.com/spring-projects/spring-petclinic.git
auf die lokale Festplatte geklont.
Die für diese Auswertung relevanten Teile der Historie wurde mittels
git log --pretty="%ad,%aN" --no-merges > timestamp_author.csv
exportiert. Dieser Befehl liefert pro Commit des Git-Repositories den Zeitstempel des Commits (%ad) sowie den Namen des Autors (%aN). Die jeweiligen Werte sind kommasepariert. Wir geben zusätzlich mit an, dass wir reine Merge-Commits nicht erhalten wollen (über --no-merges). Das Ergebnis der Ausgabe speichern wir in die Datei timestamp_author.csv.
Pandas
Nun können wir diese Daten mit Hilfe des Datenanalyse-Frameworks Pandas einlesen. Wir importieren dazu pandas mit der gängigen Abkürzung pd mittels der import ... as ... Syntax von Pyhton.
End of explanation
pd?
Explanation: Ob das Importieren des Moduls auch wirklich funktioniert hat, können wir prüfen, in dem wir mit dem pd-Modul arbeiten. Dazu hängen wir an die pd-Variable den ? Operator an und führen die Zelle aus. Es erscheint die Dokumentation des Moduls im unteren Bereich des Notebooks. Diesen Bereich können wir durchlesen und mit der Taste ESC auch wieder verschwinden lassen.
End of explanation
git_log = pd.read_csv(
"datasets/git_timestamp_author.csv",
# oder:
#"https://pastebin.com/raw/C40C9S82",
names=['timestamp', 'author'])
git_log.head()
Explanation: Danach lesen wir die oben beschriebene CSV-Datei timestamp_author.csv ein und speichern das Ergebnis in der Variable git_log. Neben dem Dateinamen müssen wir zusätzlich über das Argument names noch eine Liste an Namen für die Kopfzeile mitgeben, da unsere Git-Log-Datei keine entsprechende Kopfzeile besitzt.
Wir haben nun die Daten in einen DataFrame (so etwas ähnliches wie ein programmierbares Excel-Arbeitsblatt) geladen, der in unserem Fall aus zwei Series (in etwa Spalten) besteht. Auf den DataFrame können wir nun Operationen ausführen. Z. B. können wir uns mittels head() die fünf ersten Einträge anzeigen lassen.
End of explanation
git_log.info()
Explanation: Als nächstes rufen wir info() auf den DataFrame auf, um einige Eckdaten über die eingelesenen Daten zu erhalten.
End of explanation
git_log.author.head()
Explanation: Den Zugriff auf die einzelnen Series können wir mittels der Schreibeweise [<spaltenname>] oder (in den meisten Fällen) per direkter Nutzung des Namens der Series erreichen.
End of explanation
top10 = git_log.author.value_counts().head(10)
top10
Explanation: Auch auf einer Series selbst können wir verschiedene Operationen ausführen. Z. B. können wir mit value_counts() die in einer Series enthaltenen Werte zählen und nach gleichzeitig nach ihrer Häufigkeit sortieren lassen. Das Ergebnis ist wieder eine Series, diesmal aber mit den zusammengezählten und sortieren Werten. Auf diese Series können wir zusätzlich ein head(10) aufrufen. So erhalten wir eine schnelle Möglichkeit, die TOP-10-Werte einer Series anzeigen zu lassen. Das Ergebnis können wir dann in einer Variable top10 festhalten und ausgeben lassen, in dem wir die Variable in die nächste Zellenzeile schreiben.
End of explanation
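As a small aside (not part of the original notebook), value_counts can also return relative frequencies, which shows each author's share of all commits; this assumes the git_log DataFrame defined above.
# Sketch: share of commits per author (relative instead of absolute counts)
top10_share = git_log.author.value_counts(normalize=True).head(10)
top10_share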
%matplotlib inline
top10.plot()
Explanation: Plotten/Visualisierung
Als nächstes wollen wir das Ergebnis visualisieren bzw. plotten. Um die das Plot-Ergebnis der intern verwendeten Plotting-Bibliothek matplotlib direkt im Notebook anzuzeigen, müssen wir Jupyter dies mit dem Magic-Kommando
%matplotlib inline
vor dem Aufruf der plot() Methode mitteilen.
Standardmäßig wird beim Aufruf von plot() auf einen DataFrame oder einer Series ein Liniendiagramm erstellt.
End of explanation
top10.plot.bar()
Explanation: Das macht hier wenig Sinn, weshalb wir mittels einer Untermethode von plot namens bar() ein Balkendiagramm erzeugen lassen.
End of explanation
top10.plot.pie()
Explanation: Für diese Daten bietet sich auch eine Visualisierung als Tortendiagramm an. Hierfür rufen wir statt bar() die Methode pie() auf.
End of explanation
top10.plot.pie(
figsize=[7,7],
title="Top 10 Autoren",
label="")
Explanation: Das Diagramm sieht hier jedoch nicht sehr schön aus.
Mit den optionalen Styling-Parametern können wir erreichen, dass wir eine schönere Grafik angezeigt bekommen. Wir verwenden dazu
* figsize=[7,7] als Größenangabe
* title="Top 10 Autoren" als Titel
* labels=None, um die überflüssige Beschriftung nicht anzuzeigen.
End of explanation
git_log.timestamp.head()
git_log.timestamp.str.split().str[5].head()
zeitzone = git_log.timestamp.str.split().str[5]
zeitzone.head()
git_log['timezone'] = zeitzone
git_log.head()
Explanation: Extraktion von Informationen
Nun widmen wir uns den Zeitstempelangaben. Wir wollen anhand dieser ungefähr herausfinden, wo die meisten Entwickler wohnen. Dazu extrahieren wir die Informationen über den Zeitstempel in timestamp in eine neue Spalte/Series mittels der split()-Funktion, welche uns von den str / String-Funktionen einer Series bereitgestellt wird. Die split()-Funktion benötigt als erstes, optionales Argument das Trennzeichen (standardmäßig ist dies das Leerzeichen) zur Trennung des Strings. Zusätzlich geben wir mit expand=True mit, dass wir als Rückgabewert einen DataFrame haben möchten. Damit können wir mit Hilfe der Selektion einer beliebigen Spalte den für uns interessanten Teil des DataFrames auswählen (in unserem Fall [5]).
End of explanation
git_log.timezone.value_counts().head(10).plot.pie(
figsize=[7,7],
title="Top 10 Zeitzonen",
label="")
Explanation: Analog zu den TOP 10 Autoren können wir nun die TOP 10 Zeitzonen ausgeben lassen.
End of explanation
type(git_log.timestamp[0])
Explanation: Arbeiten mit Datumsangaben
Bevor wir in die Welt der Zeitreihenverarbeitung einsteigen können, müssen wir unsere Spalte mit den Datumsangabe zuerst in den passenden Datentyp umwandeln. Zurzeit ist unsere Spalte timestamp noch ein String, also von textueller Natur. Wir können dies sehen, in dem wir uns mittels der Helferfunktion type(<object>) den ersten Eintrag der timestamp-Spalte anzeigen lassen:
End of explanation
zeitstempel = git_log.timestamp.str[:-6]
zeitstempel.head()
Explanation: Pandas konvertiert standardmäßig automatisch die Zeitzonen, damit wir uns um nichts mehr kümmern müssen. In unserem Fall ist das aber schlecht, da wir die jeweilige lokale Zeit eines Commits erhalten wollen. Daher schneiden wir kurzerhand die Angabe über die Zeitzone ab. Mittels der str-Funktion und dem passenden Selektor [:-6] können wir das einfach bewerkstelligen.
End of explanation
git_log['timestamp_local'] = pd.to_datetime(git_log.timestamp.str[:-6])
git_log.head()
Explanation: Beim Umwandeln von Datentypen hilft uns Pandas natürlich ebenfalls. Die Funktion pd.to_datetime nimmt als ersten Parameter eine Series mit Datumsangaben entgegen und wandelt diese um. Als Rückgabewert erhalten wir entsprechend eine Series vom Datentype Timestamp. Die Umwandlung funktioniert für die meisten textuellen Datumsangaben auch meistens automagisch, da Pandas mit unterschiedlichesten Datumsformaten umgehen kann.
End of explanation
type(git_log.timestamp_local[0])
Explanation: Ob die Umwandlung erfolgreich war, können wir mit einem nochmaligen Aufruf von type() auf den ersten Wert unserer umgewandelten Spalte timestamp_local überprüfen.
End of explanation
(git_log.timestamp_local > "01.02.2018").head()
Explanation: Nun haben wir einen neuen Datentyp Timestamp in der timestamp_local erhalten, der uns Berechnungen mit Zeitangaben erheblich vereinfacht. Z. B. können wir nun mittels eines einfachen >=-Vergleichs herausfinden, welche Commits nach dem 01.02.2018 stattgefunden haben.
End of explanation
git_log.timestamp_local.dt.hour.head()
Explanation: Wir können nun auch auf einzelne Bestandteile der Datumsangaben zugreifen. Dazu verwenden wir das dt-Objekt ("datetime") und können auf dessen Eigenschaften wie etwa hour zurückgreifen.
End of explanation
commits_je_stunde = git_log.timestamp_local.dt.hour.value_counts(sort=False)
commits_je_stunde.head()
Explanation: Zusammen mit der bereits oben vorgestellten value_counts()-Methode können wir nun wieder Werte zählen lassen. Wichtig ist hier jedoch, dass wir zusätzlich den Parameter sort=False setzen, um die sortierung nach Mengenangaben zu vermeiden.
End of explanation
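A small caveat, added here and not from the original notebook: value_counts(sort=False) does not guarantee that the hours appear in ascending order. Calling sort_index() on the counted values makes the hour order explicit before plotting.
# Sketch: explicitly order the counted commits by hour of day
commits_je_stunde_sortiert = git_log.timestamp_local.dt.hour.value_counts().sort_index()
commits_je_stunde_sortiert.head()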
commits_je_stunde.plot.bar()
Explanation: Das Ergebnis können wir entsprechend mittels eines Balkendiagramms ausgeben und erhalten so eine Übersicht, zu welcher Tageszeit Quellcode committet wird.
End of explanation
ax = commits_je_stunde.plot.bar()
ax.set_title("Commits pro Stunde")
ax.set_xlabel("Tagesstunde")
ax.set_ylabel("Commits")
Explanation: Wir beschriften nun zusätzlich die Grafik. Dazu speichern wir uns das Rückgabeobjekt der bar()-Funktion in der Variable ax. Hierbei handelt es sich um ein Axes-Objekt der darunterliegenden Plotting-Bibliothek matplotlib, durch das wir zusätzliche Eigenschaften des Plots beliebig anpassen können. Wir setzen hier
den Titel über set_title(<titelname>)
die Beschriftung der X-Achse mit set_xlabel(<x_achsenname>) und
die Beschriftung der Y-Achse mit set_ylabel<y_achsenname>)
Als Ergebnis erhalten wir nun ein ausagekräftiges, beschriftetes Balkendiagramm.
End of explanation
commits_je_wochentag = git_log.timestamp_local.dt.weekday.value_counts(sort=False)
commits_je_wochentag
Explanation: Wir können auch nach Wochentagen auswerten. Dazu verwenden wir das weekday-Attribut auf dem DateTime-Attribut dt. Wie üblich, lassen wir hier die Werte über value_counts zählen, lassen die Werte aber nicht der Größe nach sortieren.
End of explanation
commits_je_wochentag.plot.bar()
Explanation: Das Ergebnis in commits_je_wochentag lassen wir als ein Balkendiagramm mittels plot.bar() ausgeben.
End of explanation
git_timed = git_log.set_index('timestamp_local')['author']
git_timed.head()
Explanation: Commit-Verlauf
Nachfolgend wollen wir den Verlauf aller Commits über die letzten Jahre aufzeichnen lassen. Dazu setzen wir die timestamp Spalte als Index mittels set_index(<spaltenname>). Zudem selektieren wir lediglich die author-Spalte mittels [<spaltenname>]. Dadurch arbeiten wir fortlaufend auf einer reinen Series statt eines DataFrame. Randnotiz: Die Verarbeitung mittels Series folgt fast analog wie bei einem DataFrame. Eine Series wird jedoch nicht so schön in einer Tabelle formatiert angezeigt, weshalb ich persönlich die Bearbeitung mittels DataFrame bevorzuge.
End of explanation
commits_per_day = git_timed.resample("D").count()
commits_per_day.head()
Explanation: Über die resample(<zeiteinheit>)-Funktion des DataFrames können wir nun Werte nach bestimmten Zeiteinheiten gruppieren wie z. B. nach Tage (D), Monate (M), Quartale (Q) oder Jahre (A). Wir verwenden hier ein resample("D") für tageweises zählen. Zudem geben wir noch an, wie die Einzelwerte pro Zeiteinheit zusammengeführt werden sollen. Hierzu wählen wir die count()-Funktion, um die Anzahl der Commits für jeden einzelnen Tag zu zählen.
End of explanation
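The explanation above also mentions coarser time units; as a sketch (not in the original notebook), the same counting can be done per month with the "M" rule, reusing the git_timed Series defined earlier.
# Sketch: the same aggregation with a coarser granularity (per month instead of per day)
commits_per_month = git_timed.resample("M").count()
commits_per_month.head()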
commits_pro_tag_kumulativ = commits_per_day.cumsum()
commits_pro_tag_kumulativ.head()
Explanation: Um den Commit-Verlauf über die Jahre hinweg aufzuzeigen, bilden wir die kumulative Summe über alle Tageseinträge mittels cumsum(). Damit werden alle Werte nacheinander aufsummiert.
End of explanation
commits_pro_tag_kumulativ.plot()
Explanation: Das Ergebnis plotten wir nun als Liniendiagramm und erhalten somit die Anzahl der Commits über die Jahre hinweg aufgezeichnet.
End of explanation |
12,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> The Python package Networkx. The popularity of the nodes of a network</center>
Networkx is a Python package for generating and analyzing the structure and properties of a network.
A network is a graph $G=(V, E)$ consisting of a finite set of nodes, $V={0,1,2,\ldots, n-1}$, and the set $E$ of edges formed by pairs of nodes.
Graphs are undirected or directed. In undirected graphs, two nodes $i,j$ connected by an edge do not imply any order of the connection, from $i$ to $j$ or from $j$ to $i$. In directed graphs the pair $(i,j)$ is ordered: a connection from $i$ to $j$ is assumed to exist, but if the pair $(j,i)$ does not belong to $E$, then there is no reverse connection from $j$ to $i$.
An undirected/directed graph has an associated adjacency (connectivity) matrix, $A=(a_{ij})$, $i, j=\overline{0,n-1}$, where $a_{ij}=1$ if there is an edge between nodes $i$ and $j$ and $a_{ij}=0$ otherwise. If the graph is undirected, the adjacency matrix is symmetric. Before defining, drawing and analyzing a network, we first show how to compute the eigenvalues and eigenvectors of a matrix (a numpy.array of shape $(n, n)$ in Python).
Step1: To compute the roots of the characteristic polynomial of the matrix $A$ and the corresponding eigenvectors, we call the function np.linalg.eig(A), which returns the 1D array Lamb containing the roots of the characteristic polynomial and the 2D array V, whose column j holds the coordinates of an eigenvector corresponding to the eigenvalue Lamb[j].
Step2: The given matrix is a binary matrix, so it can be interpreted as the adjacency matrix of a graph. Being a nonnegative matrix associated with a connected graph, the Perron-Frobenius theorem can be applied to it. Let us determine the dominant eigenvalue, that is, the real, strictly positive eigenvalue $\lambda_d$ with the property that $|\lambda_j|\leq \lambda_d$, $\forall\, j=\overline{0,4}$, and the corresponding eigenvector. In principle we would first compute the array of absolute values of the elements of Lamb and then its maximum element.
Step3: More compactly, we can write:
Step4: So the dominant eigenvalue is:
Step5: and its position in the array Lamb is returned by np.argmax(np.fabs(Lamb)).
Step6: We notice that this vector has all coordinates negative, so -x is the eigenvector with all positive coordinates, in accordance with the Perron-Frobenius theorem.
The normalized vector $x$ is $r=x/\sum_{i=0}^{n-1}x[i]$ and represents the rating vector, whose coordinates are the popularity/importance coefficients of the network's nodes. That is, $r[j]$ is the popularity coefficient of node $j$ in the network.
Step7: Let us now rank the nodes, sorting the elements of the vector $r$ in decreasing order and keeping the indices that give the original position in r of the sorted elements.
Step8: So the most popular node of the network is node 4, followed by 0, 3, 2, 1.
Let us now apply this procedure to undirected and then to directed networks, using the networkx package.
Defining a graph in networkx
We import the networkx module as follows.
Step9: The following line defines an empty undirected graph G (G is an object of the Graph class).
Step10: 1. Building the graph from the list of nodes and the list of edges
We define the list of nodes, V, and the list of edges, E, and then call on the graph G the methods add_nodes_from(V) and add_edges_from(E), respectively. Individual nodes/edges can be added by calling add_node()/add_edge().
Step11: Once the defining elements have been set, we generate/draw the graph using the function nx.draw, which relies on functions from the matplotlib graphics library.
Step12: The relative positioning of the nodes is done according to the so-called spring layout algorithm. There are several ways of placing the nodes in space, but this one is the most convenient for our presentation.
We extract the adjacency matrix of the graph.
Step13: To work only with numpy.array, we convert A.todense() (the eigenvalues and eigenvectors of A.todense() can also be determined, but it works slightly differently from numpy.array).
Step14: Let us determine the popularity coefficients of the nodes of this network. Since the associated graph is undirected, the adjacency matrix is symmetric and therefore all roots of the characteristic polynomial are certainly real (Lecture 12).
Step15: Let us determine the rating vector associated with the nodes of the network.
Step16: It follows that the most popular node is node 4.
Its popularity coefficient is:
Step17: Each node of a network has an associated degree, defined as the number of nodes to which it is connected by an edge (a path of length 1). The call grad=nx.degree(node) returns the degree of one node, while grad=nx.degree(G) returns the degrees of all the nodes of the network. In the second case, grad is a dictionary, i.e. a Python data structure consisting of an ordered collection of key:value pairs enclosed in braces.
Step18: We note that node 4, which has the largest popularity coefficient, also has the largest degree (it is the "most connected" node of the network).
2. Building the undirected graph from its adjacency matrix.
If the adjacency matrix $A$ of a graph is given, then the graph is created by the function G = nx.from_numpy_matrix(A).
Step19: The popularity of the nodes of a directed network
A directed network (graph) is built in the same way as an undirected one, except that the object is declared of type DiGraph instead of Graph.
Step20: Let us build a directed network from its adjacency matrix and determine the popularity of its nodes.
Step21: According to the theory from Lecture 11, the rating vector associated with a directed network is the eigenvector of the dominant eigenvalue of the transposed connectivity matrix.
Step22: Since the adjacency matrix is not symmetric, its characteristic polynomial may also have complex-conjugate roots. The real roots are displayed in complex form as well, $a=a+ 0.j$ (in Python the complex number $i=\sqrt{-1}$ is written $j$, as in electronics).
We now determine the dominant eigenvalue, i.e. the real, positive root that dominates the absolute values of the others.
Step23: Project: determining the popularity of the players of a football team at the 2014 World Cup in Brazil
Step24: Having this dictionary, once we have computed the ranking vector we print the information as follows. | Python Code:
import numpy as np
A=np.array([0, 1, 0, 1, 1,
1, 0, 0, 0, 1,
0, 0, 0, 1, 1,
1, 0, 1, 0, 1,
1, 1, 1, 1, 0], float).reshape((5,5))
print A
Explanation: <center> Pachetul Python Networkx. Popularitatea nodurilor unei retele</center>
Networkx este un pachet Python destinat generarii si analizei structurii si proprietatilor unei retele.
O retea este un graf $G=(V, E)$, ce consta dintr-o multime finita de noduri,
$V={0,1,2,\ldots, n-1}$ si multimea, $E$, a arcelor formata din perechi de noduri.
Grafurile sunt neorientate sau orientate. In cazul grafurilor neorientate nodurile $i,j$
conectate printr-un arc nu implica o ordine a conexiunii, de la $i$ spre $j$ sau de la $j$ spre $i$. In grafurile orientate perechea $(i,j)$ este ordonata. Se presupune ca
exista conexiune de la $i$ spre $j$, dar daca perechea $(j,i)$ nu apartine lui $E$, atunci nu exista conexiune inversa, de la $j$ la $i$.
Unui graf neorientat/orientat i se asociaza matricea de adiacenta sau conectivitate, $A=(a_{ij})$, $i, j=\overline{0,n-1}$:
$$a_{ij}=\left{\begin{array}{ll} 1&\mbox{daca exista arc intre nodul i si j}\
0& \mbox{in caz contrar}\end{array}\right.$$
Daca graful este neorientat atunci matricea de adiacenta este simetrica.
Inainte de a trece la definirea, vizualizarea si analiza unei retele, precizam cum se calculeaza valorile
si vectorii proprii ai unei matrici (numpy.array de shape $(n, n)$ in Python).
End of explanation
Lamb, V=np.linalg.eig(A)
print 'Radacinile polinomului caracteristic sunt\n', Lamb
print'\n iar vectorii proprii corespunzatori: \n', V.round(2)
print 'Vectorul propriu corespunzator valorii', Lamb[3], 'este:\n', V[:,3].round(2)
Explanation: Pentru a calcula radacinile polinomului caracteristic al matricii $A$ si vectorii proprii corespunzatori, apelam functia np.linalg.eig(A) care returneaza array-ul 1D, Lamb, ce contine radacinile polinomului caracteristic si array-ul 2D, V,
care are pe o coloana j coordonatele unui vector propriu corespunzator valorii proprii Lamb[j].
End of explanation
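As a cross-check, added here and not part of the original course notebook, the dominant eigenvector guaranteed by the Perron-Frobenius theorem can also be approximated by power iteration, without computing all the eigenvalues.
# Sketch: power iteration approximates the dominant eigenvector of the nonnegative matrix A
def power_iteration(M, nr_iter=100):
    v = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(nr_iter):
        w = M.dot(v)
        v = w / np.linalg.norm(w)
    return v
v_dom = power_iteration(A)
print(v_dom / np.sum(v_dom))  # normalized the same way as the rating vector r computed below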
print np.fabs(Lamb)
Explanation: Matricea data este o matrice binara, deci poate fi interpretata ca matricea de adiacenta a unui graf.
Fiind o matrice nenegativa asociata unui graf conex, i se poate aplica Teorema Perron-Frobenius.
Sa determinam valoarea proprie dominanta, adica valoarea proprie reala, strict pozitiva $\lambda_d$
cu proprietatea ca $|\lambda_j|\leq \lambda_d$, $\forall\: j=\overline{0,4}$ si vectorul propriu corespunzator:
Teoretic ar trebui sa calculam mai intai array-ul valorilor absolute ale elementelor din Lamb si
apoi elementul maxim:
End of explanation
print np.amax(np.fabs(Lamb))# functia np.amax(array) returneaza elementul maxim dintr-un array 1D
Explanation: Concentrat putem scrie:
End of explanation
lambD=np.amax(np.fabs(Lamb))# valoarea proprie dominanta calculata
Explanation: Deci valoarea proprie dominanta este:
End of explanation
j=np.argmax(np.fabs(Lamb))
print 'Valoarea proprie dominanta este plasata in pozitia:', j
#vectorul propriu corespunzator:
x=V[:,j]
print 'Valoarea proprie dominanta este:', lambD, \
'\n\n iar vectorul propriu dominant este\n', V[:,j].round(2)
Explanation: iar pozitia ei in array-ul Lamb este returnata de np.argmax(np.fabs(Lamb)):
End of explanation
r=x/np.sum(x)
print 'Coeficientii de popularitate a nodurilor retelei de matricede conectivitate'+\
'$A$ sunt\n', r.round(2)# semnul + intre doua stringuri inseamna concatenarea lor
# semnul \ reprezinta continuare pe linia urmatoare
Explanation: Observam ca acest vector are toate coordonatele negative, deci -x este vectorul propriu cu
toate coordonatele pozitive, conform teoremei
Perron-Frobenius.
Vectorul $x$ normalizat este $r=x/\sum_{i=0}^{n-1}x[i]$ si reprezinta vectorul rating, avand drept coordonate coeficientii de popularitate/importanta a nodurilor retelei. Adica $r[j]$ este coeficientul de popularitate al nodului $j$ din retea:
End of explanation
ranking=np.argsort(r)[::-1] #Functia np.argsort, sorteaza crescator array-ul 1D, rating,
# si returneaza indicii din r a elementelor sortate
# Pentru a gasi ordinea indicilor pentru sortarea descrescatoare
# se inverseaza elementele array-ului returnat de
# np.argsort(rating) folosind notatia tipica pt reversing, [::-1]
print ranking
Explanation: Sa realizam acum ranking-ul nodurilor, sortand elementele vectorului $r$, descrescator si retinand
indicii ce dau pozitia initiala in r a elementelor sortate.
End of explanation
import networkx as nx
Explanation: Deci nodul retelei cu cea mai mare popularitate este nodul 4, urmat de 0, 3,2,1.
Sa aplicam acum aceasta procedura pentru retele neorientate si apoi retele orientate, folosind
pachetul networkx:
Definirea unui graf in networkx
Importam modulul networkx astfel:
End of explanation
G=nx.Graph()
Explanation: Linia urmatoare defineste un graf vid, G, neorientat (G este un obiect din clasa Graph):
End of explanation
n=9
V=[i for i in range(n)]
G.add_nodes_from(V)
E=[(0,1), (0,2), (1,3), (1,4), (1,7), (2,5), (2,8), (3, 4), (3,5),(4,6), (4,7), (4,8), (5,7)]
G.add_edges_from(E)
G.add_edge(6,8)
Explanation: 1. Constructia grafului pornind de la lista nodurilor si lista arcelor
Se defineste lista nodurilor, V, si lista arcelor, E, si apoi se apeleaza pentru graful G metoda
add_nodes_from(V), respectiv add_edges_from(E).
Se pot adauga noduri/arce individuale
apeland metoda add_node()/add_edge():
End of explanation
%matplotlib inline
# comanda "%matplotlib inline" se da pentru a insera figurile generate, inline, in notebook
import matplotlib.pyplot as plt # importam biblioteca grafica
nx.draw(G, node_color='c',edge_color='b', with_labels=True)# in mod implicit graful este trasat
#fara a afisa etichetele nodurilor
# with_labels=True conduce la afisarea lor
Explanation: Dupa ce elementele definitorii au fost setate, urmeaza generarea/trasarea grafului, folosind functia nx.draw care se bazeaza pe functii din biblioteca grafica matplotlib.
End of explanation
A=nx.adjacency_matrix(G)# A este un obiect al unei clase speciale in networkx
#A.todense() defineste matricea de adiacenta ca un obiect al unei clase din numpy,
#dar NU clasa `numpy.array`
print A.todense()
print type(A.todense())
Explanation: Pozitionarea relativa a nodurilor este realizata conform algoritmului numit spring layout algorithm.
Exista mai multe modalitati de amplasare a nodurilor in spatiu, dar aceasta este cea mai convenabila pentru prezentarea noastra.
Extragem matricea de adiacenta a grafului:
End of explanation
A=np.array(A.todense())# interpretati aceasta linie ca un cast
print type(A)
Explanation: Pentru a lucra doar cu numpy.array, convertim A.todense() (se pot determina valorile si vectorii proprii
ai lui A.todense(), dar e putin diferit de modul de lucru cu numpy.array):
End of explanation
Lamb,V=np.linalg.eig(A)
lamb=np.amax(Lamb)# radacinile fiind reale, valoarea dominata este maximumul valorilor proprii
j=np.argmax(Lamb)#pozitia in Lamb a valorii maxime
print j
x=V[:,j]
print 'Valoarea proprie dominanta este:', lamb
print 'Vectorul propriu corespunzator:\n', x.round(3)
Explanation: Sa determinam coeficientul de popularitate a nodurilor acestei retele. Cum graful asociat este neorientat
matricea de adiacenta este simetrica si deci are sigur toate radacinile polinomului caracteristic, reale (Cursul 12).
End of explanation
s=np.sum(x)
rating=x/s # vectorul propriu dominant, normalizat
print 'Vectorul rating al nodurilor\n', rating.round(3)
ranking=np.argsort(rating)[::-1]
print ranking
Explanation: Sa determinam vectorul rating asociat nodurilor retelei:
End of explanation
print rating [ranking[0]]
Explanation: Rezulta astfel ca nodul cu cea mai mare popularitate este nodul 4.
Coeficientul de popularitate este:
End of explanation
dictionar={'grupa1':35, 'grupa2':40, 'grupa3': 43, 'grupa4':45}
print dictionar
print dictionar.keys()
print 'In grupa 2 sunt', dictionar['grupa2'], 'studenti'
grad=nx.degree(G)
print 'Dictionarul gradelor nodurilor:', grad
print 'Gradul nodului 4, ce are ceam mai mare popularitate este:', grad[4]
Explanation: Fiecarui nod dintr-o retea i se asociaza gradul, ca fiind numarul de noduri cu care este conectat
printr-un arc (drum de lungime 1).
Functia grad=nx.degree(nod) returneaza gradul unui nod, iar grad=nx.degree(G), gradele tuturor
nodurilor retelei. In acest al doilea caz, grad este un dictionar, adica o structura de date in Python
ce consta dintr-o multime ordonata de perechi cheie:valoare, inserate intre acolade:
End of explanation
Ad=np.array([[0,1,1,1,0,0,0,1],
[1,0,1,0,1,1,1,0],
[1,1,0,0,0,0,1,1],
[1,0,0,0,1,1,1,1],
[0,1,0,1,0,1,1,0],
[0,1,0,1,1,0,1,0],
[0,1,1,1,1,1,0,1],
[1,0,1,1,0,0,1,0]], float)
Gr=nx.from_numpy_matrix(Ad)
print 'Nodurile grafului sunt:\n', Gr.nodes()
print 'Lista arcelor:\n', Gr.edges()
nx.draw(Gr, node_color='g', with_labels=True, alpha=0.5)
# alpha este parametrul de transparenta a culorii nodurilor
Explanation: Remarcam ca nodul 4 care are cel mai mare coeficient de popularitate are si cel mai mare
grad (este "cel mai conectat" nod din retea).
2. Constructia grafului neorientat pornind de la matricea sa de adiacenta.
Daca se da matricea de adiacenta, $A$, a unui graf atunci graful este creat de functia:
G= nx.from_numpy_matrix(A):
End of explanation
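networkx also has this computation built in. As a sketch that is not part of the original notebook, nx.eigenvector_centrality applied to the undirected graph Gr gives, up to normalization, the same ordering of the nodes as the hand-made rating vector.
# Sketch: compare the hand-made rating with networkx's built-in eigenvector centrality
centrality = nx.eigenvector_centrality(Gr, max_iter=1000)
print(sorted(centrality, key=centrality.get, reverse=True))  # nodes ordered by centrality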
H=nx.DiGraph()
n=5
Noduri=[k for k in range(n)]
Arce=[(0,3), (0,4), (1,2),(1,3), (1,4), (2,3), (4,1), (4,3)]
H.add_nodes_from(Noduri)
H.add_edges_from(Arce)
nx.draw(H, node_color='r', with_labels=True, alpha=0.5)
Explanation: Popularitatea nodurilor unei retele orientate
Constructia unei retele (graf) orientat se realizeaza la fel ca in cazul celor neorientate,
doar ca obiectul nu mai este declarat de tip Graph, ci DiGraph.
End of explanation
plt.rcParams['figure.figsize'] = 8, 8 #setam dimensiunile figurii
W=np.array([[0,1,1,1,0,0,0,0],[0,0,1,0,1,1,1,0],[0,0,0,0,0,0,0,1],[0,0,0,0,1,1,0,0],
[0,0,0,0,0,0,1,0], [0,0,0,0,1,0,1,0],[0,1,1,1,0,0,0,1], [1,0,0,1,0,0,0,0]], float)
GW=nx.from_numpy_matrix(W, create_using=nx.DiGraph())
print 'Nodurile grafului sunt:\n', GW.nodes()
print 'Lista arcelor:\n', GW.edges()
nx.draw(GW, node_color='g', with_labels=True, alpha=0.5)
Explanation: Sa construim o retea orientata din matricea sa adiacenta si sa determinam popularitatea nodurilor:
End of explanation
Lamb, V=np.linalg.eig(W.transpose()) # aflam radacinile polinomului caracteristic a matricii W^T
print Lamb.round(3)
Explanation: Conform teoriei din cursul 11, vectorul rating asociat unei retele orientate este vectorul propriu
al valorii proprii dominante a matricii de conectivitate, transpusa:
End of explanation
absLamb=np.abs(Lamb)
j=np.argmax(absLamb)
if not np.isreal(Lamb[j]):  # daca valoarea absoluta maxima nu este reala
    raise ValueError("matricea A nu indeplineste conditiile T Perron-Frobenius sau alta cauza")
else:
    lamD=np.real(Lamb[j])  # afiseaza nr real fara 0*j
    print 'valoarea proprie dominanta este:', lamD
    print 'valorile absolute ale radacinilor sunt:\n', absLamb.round(3)
x=V[:,j]
s=np.sum(x)
rating=x/s
print 'Vectorul rating:\n', np.real(rating.round(3))# fortam sa afiseze coordonatele fara 0.j
ranking=np.argsort(rating)[::-1]
print 'Nodurile in ordinea descrescatoare a popularitatii lor:\n', ranking
print 'Nodul cel mai important este:', ranking[0]
Explanation: Matricea de adiacenta nefiind simetrica, polinomul sau caracteristic poate avea si radacini complex conjugate. Radacinile reale sunt afisate
si ele in forma complexa, $a=a+ 0.j$ (in Python numarul complex $i=\sqrt{-1}$ este notat $j$, ca in electronica).
Determinam acum valoarea proprie dominanta, adica radacina reala, pozitiva, care domina valorile absolute ale celorlalte:
End of explanation
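A closely related popularity measure, added here as a sketch and only hinted at in the course (via Markov chains), is PageRank, which networkx provides out of the box for the directed graph GW; its ranking need not coincide exactly with the eigenvector ranking above.
# Sketch: PageRank as an alternative popularity measure for the directed graph GW
pr = nx.pagerank(GW, alpha=0.85)
print(sorted(pr, key=pr.get, reverse=True))  # nodes ordered by PageRank score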
Jucatori={ 0: 'Manuel NEUER',
1: 'Benedikt HOEWEDES',
2: 'Mats HUMMELS'}# etc
Explanation: Proiect: Determinarea popularitatii jucatorilor unei echipe de fotbal la Campionatul Mondial, Brazilia 2014
Sa se determine popularitatea jucatorilor unei echipe de fotbal intr-unul din meciurile jucate la campionatul Mondial de Fotbal, Brazilia 2014.
Reteaua asociata echipei implicata intr-un joc are ca noduri jucatorii (fara rezervele ce nu au intrat in jocul respectiv).
Exista arc orientat de la jucatorul i la jucatorul j, daca in cursul meciului numarul de pase de la i la j este nenul.
Notam cu $W$ matricea ponderare:
$$W_{ij}=\mbox{numarul de pase de la i la j}$$
Evident $W_{ij}=0$, daca jucatorul i nu a avut nicio pasa spre j.
Prin urmare matricea de conectivitate nu este o matrice binara.
Datele pentru acest proiect le descarcati de la FIFA.
La adresa URL http://www.fifa.com/worldcup/statistics/matches/passes.html
dati click pe un meci, de exemplu Germania-Argentina si se deschide pagina:
http://www.fifa.com/worldcup/matches/round=255959/match=300186501/index.html#games
De pe aceasta pagina descarcam fisierul Passing Distribution.pdf
Copiati intr-un fisier PaseNumeEchipa.txt matricea paselor din tabelul cel mai din stanga. Evident, nu includeti ca nod, jucatorii de rezerva, neinclusi in meciul respectiv.
De exemplu, jucatorul nr 17, Per MERTESACKER, din echipa Germaniei se vede ca n-a jucat in meciul cu Argentina.
Apoi o cititi astfel:
W=np.loadtxt('PaseNumeEchipa.txt', dtype=float)
Generati reteaua paselor setand in prealabil o figura de dimensiuni mai mari, ca sa fie vizualizate arcele cat mai bine. Aceasta setare se realizeaza inainte de a desena reteaua prin linia:
plt.rcParams['figure.figsize'] = 10, 10
Cu aceasta setare figura va fi de 10 pe 10. Puteti creste la 12 pe 12.
Creati apoi dictionarul jucatorilor. De exemplu in meciul Germania-Argentina, pe linia $i$ a matricii paselor jucatorilor germani, figureaza jucatorul 'Prenume Nume'. Astfel dictionarul Jucatori s-ar defini pentru Germania, astfel:
End of explanation
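A possible skeleton for the project, reusing the steps above, could look like the sketch below. It is only a sketch, not a solution: the file name comes from the project text, the Jucatori dictionary must be completed for the chosen match, and the resulting ranking depends on the match you analyze.
# Sketch for the project: load the pass matrix, build the directed graph and rank the players
W_pase = np.loadtxt('PaseNumeEchipa.txt', dtype=float)          # file described in the project text
G_pase = nx.from_numpy_matrix(W_pase, create_using=nx.DiGraph())  # can be drawn with nx.draw if desired
Lamb_p, V_p = np.linalg.eig(W_pase.transpose())
j_p = np.argmax(np.abs(Lamb_p))
rating_p = np.real(V_p[:, j_p])
rating_p = rating_p / np.sum(rating_p)    # normalization also fixes the sign
ranking_p = np.argsort(rating_p)[::-1]
# Jucatori must contain all players who actually played; .get avoids a KeyError while it is incomplete
print('Most popular player: ' + Jucatori.get(ranking_p[0], 'completati dictionarul Jucatori'))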
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Avand acest dictionar atunci cand am calculat vectorul ranking, printam informatia in felul urmator:
Cel mai popular jucator (cel care a primit cele mai multe pase in timpul meciului) este
jucatorul Jucatori[ranking[0]].
i=ranking[0]este codul numeric al jucatorului,
$i \in{0,1, \ldots, n-1}$, cel mai bun, iar Jucatori[i] este numele acestuia extras din dictionar.
Alegeti meciuri diferite si echipe diferite, nu analizati toti echipa Germaniei.
O analiza mai detaliata a performantelor jucatorilor o vom putea efectua in semstrul II, dupa ce studiem
Lanturi Markov la Probabilitati.
This notebook was created early in december 2014 (hence it is history). Meanwhile networkx evolved and some cells could display errors after running.
End of explanation |
12,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sparse and dense representations for text data
Before we can start training we need to prepare our input data in a way that our model will understand it.
Step1: Since we're dealing with text, we need to turn the characters into numbers in order to perform our calculations on them. We do this in two steps
Step2: Exercise: In this exercise we're going to use the functions that we just learned about to translate text into numeric input tensors.
A) A simple character encoder.
Using the examples above, write a simple encoder that takes the sentences
python
sents = ['Hello, world!', 'Bye bye.']
and returns both the encoded sentences.
Step3: B) Get sparse representation.
Create a one-hot encoded (sparse) representation of the sentences that we encoded above.
Step4: C) Get dense representation.
Same as the previous exercise, except now use an embedding matrix to create a dense representation of the sentences. | Python Code:
import tensorflow as tf
import numpy as np
import pandas as pd
%matplotlib inline
Explanation: Sparse and dense representations for text data
Before we can start training we need to prepare our input data in a way that our model will understand it.
End of explanation
from utils import SentenceEncoder
sents = ["Hello, world!", "Hi again!", "Bye bye now."]
encoder = SentenceEncoder(sents, batch_size=2)
for batch in encoder:
    seq = batch[0]
    print encoder.decode(seq)
    print seq
    print
Explanation: Since we're dealing with text, we need to turn the characters into numbers in order to perform our calculations on them. We do this in two steps: first we get the sparse (one-hot encoded) representation of each character and then we learn a dense representation (so-called embeddings) as part of our model training.
Sparse representation: one-hot encoding
Our sparse representation will consist of sparse vectors of dimension n_chars, which in our case is 129 (128 ascii chars + 1 end-of-sequence char). The feature vector for a single character will thus be of the form:
$\qquad x(\text{char})\ =\ (0, 0, 1, 0, \dots, 0)$
Or equivalently in components,
$\qquad x_i(\text{char})\ =\ \left{\begin{matrix}1&\text{if } i = h(\text{char})\0&\text{otherwise}\end{matrix}\right.$
where $h$ is a function that maps a character to an integer (e.g. a hash function). In our case, we use the build-in function ord:
python
In [1]: ord('H')
Out[1]: 72
As it turns out, we don't actually need to construct the vector $x(\text{char})$ as displayed above. If you think about it, the only information that we need about $x$ is which component is switched on. In other words, the only information we need is $h(\text{char})$, in our case ord(char). So, the most efficient representation for our sparse feature vectors (single integers) turns out to be incredibly simple. For instance, the sparse representation of the phrase "Hello, world!" is simply:
python
In [1]: x = [ord(char) for char in "Hello, world!"]
In [2]: x
Out[2]: [72, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100, 33]
Actually, we need to append an end-of-sequence (EOS) character to tell our model to stop generating more text. Let's set the index 0 aside for the EOS character, then we one-hot encode our phrase as follows:
python
In [1]: x = [ord(char) + 1 for char in "Hello, world!"] + [0]
In [2]: x
Out[2]: [73, 102, 109, 109, 112, 45, 33, 120, 112, 115, 109, 101, 34, 0]
To go from a list of indices to a one-hot encoded vector in Tensorflow is super easy using tf.one_hot:
```python
n_chars = 129
x_indices = tf.constant([73, 102, 109, 109, 112])
x_one_hot = tf.one_hot(x_indices, n_chars) # shape = (5, 129)
```
Dense representation: embeddings
If we only have a few input characters, we can use the one-hot encoded representation directly as our input. In reality, though, we know that text consists of a large number characters (in our case 129). In this case it's either infeasible or at best highly inefficient to use the sparse representation for our characters.
Moreover, the sparse representation has no notion of proximity between characters such as 'a' and 'A' or more subtly 'i' and 'y'.
A trick that we often use is to translate the high-dimensional sparse feature vectors to low-dimensional dense vectors. These dense vectors are called embeddings. Because the embeddings are low-dimensional, our model needs to learn far fewer weights. Of course, the model does need to learn the embeddings themselves, but this is a trade-off that does pay off. One of the interesting properties of embeddings is that the embedding for 'a' and 'A' are very similar, which means that the rest our network can focus on learning more abstract relations between characters.
Another point of view is that learning embeddings is kind of like having an automated pre-processing step included in the model. Pre-processing in such an end-to-end setting ensures optimal performance in the task that we're actually interested in.
An embedding matrix in Tensorflow must have the shape (n_chars, emd_dim), where n_chars is the number of characters (or tokens) and emb_dim is the dimensionality of the dense embedding vector space. We typically initialize the embedding matrix randomly, e.g.
python
n_chars = 129
emb_dim = 10
emb = tf.Variable(tf.random_uniform([n_chars, emb_dim]))
Then, in order to get the relevant embeddings we could use the one-hot encoded (sparse) representation x_one_hot (see above) as a mask:
python
x_dense = tf.matmul(x_one_hot, emb)
There's a more efficient way of doing this, though. For this we use Tensorflow's embedding lookup function:
python
x_dense = tf.nn.embedding_lookup(emb, x_indices)
The reason why this is more efficient is that avoid constructing x_one_hot explicitly (x_indices is enough).
In the training process, our model will learn an appropriate embedding matrix emb alongside the rest of the model parameters.
Below, we show a visual representation of the character embeddings as well as the mini-batched dense input tensor.
We have supplied a simple encoder in the utils module, which implements the procedure explained above (plus some more):
End of explanation
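To make the equivalence described above concrete, here is a tiny standalone check. It is my addition rather than part of the original notebook, and it uses the same TF1-style session API as the rest of this notebook: the masked matmul and the embedding lookup produce the same dense vectors.
# Sketch: one-hot masking and embedding_lookup give the same dense representation
tf.reset_default_graph()
n_chars, emb_dim = 129, 10
x_indices = tf.constant([73, 102, 109, 109, 112])            # "Hello" encoded as shown above
emb = tf.Variable(tf.random_uniform([n_chars, emb_dim]))
x_one_hot = tf.one_hot(x_indices, n_chars)
dense_via_matmul = tf.matmul(x_one_hot, emb)
dense_via_lookup = tf.nn.embedding_lookup(emb, x_indices)
with tf.Session() as s:
    s.run(tf.global_variables_initializer())
    a, b = s.run([dense_via_matmul, dense_via_lookup])
    print(np.allclose(a, b))   # expected: True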
# input sentences
sents = ['Hello, world!', 'Bye bye.']
# this is the expected output
out = [[ 73, 102, 109, 109, 112, 45, 33, 120, 112, 115, 109, 101, 34, 0],
[ 67, 122, 102, 33, 99, 122, 102, 47, 0, 0, 0, 0, 0, 0]]
def encode(sents):
'<your code here>'
print encode(sents)
np.testing.assert_array_equal(out, encode(sents))
# %load sol/ex_char_encoder.py
Explanation: Exercise: In this exercise we're going to use the functions that we just learned about to translate text into numeric input tensors.
A) A simple character encoder.
Using the examples above, write a simple encoder that takes the sentences
python
sents = ['Hello, world!', 'Bye bye.']
and returns both the encoded sentences.
End of explanation
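One possible solution sketch for exercise A, following the ord(char) + 1 scheme plus EOS padding described above. This is hedged: the official solution lives in sol/ex_char_encoder.py and may differ in details.
# Possible sketch (not the official solution): encode characters and pad with the EOS index 0
def encode(sents):
    encoded = [[ord(char) + 1 for char in sent] + [0] for sent in sents]
    max_len = max(len(e) for e in encoded)
    return [e + [0] * (max_len - len(e)) for e in encoded]
print(encode(['Hello, world!', 'Bye bye.']))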
# clear any previous computation graphs
tf.reset_default_graph()
# dimensions
n_chars = '<your code here>'
batch_size = '<your code here>'
max_seqlen = '<your code here>'
# input placeholder
sents_enc = '<your code here>'
# sparse representation
x_one_hot = '<your code here>'
# input
sents = ['Hello, world!', 'Bye bye.']
with tf.Session() as s:
'<your code here>'
# %load sol/ex_one_hot.py
Explanation: B) Get sparse representation.
Create a one-hot encoded (sparse) representation of the sentences that we encoded above.
End of explanation
# clear any previous computation graphs
tf.reset_default_graph()
# dimensions
n_chars = '<your code here>'
batch_size = '<your code here>'
emb_dim = '<your code here>'
max_seqlen = '<your code here>'
# input placeholder
sents_enc = '<your code here>'
# character embeddings
emb = '<your code here>'
# dense representation
x_dense = '<your code here>'
# input
sents = ['Hello, world!', 'Bye bye.']
with tf.Session() as s:
'<your code here>'
# %load sol/ex_embedding_lookup.py
Explanation: C) Get dense representation.
Same as the previous exercise, except now use an embedding matrix to create a dense representation of the sentences.
End of explanation |
12,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 8
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: In the previous chapter we developed a quadratic model of world
population growth from 1950 to 2016. It is a simple model, but it fits
the data well and the mechanisms it's based on are plausible.
In this chapter we'll use the quadratic model to generate projections of future growth, and compare our results to projections from actual
demographers.
Here's the code that downloads the data.
Step2: And here's the code that reads table2, which contains world populations estimates from the U.S. Census and U.N. DESA, among other organizations.
Step3: Generating Projections
Now let's run the quadratic model, extending the results until 2100, and see how our projections compare to the professionals'.
Here's the code we'll need from the previous chapter.
Step4: And here are the results.
Step5: According to the model, population growth will slow gradually after 2020, approaching 12.5 billion by 2100.
I am using the word "projection" deliberately, rather than
"prediction", with the following distinction
Step6: Some values are NaN, which indicates missing data, because some organizations did not publish projections for some years.
The column names are long strings; for convenience, I'll replace them with abbreviations.
Step8: The following function plots projections from the U.N. DESA and U.S. Census. It uses dropna to remove the NaN values from each series before plotting it.
Step9: Here are their projections compared to the results of the quadratic model.
Step10: The U.N. DESA expects the world population to reach 11 billion around 2100, and then level off.
Projections by U.S. Census are a little lower, and they only go until 2050.
Real demographers expect world population to grow more slowly than our model projects, probably because their models are broken down by region and country, where conditions are different, and they take into account expected economic development.
Nevertheless, their projections are qualitatively similar to ours, and
theirs differ from each other almost as much as they differ from ours.
So the results from this model, simple as it is, are not entirely unreasonable.
Summary
You might be interested in this video by Hans Rosling about the demographic changes we expect in this century.
Exercises
Exercise
Step11: The first element is NaN because we don't have the data for 1945, so we can't compute the first difference.
If we divide these differences by the populations, the result is an estimate of the growth rate during each year
Step12: The following function computes and plots the growth rates for the census and un estimates
Step13: And here's what it looks like.
Step14: Other than a bump around 1990, net growth rate has been declining roughly linearly since 1970.
We can model the decline by fitting a line to this data and extrapolating into the future.
Here's a function that takes a time stamp and computes a line that roughly fits the growth rates since 1970.
Step15: To see what it looks like, I'll create an array of time stamps from 1960 to 2020 and use alpha_func to compute the corresponding growth rates.
Step16: Here's what it looks like, compared to the data.
Step17: If you don't like the slope and intercept I chose, feel free to adjust them.
Now, as an exercise, you can use this function to make a projection of world population until 2100.
Create a System object that includes alpha_func as a system variable.
Define a growth function that uses alpha_func to compute the net growth rate at the given time t.
Run a simulation from 1960 to 2100 with your update function, and plot the results.
Compare your projections with those from the US Census and UN. | Python Code:
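As a starting point, here is a rough sketch of how the pieces of this exercise could fit together. It assumes the chapter's definitions (System, run_simulation, census, decorate) are available, and the slope and intercept values are illustrative guesses, not the book's exact numbers.
# Sketch (not the book's official solution); slope and intercept below are assumed values
def alpha_func(t):
    intercept = 0.02      # assumed net growth rate around 1970
    slope = -0.0002       # assumed roughly linear decline per year
    return intercept + slope * (t - 1970)
def growth_func_alpha(pop, t, system):
    return system.alpha_func(t) * pop
system2 = System(t_0=1960, p_0=census[1960], t_end=2100, alpha_func=alpha_func)
results2 = run_simulation(system2, growth_func_alpha)
results2.plot(label='alpha model')
decorate(xlabel='Year', ylabel='World population (billion)')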
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 8
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
import os
filename = 'World_population_estimates.html'
if not os.path.exists(filename):
    !wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/World_population_estimates.html
Explanation: In the previous chapter we developed a quadratic model of world
population growth from 1950 to 2016. It is a simple model, but it fits
the data well and the mechanisms it's based on are plausible.
In this chapter we'll use the quadratic model to generate projections of future growth, and compare our results to projections from actual
demographers.
Here's the code that downloads the data.
End of explanation
from pandas import read_html
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
Explanation: And here's the code that reads table2, which contains world populations estimates from the U.S. Census and U.N. DESA, among other organizations.
End of explanation
from modsim import TimeSeries
def run_simulation(system, growth_func):
results = TimeSeries()
results[system.t_0] = system.p_0
for t in range(system.t_0, system.t_end):
growth = growth_func(results[t], t, system)
results[t+1] = results[t] + growth
return results
def growth_func_quad(pop, t, system):
return system.alpha * pop + system.beta * pop**2
census = table2.census / 1e9
un = table2.un / 1e9
from modsim import System
t_0 = census.index[0]
p_0 = census[t_0]
system = System(t_0=t_0,
p_0=p_0,
t_end=2100)
system.alpha = 25 / 1000
system.beta = -1.8 / 1000
results = run_simulation(system, growth_func_quad)
from modsim import show
show(results.tail())
Explanation: Generating Projections
Now let's run the quadratic model, extending the results until 2100, and see how our projections compare to the professionals'.
Here's the code we'll need from the previous chapter.
End of explanation
from modsim import decorate
results.plot(color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Quadratic Model Projection')
Explanation: And here are the results.
End of explanation
table3 = tables[3]
table3.head()
Explanation: According to the model, population growth will slow gradually after 2020, approaching 12.5 billion by 2100.
I am using the word "projection" deliberately, rather than
"prediction", with the following distinction: "prediction" implies
something like "this is what we should reasonably expect to happen, at
least approximately"; "projection" implies something like "if this
model is actually a good description of what is happening in this
system, and if nothing in the future causes the parameters of the model to change, this is what would happen."
Using "projection" leaves open the possibility that there are important things in the real world that are not captured in the model. It also suggests that, even if the model is good, the parameters we estimate based on the past might be different in the future.
The quadratic model we've been working with is based on the assumption
that population growth is limited by the availability of resources; in
that scenario, as the population approaches carrying capacity, birth
rates fall and death rates rise because resources become scarce.
If that assumption is valid, we might be able to use actual population
growth to estimate carrying capacity, especially if we observe the
transition into the regime where the growth rate starts to fall.
But in the case of world population growth, those conditions don't
apply. Over the last 50 years, the net growth rate has leveled off, but not yet started to fall, so we don't have enough data to make a credible estimate of carrying capacity. And resource limitations are probably not the primary reason growth has slowed. As evidence, consider:
First, the death rate is not increasing; rather, it has declined
from 1.9% in 1950 to 0.8% now (see http://modsimpy.com/mortality).
So the decrease in net growth is due entirely to declining birth
rates.
Second, the relationship between resources and birth rate is the
opposite of what the model assumes; as nations develop and people
become more wealthy, birth rates tend to fall.
We should not take too seriously the idea that this model can estimate
carrying capacity. But the predictions of a model can be credible even
if the assumptions of the model are not strictly true. For example,
population growth might behave as if it is resource limited, even if
the actual mechanism is something else.
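As a quick illustration (an added sketch, not part of the original text), the carrying capacity implied by the quadratic model is the population at which net growth reaches zero, which is -alpha/beta:
# implied carrying capacity of the quadratic model, using the parameters
# fitted earlier in this chapter (alpha = 0.025, beta = -0.0018)
K = -system.alpha / system.beta
K  # roughly 13.9 billion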
In fact, demographers who study population growth often use models
similar to ours. In the next section, we'll compare our projections to
theirs.
Projections
From the same page where we got the past population estimates, we'll read table3, which contains predictions for population growth over the next 50-100 years, generated by the U.S. Census, U.N. DESA, and the Population Reference Bureau.
End of explanation
table3.columns = ['census', 'prb', 'un']
Explanation: Some values are NaN, which indicates missing data, because some organizations did not publish projections for some years.
The column names are long strings; for convenience, I'll replace them with abbreviations.
End of explanation
def plot_projections(table):
"""Plot world population projections.
table: DataFrame with columns 'un' and 'census'
"""
census_proj = table.census.dropna() / 1e9
un_proj = table.un.dropna() / 1e9
census_proj.plot(style=':', label='US Census')
un_proj.plot(style='--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
Explanation: The following function plots projections from the U.N. DESA and U.S. Census. It uses dropna to remove the NaN values from each series before plotting it.
End of explanation
plot_projections(table3)
results.plot(color='gray', label='model')
decorate(title='Quadratic Model Projection')
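# Optional numeric check (sketch, not in the original notebook): compare the model's
# 2100 value with the last available UN DESA projection (table3 is in persons).
print('model 2100:', results[2100])
print('last UN projection:', table3.un.dropna().iloc[-1] / 1e9)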
Explanation: Here are their projections compared to the results of the quadratic model.
End of explanation
diff = census.diff()
diff.head()
Explanation: The U.N. DESA expects the world population to reach 11 billion around 2100, and then level off.
Projections by U.S. Census are a little lower, and they only go until 2050.
Real demographers expect world population to grow more slowly than our model projects, probably because their models are broken down by region and country, where conditions are different, and they take into account expected economic development.
Nevertheless, their projections are qualitatively similar to ours, and
theirs differ from each other almost as much as they differ from ours.
So the results from this model, simple as it is, are not entirely unreasonable.
Summary
You might be interested in this video by Hans Rosling about the demographic changes we expect in this century.
Exercises
Exercise: The net growth rate of world population has been declining for several decades. That observation suggests one more way to generate more realistic projections, by extrapolating observed changes in growth rate.
To compute past growth rates, we'll use a function called diff, which computes the difference between successive elements in a Series. For example, here are the changes from one year to the next in census:
End of explanation
alpha = census.diff() / census
alpha.head()
Explanation: The first element is NaN because we don't have the data for 1945, so we can't compute the first difference.
If we divide these differences by the populations, the result is an estimate of the growth rate during each year:
End of explanation
def plot_alpha():
alpha_census = census.diff() / census
alpha_census.plot(style='.', label='US Census')
alpha_un = un.diff() / un
alpha_un.plot(style='.', label='UN DESA')
decorate(xlabel='Year', ylabel='Net growth rate')
Explanation: The following function computes and plots the growth rates for the census and un estimates:
End of explanation
plot_alpha()
Explanation: And here's what it looks like.
End of explanation
def alpha_func(t):
intercept = 0.02
slope = -0.00021
return intercept + slope * (t - 1970)
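# Optional sketch (added, not from the original notebook): fit the slope and intercept
# to the post-1970 growth rates with scipy instead of choosing them by eye.
from scipy.stats import linregress
alpha_recent = (census.diff() / census).loc[1970:]
fit = linregress(alpha_recent.index, alpha_recent.values)
# fit.slope and fit.intercept could then replace the hand-picked values above
# (note that fit.intercept is referenced to year 0, not 1970).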
Explanation: Other than a bump around 1990, net growth rate has been declining roughly linearly since 1970.
We can model the decline by fitting a line to this data and extrapolating into the future.
Here's a function that takes a time stamp and computes a line that roughly fits the growth rates since 1970.
End of explanation
from numpy import linspace
t_array = linspace(1960, 2020, 5)
alpha_array = alpha_func(t_array)
Explanation: To see what it looks like, I'll create an array of time stamps from 1960 to 2020 and use alpha_func to compute the corresponding growth rates.
End of explanation
from matplotlib.pyplot import plot
plot_alpha()
plot(t_array, alpha_array, color='gray')
Explanation: Here's what it looks like, compared to the data.
End of explanation
# Solution
t_0 = 1960
t_end = 2100
p_0 = census[t_0]
# Solution
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha_func=alpha_func)
# Solution
def growth_func_alpha(pop, t, system):
return system.alpha_func(t) * pop
# Solution
growth_func_alpha(p_0, t_0, system)
# Solution
results2 = run_simulation(system, growth_func_alpha);
# Solution
plot_projections(table3)
results2.plot(color='gray', label='model')
decorate(title='Proportional model, linearly decreasing rate')
# Solution
# If the net growth rate continues to decrease linearly,
# world population will peak around 2065 at about 9.8 billion,
# and then start to decline.
# Solution
results.idxmax(), results.max()
Explanation: If you don't like the slope and intercept I chose, feel free to adjust them.
Now, as an exercise, you can use this function to make a projection of world population until 2100.
Create a System object that includes alpha_func as a system variable.
Define a growth function that uses alpha_func to compute the net growth rate at the given time t.
Run a simulation from 1960 to 2100 with your update function, and plot the results.
Compare your projections with those from the US Census and UN.
End of explanation |
12,185 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
We will describe the model in three parts:
1) Photo Feature Extractor. This is a 16-layer VGG model pre-trained on the ImageNet dataset. We have pre-processed the photos with the VGG model (without the output layer) and will use the extracted features predicted by this model as input.
2) Sequence Processor. This is a word embedding layer for handling the text input, followed by a Long Short-Term Memory (LSTM) recurrent neural network layer.
3) Decoder (for lack of a better name). Both the feature extractor and sequence processor output a fixed-length vector. These are merged together and processed by a Dense layer to make a final prediction.
| Python Code:
# imports assumed by this snippet (standalone Keras API; adjust to tensorflow.keras if needed)
from keras.models import Model
from keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from keras.utils import plot_model
# define the captioning model
def define_model(vocab_size, max_length):
# feature extractor model
inputs1 = Input(shape=(4096,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
# sequence model
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
# decoder model
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
# tie it together [image, seq] [word]
model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# summarize model
print(model.summary())
plot_model(model, to_file='model.png', show_shapes=True)
return model
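# Example usage (sketch; vocab_size and max_length come from your tokenizer and the
# longest caption in the training set, so the values below are only placeholders):
caption_model = define_model(vocab_size=7579, max_length=34)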
|
12,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
Step1: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
Step2: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
Step3: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
Step4: Inline question 1 | Python Code:
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
End of explanation
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
Explanation: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
End of explanation
from cs231n.features import *
num_color_bins = 25 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
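# Optional sketch (not part of the original assignment): to verify that HOG and the
# color histogram are complementary, build single-feature matrices and train the
# same SVM on each one alone, then compare validation accuracies.
X_train_hog_only = extract_features(X_train, [hog_feature])
X_train_color_only = extract_features(X_train, [lambda img: color_histogram_hsv(img, nbin=num_color_bins)])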
Explanation: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
End of explanation
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9,1e-8, 1e-7, 5e-7,1e-6] #[1e-9, 1e-8, 1e-7]
regularization_strengths = [1e3,5e3,1e4,5e4,1e5,5e5,1e6,5e6,1e7,5e7,1e8] #[1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for each_learning_rate in learning_rates:
for each_regularization_strengths in regularization_strengths:
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=each_learning_rate,
reg=each_regularization_strengths,
num_iters=2500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
training_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
validation_accuracy = np.mean(y_val == y_val_pred)
results[(each_learning_rate,each_regularization_strengths)]=(training_accuracy,validation_accuracy)
if best_val < validation_accuracy:
best_val = validation_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print test_accuracy
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
Explanation: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
End of explanation
print X_train_feats.shape
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
best_val = -1
for reg in [1e-5,1e-3,1e-2,1e-1]:
for learning_rate in [5e-2,5e-1,1,2]:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=2500, batch_size=200,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=reg, verbose=False)
print "."
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
if best_val < val_acc:
best_val = val_acc
best_net = net
print "best till now ",best_val
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print test_acc
Explanation: Inline question 1:
Describe the misclassification results that you see. Do they make sense?
Neural Network on image features
Earlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
End of explanation |
12,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem statement
The Stokes problem is a classical example of a mixed problem.
Initialize
Step1: Geometry and mesh generation
Step2: Assembly
Step3: Next we create assemblers for the elements. We can give different elements for the solution vector and the test function. In this case we form the blocks $A$ and $B$ separately. | Python Code:
import sys
sys.path.append('../')
import numpy as np
import matplotlib.pyplot as plt
from spfem.geometry import GeometryMeshPyTriangle
%matplotlib inline
Explanation: Problem statement
The Stokes problem is a classical example of a mixed problem.
Initialize
End of explanation
g = GeometryMeshPyTriangle(np.array([(0, 0), (1, 0), (1, 0.2), (2, 0.4), (2, 0.6), (1, 0.8), (1, 1), (0, 1)]))
m = g.mesh(0.03)
m.draw()
m.show()
Explanation: Geometry and mesh generation
End of explanation
from spfem.element import ElementTriP1, ElementTriP2, ElementH1Vec
from spfem.assembly import AssemblerElement
Explanation: Assembly
End of explanation
a = AssemblerElement(m, ElementH1Vec(ElementTriP2()))
b = AssemblerElement(m, ElementH1Vec(ElementTriP2()), ElementTriP1())
c = AssemblerElement(m, ElementTriP1())
def stokes_bilinear_a(du, dv):
def inner_product(a, b):
return a[0][0]*b[0][0] +\
a[0][1]*b[0][1] +\
a[1][0]*b[1][0] +\
a[1][1]*b[1][1]
def eps(dw): # symmetric part of the velocity gradient
import copy
dW = copy.deepcopy(dw)
dW[0][1] = .5*(dw[0][1] + dw[1][0])
dW[1][0] = dW[0][1]
return dW
return inner_product(eps(du), eps(dv))
A = a.iasm(stokes_bilinear_a) # iasm takes a function handle defining the weak form
def stokes_bilinear_b(du, v):
return (du[0][0]+du[1][1])*v
B = b.iasm(stokes_bilinear_b)
from spfem.utils import stack
from scipy.sparse import csr_matrix
eps = 1e-3
C = c.iasm(lambda u, v: u*v)
K = stack(np.array([[A, B.T], [B, -eps*C]])).tocsr()
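# Explanatory sketch (added): K has the classic saddle-point block structure of the
# Stokes problem,
#   [ A    B^T ] [u]   [f]
#   [ B  -eps*C] [p] = [0]
# where the small -eps*C block acts as a pressure stabilization/regularization term.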
from spfem.utils import direct
import copy
x = np.zeros(K.shape[0])
f = copy.deepcopy(x)
# find DOF sets
dirichlet_dofs, _ = a.find_dofs(lambda x, y: x >= 1.0)
inflow_dofs, inflow_locs = a.find_dofs(lambda x, y: x == 2.0, dofrows=[0])
# set inflow condition and solve with direct method
def inflow_profile(y):
return (y-0.4)*(y-0.6)
x[inflow_dofs] = inflow_profile(inflow_locs[1, :])
I = np.setdiff1d(np.arange(K.shape[0]), dirichlet_dofs)
x = direct(K, f, x=x, I=I)
m.plot(x[np.arange(C.shape[0]) + A.shape[0]])
m.plot(np.sqrt(x[a.dofnum_u.n_dof[0, :]]**2+x[a.dofnum_u.n_dof[0, :]]**2), smooth=True)
plt.figure()
plt.quiver(m.p[0, :], m.p[1, :], x[a.dofnum_u.n_dof[0, :]], x[a.dofnum_u.n_dof[1, :]])
m.show()
Explanation: Next we create assemblers for the elements. We can give different elements for the solution vector and the test function. In this case we form the blocks $A$ and $B$ separately.
End of explanation |
12,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Connecting Spectra to Mocks
The purpose of this notebook is to demonstrate how to generate spectra and apply target selection cuts for various mock catalogs and target types. Here we generate spectra for targets in a single healpixel with no constraints on the target density (relative to the expected target density) or contaminants.
For code to generate large numbers of spectra over significant patches of sky and to create a representative DESI dataset (with parallelism), see desitarget/bin/select_mock_targets (as well as its MPI-parallelized cousin, desitarget/bin/select_mock_targets) and desitarget.mock.build.targets_truth.
Finally, note that the various python Classes instantiated here (documented in desitarget.mock.mockmaker) are easily extensible to other mock catalogs and galaxy/QSO/stellar physics. Please contact @desi-data if you have specific suggestions, requirements, or desired features.
John Moustakas
Siena College
2018 September
Step1: To keep the calculations below manageable we specify a single nside=64 healpixel in an arbitrary location of the DESI footprint.
Step2: Specifying the random seed makes our calculations reproducible.
Step4: Define a couple wrapper routines we will use below several times.
Step5: Tracer QSOs
Both tracer and Lya QSO spectra contain an underlying QSO spectrum, but the Lya QSOs (which we demonstrate below) also include the Lya forest (here, based on the v2.0 of the "London" mocks).
Every target class has its own dedicated "Maker" class.
Step6: The various read methods return a dictionary with (hopefully self-explanatory) target- and mock-specific quantities.
Because most mock catalogs only come with (cosmologically accurate) 3D positions (RA, Dec, redshift), we use Gaussian mixture models trained on real data to assign other quantities like shapes, magnitudes, and colors, depending on the target class. For more details see the gmm-dr7.pynb Python notebook.
Step7: Now we can generate the spectra as well as the targeting catalogs (targets) and corresponding truth table.
Step8: The truth catalog contains the target-type-agnostic, known properties of each object (including the noiseless photometry), while the objtruth catalog contains different information depending on the type of target.
Step9: Next, let's run target selection, after which point the targets catalog should look just like an imaging targeting catalog (here, using the DR7 data model).
Step10: And indeed, we can see that only a subset of the QSOs were identified as targets (the rest scattered out of the QSO color selection boxes).
Step11: Finally, let's plot some example spectra.
Step12: Generating QSO spectra with cosmological Lya skewers proceeds along similar lines.
Here, we also include BALs with 25% probability.
Step13: Let's plot together some of the spectra with the old and new continuum models
Step14: And finally we compare the colors for the two runs, with the new and old continuum models
Step15: Conclusion
Step16: Demonstrate the other extragalactic target classes
Step17: LRGs
Step18: ELGs
Step19: BGS
Step20: Next, demonstrate how to generate spectra of stars...
MWS_MAIN
Step21: MWS_NEARBY
Step22: White dwarfs (WDs)
Step23: Finally demonstrate how to generate (empyt) SKY spectra. | Python Code:
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
from desiutil.log import get_logger, DEBUG
log = get_logger()
import seaborn as sns
sns.set(style='white', font_scale=1.1, palette='Set2')
%matplotlib inline
Explanation: Connecting Spectra to Mocks
The purpose of this notebook is to demonstrate how to generate spectra and apply target selection cuts for various mock catalogs and target types. Here we generate spectra for targets in a single healpixel with no constraints on the target density (relative to the expected target density) or contaminants.
For code to generate large numbers of spectra over significant patches of sky and to create a representative DESI dataset (with parallelism), see desitarget/bin/select_mock_targets (as well as its MPI-parallelized cousin, desitarget/bin/select_mock_targets) and desitarget.mock.build.targets_truth.
Finally, note that the various python Classes instantiated here (documented in desitarget.mock.mockmaker) are easily extensible to other mock catalogs and galaxy/QSO/stellar physics. Please contact @desi-data if you have specific suggestions, requirements, or desired features.
John Moustakas
Siena College
2018 September
End of explanation
healpixel = 26030
nside = 64
Explanation: To keep the calculations below manageable we specify a single nside=64 healpixel in an arbitrary location of the DESI footprint.
End of explanation
seed = 555
rand = np.random.RandomState(seed)
Explanation: Specifying the random seed makes our calculations reproducible.
End of explanation
def plot_subset(wave, flux, truth, objtruth, nplot=16, ncol=4, these=None,
xlim=None, loc='right', targname='', objtype=''):
Plot a random sampling of spectra.
nspec, npix = flux.shape
if nspec < nplot:
nplot = nspec
nrow = np.ceil(nplot / ncol).astype('int')
if loc == 'left':
xtxt, ytxt, ha = 0.05, 0.93, 'left'
else:
xtxt, ytxt, ha = 0.93, 0.93, 'right'
if these is None:
these = rand.choice(nspec, nplot, replace=False)
these = np.sort(these)
ww = (wave > 5500) * (wave < 5550)
fig, ax = plt.subplots(nrow, ncol, figsize=(2.5*ncol, 2*nrow), sharey=False, sharex=True)
for thisax, indx in zip(ax.flat, these):
thisax.plot(wave, flux[indx, :] / np.median(flux[indx, ww]))
if objtype == 'STAR' or objtype == 'WD':
thisax.text(xtxt, ytxt, r'$T_{{eff}}$={:.0f} K'.format(objtruth['TEFF'][indx]),
ha=ha, va='top', transform=thisax.transAxes, fontsize=13)
else:
thisax.text(xtxt, ytxt, 'z={:.3f}'.format(truth['TRUEZ'][indx]),
ha=ha, va='top', transform=thisax.transAxes, fontsize=13)
thisax.xaxis.set_major_locator(plt.MaxNLocator(3))
if xlim:
thisax.set_xlim(xlim)
for thisax in ax.flat:
thisax.yaxis.set_ticks([])
thisax.margins(0.2)
fig.suptitle(targname)
fig.subplots_adjust(wspace=0.05, hspace=0.05, top=0.93)
Explanation: Define a couple wrapper routines we will use below several times.
End of explanation
from desitarget.mock.mockmaker import QSOMaker
QSO = QSOMaker(seed=seed)
Explanation: Tracer QSOs
Both tracer and Lya QSO spectra contain an underlying QSO spectrum, but the Lya QSOs (which we demonstrate below) also include the Lya forest (here, based on the v2.0 of the "London" mocks).
Every target class has its own dedicated "Maker" class.
End of explanation
dir(QSOMaker)
data = QSO.read(healpixels=healpixel, nside=nside)
for key in sorted(list(data.keys())):
print('{:>20}'.format(key))
Explanation: The various read methods return a dictionary with (hopefully self-explanatory) target- and mock-specific quantities.
Because most mock catalogs only come with (cosmologically accurate) 3D positions (RA, Dec, redshift), we use Gaussian mixture models trained on real data to assign other quantities like shapes, magnitudes, and colors, depending on the target class. For more details see the gmm-dr7.pynb Python notebook.
End of explanation
%time flux, wave, targets, truth, objtruth = QSO.make_spectra(data)
print(flux.shape, wave.shape)
Explanation: Now we can generate the spectra as well as the targeting catalogs (targets) and corresponding truth table.
End of explanation
truth
objtruth
Explanation: The truth catalog contains the target-type-agnostic, known properties of each object (including the noiseless photometry), while the objtruth catalog contains different information depending on the type of target.
End of explanation
QSO.select_targets(targets, truth)
targets
Explanation: Next, let's run target selection, after which point the targets catalog should look just like an imaging targeting catalog (here, using the DR7 data model).
End of explanation
from desitarget.targetmask import desi_mask
isqso = (targets['DESI_TARGET'] & desi_mask.QSO) != 0
print('Identified {} / {} QSO targets.'.format(np.count_nonzero(isqso), len(targets)))
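# The same boolean mask can be used to pull out just the selected targets (sketch):
qso_targets = targets[isqso]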
Explanation: And indeed, we can see that only a subset of the QSOs were identified as targets (the rest scattered out of the QSO color selection boxes).
End of explanation
plot_subset(wave, flux, truth, objtruth, targname='QSO')
Explanation: Finally, let's plot some example spectra.
End of explanation
from desitarget.mock.mockmaker import LYAMaker
mockfile='/project/projectdirs/desi/mocks/lya_forest/london/v9.0/v9.0.0/master.fits'
LYA = LYAMaker(seed=seed, balprob=0.25)
lyadata = LYA.read(mockfile=mockfile,healpixels=healpixel, nside=nside)
%time lyaflux, lyawave, lyatargets, lyatruth, lyaobjtruth = LYA.make_spectra(lyadata)
lyaobjtruth
plot_subset(lyawave, lyaflux, lyatruth, lyaobjtruth, xlim=(3500, 5500), targname='LYA')
#Now lets generate the same spectra but including the different features and the new continum model.
#For this we need to reload the desitarget module, for some reason it seems not be enough with defining a diferen variable for the LYAMaker
del sys.modules['desitarget.mock.mockmaker']
from desitarget.mock.mockmaker import LYAMaker
LYA = LYAMaker(seed=seed,sqmodel='lya_simqso_model_develop',balprob=0.25)
lyadata_continum = LYA.read(mockfile=mockfile,healpixels=healpixel, nside=nside)
%time lyaflux_cont, lyawave_cont, lyatargets_cont, lyatruth_cont, lyaobjtruth_cont = LYA.make_spectra(lyadata_continum)
Explanation: Generating QSO spectra with cosmological Lya skewers proceeds along similar lines.
Here, we also include BALs with 25% probability.
End of explanation
plt.figure(figsize=(20, 10))
indx=rand.choice(len(lyaflux),9)
for i in range(9):
plt.subplot(3, 3, i+1)
plt.plot(lyawave,lyaflux[indx[i]],label="Old Continum")
plt.plot(lyawave_cont,lyaflux_cont[indx[i]],label="New Continum")
plt.legend()
Explanation: Let's plot together some of the spectra with the old and new continuum models
End of explanation
plt.plot(lyatruth["FLUX_W1"],lyatruth_cont["FLUX_W1"]/lyatruth["FLUX_W1"]-1,'.')
plt.xlabel("FLUX_W1")
plt.ylabel(r"FLUX_W1$^{new}$/FLUX_W1-1")
plt.plot(lyatruth["FLUX_W2"],lyatruth_cont["FLUX_W2"]/lyatruth["FLUX_W2"]-1,'.')
plt.xlabel("FLUX_W2")
plt.ylabel(r"(FLUX_W2$^{new}$/FLUX_W2)-1")
plt.hist(lyatruth["FLUX_W1"],bins=100,label="Old Continum",alpha=0.7)
plt.hist(lyatruth_cont["FLUX_W1"],bins=100,label="New Continum",histtype='step',linestyle='--')
plt.xlim(0,100) #Limiting to 100 to see it better.
plt.xlabel("FLUX_W1")
plt.legend()
plt.hist(lyatruth["FLUX_W2"],bins=100,label="Old Continum",alpha=0.7)
plt.hist(lyatruth_cont["FLUX_W2"],bins=100,label="New Continum",histtype='step',linestyle='--')
plt.xlim(0,100) #Limiting to 100 to see it better.
plt.xlabel("FLUX_W2")
plt.legend()
Explanation: And finally we compare the colors for the two runs, with the new and old continuum models
End of explanation
del sys.modules['desitarget.mock.mockmaker']
from desitarget.mock.mockmaker import LYAMaker  # re-imported to reload desitarget; creating a different LYAMaker variable alone is not enough
LYA = LYAMaker(seed=seed,sqmodel='lya_simqso_model',balprob=0.25,add_dla=True,add_metals="all",add_lyb=True)
lyadata_all= LYA.read(mockfile=mockfile,healpixels=healpixel, nside=nside)
%time lyaflux_all, lyawave_all, lyatargets_all, lyatruth_all, lyaobjtruth_all = LYA.make_spectra(lyadata_all)
plot_subset(lyawave_all, lyaflux_all, lyatruth_all, lyaobjtruth_all, xlim=(3500, 5500), targname='LYA')
Explanation: Conclusion: Colors are slightly affected by changing the continuum model.
To Finalize the LYA section, lets generate another set of spectra now including DLAs, metals, LYB, etc.
End of explanation
def demo_mockmaker(Maker, seed=None, nrand=16, loc='right'):
TARGET = Maker(seed=seed)
log.info('Reading the mock catalog for {}s'.format(TARGET.objtype))
tdata = TARGET.read(healpixels=healpixel, nside=nside)
log.info('Generating {} random spectra.'.format(nrand))
indx = rand.choice(len(tdata['RA']), np.min( (nrand, len(tdata['RA'])) ) )
tflux, twave, ttargets, ttruth, tobjtruth = TARGET.make_spectra(tdata, indx=indx)
log.info('Selecting targets')
TARGET.select_targets(ttargets, ttruth)
plot_subset(twave, tflux, ttruth, tobjtruth, loc=loc,
targname=tdata['TARGET_NAME'], objtype=TARGET.objtype)
Explanation: Demonstrate the other extragalactic target classes: LRG, ELG, and BGS.
For simplicity let's write a little wrapper script that does all the key steps.
End of explanation
from desitarget.mock.mockmaker import LRGMaker
%time demo_mockmaker(LRGMaker, seed=seed, loc='left')
Explanation: LRGs
End of explanation
from desitarget.mock.mockmaker import ELGMaker
%time demo_mockmaker(ELGMaker, seed=seed, loc='left')
Explanation: ELGs
End of explanation
from desitarget.mock.mockmaker import BGSMaker
%time demo_mockmaker(BGSMaker, seed=seed)
Explanation: BGS
End of explanation
from desitarget.mock.mockmaker import MWS_MAINMaker
%time demo_mockmaker(MWS_MAINMaker, seed=seed, loc='left')
Explanation: Next, demonstrate how to generate spectra of stars...
MWS_MAIN
End of explanation
from desitarget.mock.mockmaker import MWS_NEARBYMaker
%time demo_mockmaker(MWS_NEARBYMaker, seed=seed, loc='left')
Explanation: MWS_NEARBY
End of explanation
from desitarget.mock.mockmaker import WDMaker
%time demo_mockmaker(WDMaker, seed=seed, loc='right')
Explanation: White dwarfs (WDs)
End of explanation
from desitarget.mock.mockmaker import SKYMaker
SKY = SKYMaker(seed=seed)
skydata = SKY.read(healpixels=healpixel, nside=nside)
skyflux, skywave, skytargets, skytruth, objtruth = SKY.make_spectra(skydata)
SKY.select_targets(skytargets, skytruth)
Explanation: Finally demonstrate how to generate (empty) SKY spectra.
End of explanation |
12,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing for data analysis
In a data analysis context, we want to test our code, as usual, but also our data (i.e., expected schema; e.g., data types) and our statistics (i.e., expected properties of distributions; e.g., value ranges). We focus on a defensive programming approach, by running expectation checks.
Step1: Testing code
As far as code is concerned (when we implement operations to transform data), please refer to the lesson on testing, debugging, and profiling.
In the first notebook, we came across pd.testing.assert_frame_equal(); be aware that pd.testing.assert_series_equal() and pd.testing.assert_index_equal() are also available.
Step2: Testing data
Step3: Testing statistics
Step4: When datasets are large, it might be difficult to carry out exact tests (for example, using pd.testing.assert_series_equal()). It might then be reasonable to test for properties of a series, rather than element-wise equality.
Step5: Make use of visual checks too
Step6: Handling missing data
Some data are missing, either because they exist but were not collected or because they never existed. How can we detect missing data (null values)?
Step7: When summing data, null (missing) values are treated as zero. | Python Code:
import pandas as pd
df = pd.read_csv('../data/tidy_who.csv')
df.sample(5)
Explanation: Testing for data analysis
In a data analysis context, we want to test our code, as usual, but also our data (i.e., expected schema; e.g., data types) and our statistics (i.e., expected properties of distributions; e.g., value ranges). We focus on a defensive programming approach, by running expectation checks.
End of explanation
pd.testing.assert_index_equal(df.index, df.index)
Explanation: Testing code
As far as code is concerned (when we implement operations to transform data), please refer to the lesson on testing, debugging, and profiling.
In the first notebook, we came across pd.testing.assert_frame_equal(); be aware that pd.testing.assert_series_equal() and pd.testing.assert_index_equal() are also available.
End of explanation
df['year'].dtype
assert df['year'].dtype == 'int'
df['sex'].dtype
assert df['sex'].dtype == 'object'
Explanation: Testing data
End of explanation
assert df['year'].max() <= 2017
assert df['cases'].min() == 0
Explanation: Testing statistics
End of explanation
df['cases'].describe()
Explanation: When datasets are large, it might be difficult to carry out exact tests (for example, using pd.testing.assert_series_equal()). It might then be reasonable to test for properties of a series, rather than element-wise equality.
End of explanation
assert df['sex'].nunique() > 1
Explanation: Make use of visual checks too: For example, it is generally a lot more straightforward to spot outliers if you plot your data!
End of explanation
df_sub = df[(df.country == 'Greece') & (df.year > 2014) & (df.age_range == 65)]
df_sub
df_sub['cases'].isnull()
df_sub['cases'].notnull()
df_sub['cases'].isnull().value_counts()
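# A defensive check in the same spirit as the earlier assertions (sketch; the
# acceptable fraction of missing values is dataset- and use-case-specific):
missing_fraction = df['cases'].isnull().mean()
assert missing_fraction < 1.0  # tighten this threshold to whatever your analysis requires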
Explanation: Handling missing data
Some data are missing, either because they exist but were not collected or because they never existed. How can we detect missing data (null values)?
End of explanation
df_sub['cases'].sum()
df_sub.fillna('NA')
df_sub['cases'].fillna('0')
df_sub.dropna()
Explanation: When summing data, null (missing) values are treated as zero.
End of explanation |
12,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prediction Failed Movies
Loading the dataset
Step1: Feature Generation
Generating some additional basic features
Step2: The number of null values per column
Step3: Keeping all genre dummy variables
Step4: Dropping non-feature columns
Step5: Dropping all rows that still have null values
Step6: Now, making sure we have no null values
Step7: We end up with a dataset of size
Step8: Prediction
Step9: Predicting failed movies
We define a failed movie as a movie whose ROI (Return On Investment) is below zero, meaning the investors actually lost money making it
Step10: Running logistic regression over 5 folds of our dataset | Python Code:
import os
import pandas as pd
import sklearn as skl
import holcrawl.shared
dataset_dir = holcrawl.shared._get_dataset_dir_path()
dataset_path = os.path.join(dataset_dir, 'movies_dataset.csv')
df = pd.read_csv(dataset_path)
Explanation: Predicting Failed Movies
Loading the dataset
End of explanation
df['ROI'] = (df['gross_income'] - df['budget']) / df['budget']
df['name_length'] = df['name'].map(lambda name: len(name))
len(df)
Explanation: Feature Generation
Generating some additional basic features:
End of explanation
df.isnull().sum()
BASE_FEAT_TO_KEEP = [
'duration', 'budget', 'opening_month', 'opening_day', 'opening_day_of_year', 'year',
'avg_mc_critic_by_opening', 'num_mc_critic_by_opening', 'name_length', 'opening_weekend_income',
'num_imdb_user_by_opening', 'avg_imdb_user_by_opening', 'opening_weekend_screens'# 'avg_mc_user_by_opening'
]
Explanation: The number of null values per column:
End of explanation
FEAT_TO_KEEP = BASE_FEAT_TO_KEEP + [col for col in df.columns if 'genres' in col]
features = df.drop([col for col in df.columns if col not in BASE_FEAT_TO_KEEP], axis=1)
Explanation: Keeping all genre dummy variables:
End of explanation
dataset = df.drop([col for col in df.columns if col not in FEAT_TO_KEEP], axis=1)
Explanation: Dropping non-feature columns:
End of explanation
dataset = dataset.dropna(axis=0)
Explanation: Dropping all rows that still have null values:
End of explanation
dataset.isnull().sum().sum()
Explanation: Now, making sure we have no null values:
End of explanation
len(dataset)
Explanation: We end up with a dataset of size:
End of explanation
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import cross_val_score
Explanation: Prediction
End of explanation
failed = df['ROI'].loc[dataset.index] < 0  # .loc replaces the deprecated .ix indexer
X = dataset
Y = failed
Explanation: Predicting failed movies
We define a failed movie as a movie whose ROI (Return On Investment) is below zero, meaning the investors actually lost money making it:
End of explanation
logreg = linear_model.LogisticRegression()
acc_scores = cross_val_score(logreg, X, Y, cv=5, n_jobs=1)
mean_accuracy = np.mean(acc_scores)
accuracy_std = np.std(acc_scores)
print("Accuracy is {:.2f}% ± {:.2f}%.".format(mean_accuracy*100, accuracy_std*100))
recall_scores = cross_val_score(logreg, X, Y, cv=5, n_jobs=1, scoring='recall')
mean_recall = np.mean(recall_scores)
recall_std = np.std(recall_scores)
print("Recall = {:.2f}% ± {:.2f}".format(mean_recall*100, recall_std*100))
Explanation: Running logistic regression over 5 folds of our dataset:
End of explanation |
12,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Python to Access NCEI Archived NEXRAD Level 2 Data
This notebook shows how to access the THREDDS Data Server (TDS) instance that is serving up archived NEXRAD Level 2 data hosted on Amazon S3. The TDS provides a mechanism to query for available data files, as well as provides access to the data as native volume files, through OPeNDAP, and using its own CDMRemote protocol. Since we're using Python, we can take advantage of Unidata's Siphon package, which provides an easy API for talking to THREDDS servers.
NOTE
Step1: First we'll create an instance of RadarServer to point to the appropriate radar server access URL.
Step2: Next, we'll create a new query object to help request the data. Using the chaining methods, let's ask for the latest data at the radar KLVX (Louisville, KY). We see that when the query is represented as a string, it shows the encoded URL.
Step3: We can use the RadarServer instance to check our query, to make sure we have required parameters and that we have chosen valid station(s) and variable(s)
Step4: Make the request, which returns an instance of TDSCatalog; this handles parsing the returned XML information.
Step5: We can look at the datasets on the catalog to see what data we found by the query. We find one volume in the return, since we asked for the volume nearest to a single time.
Step6: We can pull that dataset out of the dictionary and look at the available access URLs. We see URLs for OPeNDAP, CDMRemote, and HTTPServer (direct download).
Step7: We'll use the CDMRemote reader in Siphon and pass it the appropriate access URL.
Step8: We define some helper functions to make working with the data easier. One takes the raw data and converts it to floating point values with the missing data points appropriately marked. The other helps with converting the polar coordinates (azimuth and range) to Cartesian (x and y).
Step9: The CDMRemote reader provides an interface that is almost identical to the usual python NetCDF interface. We pull out the variables we need for azimuth and range, as well as the data itself.
Step10: Then convert the raw data to floating point values and the polar coordinates to Cartesian.
Step11: MetPy is a Python package for meteorology (Documentation
Step12: Finally, we plot them up using matplotlib and cartopy. We create a helper function for making a map to keep things simpler later.
Step13: Download a collection of historical data
This time we'll make a query based on a longitude, latitude point and using a time range.
Step14: The specified longitude, latitude are in NY and the TDS helpfully finds the closest station to that point. We can see that for this time range we obtained multiple datasets.
Step15: Grab the first dataset so that we can get the longitude and latitude of the station and make a map for plotting. We'll go ahead and specify some longitude and latitude bounds for the map.
Step16: Use the function to make a new map and plot a colormapped view of the data
Step17: Now we can loop over the collection of returned datasets and plot them. As we plot, we collect the returned plot objects so that we can use them to make an animated plot. We also add a timestamp for each plot.
Step18: Using matplotlib, we can take a collection of Artists that have been plotted and turn them into an animation. With matplotlib 1.5 (1.5-rc2 is available now!), this animation can be converted to HTML5 video viewable in the notebook. | Python Code:
import matplotlib
import warnings
warnings.filterwarnings("ignore", category=matplotlib.cbook.MatplotlibDeprecationWarning)
%matplotlib inline
Explanation: Using Python to Access NCEI Archived NEXRAD Level 2 Data
This notebook shows how to access the THREDDS Data Server (TDS) instance that is serving up archived NEXRAD Level 2 data hosted on Amazon S3. The TDS provides a mechanism to query for available data files, as well as provides access to the data as native volume files, through OPeNDAP, and using its own CDMRemote protocol. Since we're using Python, we can take advantage of Unidata's Siphon package, which provides an easy API for talking to THREDDS servers.
NOTE: Due to data charges, the TDS instance in AWS only allows access to .edu domains. For other users interested in using Siphon to access radar data, you can access recent (2 weeks') data by changing the server URL below to: http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/
But first!
Bookmark these resources for when you want to use Siphon later!
+ latest Siphon documentation
+ Siphon github repo
+ TDS documentation
Downloading the single latest volume
Just a bit of initial set-up to use inline figures and quiet some warnings.
End of explanation
# The S3 URL did not work for me, despite .edu domain
#url = 'http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/'
#Trying motherlode URL
url = 'http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/'
from siphon.radarserver import RadarServer
rs = RadarServer(url)
Explanation: First we'll create an instance of RadarServer to point to the appropriate radar server access URL.
End of explanation
from datetime import datetime, timedelta
query = rs.query()
query.stations('KLVX').time(datetime.utcnow())
Explanation: Next, we'll create a new query object to help request the data. Using the chaining methods, let's ask for the latest data at the radar KLVX (Louisville, KY). We see that when the query is represented as a string, it shows the encoded URL.
End of explanation
rs.validate_query(query)
Explanation: We can use the RadarServer instance to check our query, to make sure we have required parameters and that we have chosen valid station(s) and variable(s)
End of explanation
catalog = rs.get_catalog(query)
Explanation: Make the request, which returns an instance of TDSCatalog; this handles parsing the returned XML information.
End of explanation
catalog.datasets
Explanation: We can look at the datasets on the catalog to see what data we found by the query. We find one volume in the return, since we asked for the volume nearest to a single time.
End of explanation
ds = list(catalog.datasets.values())[0]
ds.access_urls
Explanation: We can pull that dataset out of the dictionary and look at the available access URLs. We see URLs for OPeNDAP, CDMRemote, and HTTPServer (direct download).
End of explanation
from siphon.cdmr import Dataset
data = Dataset(ds.access_urls['CdmRemote'])
Explanation: We'll use the CDMRemote reader in Siphon and pass it the appropriate access URL.
End of explanation
import numpy as np
def raw_to_masked_float(var, data):
# Values come back signed. If the _Unsigned attribute is set, we need to convert
# from the range [-127, 128] to [0, 255].
if var._Unsigned:
data = data & 255
# Mask missing points
data = np.ma.array(data, mask=data==0)
# Convert to float using the scale and offset
return data * var.scale_factor + var.add_offset
def polar_to_cartesian(az, rng):
az_rad = np.deg2rad(az)[:, None]
x = rng * np.sin(az_rad)
y = rng * np.cos(az_rad)
return x, y
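# Quick sanity check (sketch, not in the original notebook): an azimuth of 90 degrees
# should point due east, i.e. positive x and (numerically) near-zero y.
x_chk, y_chk = polar_to_cartesian(np.array([90.0]), np.array([0.0, 1000.0]))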
Explanation: We define some helper functions to make working with the data easier. One takes the raw data and converts it to floating point values with the missing data points appropriately marked. The other helps with converting the polar coordinates (azimuth and range) to Cartesian (x and y).
End of explanation
sweep = 0
ref_var = data.variables['Reflectivity_HI']
ref_data = ref_var[sweep]
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
Explanation: The CDMRemote reader provides an interface that is almost identical to the usual python NetCDF interface. We pull out the variables we need for azimuth and range, as well as the data itself.
End of explanation
ref = raw_to_masked_float(ref_var, ref_data)
x, y = polar_to_cartesian(az, rng)
Explanation: Then convert the raw data to floating point values and the polar coordinates to Cartesian.
End of explanation
from metpy.plots import ctables # For NWS colortable
ref_norm, ref_cmap = ctables.registry.get_with_steps('NWSReflectivity', 5, 5)
Explanation: MetPy is a Python package for meteorology (Documentation: http://metpy.readthedocs.org and GitHub: http://github.com/MetPy/MetPy). We import MetPy and use it to get the colortable and value mapping information for the NWS Reflectivity data.
End of explanation
import matplotlib.pyplot as plt
import cartopy
def new_map(fig, lon, lat):
# Create projection centered on the radar. This allows us to use x
# and y relative to the radar.
proj = cartopy.crs.LambertConformal(central_longitude=lon, central_latitude=lat)
# New axes with the specified projection
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add coastlines
ax.coastlines('50m', 'black', linewidth=2, zorder=2)
# Grab state borders
state_borders = cartopy.feature.NaturalEarthFeature(
category='cultural', name='admin_1_states_provinces_lines',
scale='50m', facecolor='none')
ax.add_feature(state_borders, edgecolor='black', linewidth=1, zorder=3)
return ax
Explanation: Finally, we plot them up using matplotlib and cartopy. We create a helper function for making a map to keep things simpler later.
End of explanation
query = rs.query()
#dt = datetime(2012, 10, 29, 15) # Our specified time
dt = datetime(2016, 6, 8, 18) # Our specified time
query.lonlat_point(-73.687, 41.175).time_range(dt, dt + timedelta(hours=1))
Explanation: Download a collection of historical data
This time we'll make a query based on a longitude, latitude point and using a time range.
End of explanation
cat = rs.get_catalog(query)
cat.datasets
Explanation: The specified longitude, latitude are in NY and the TDS helpfully finds the closest station to that point. We can see that for this time range we obtained multiple datasets.
End of explanation
ds = list(cat.datasets.values())[0]
data = Dataset(ds.access_urls['CdmRemote'])
# Pull out the data of interest
sweep = 0
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
ref_var = data.variables['Reflectivity_HI']
# Convert data to float and coordinates to Cartesian
ref = raw_to_masked_float(ref_var, ref_var[sweep])
x, y = polar_to_cartesian(az, rng)
Explanation: Grab the first dataset so that we can get the longitude and latitude of the station and make a map for plotting. We'll go ahead and specify some longitude and latitude bounds for the map.
End of explanation
fig = plt.figure(figsize=(10, 10))
ax = new_map(fig, data.StationLongitude, data.StationLatitude)
# Set limits in lat/lon space
ax.set_extent([-77, -70, 38, 42])
# Add ocean and land background
ocean = cartopy.feature.NaturalEarthFeature('physical', 'ocean', scale='50m',
edgecolor='face',
facecolor=cartopy.feature.COLORS['water'])
land = cartopy.feature.NaturalEarthFeature('physical', 'land', scale='50m',
edgecolor='face',
facecolor=cartopy.feature.COLORS['land'])
ax.add_feature(ocean, zorder=-1)
ax.add_feature(land, zorder=-1)
#ax = new_map(fig, data.StationLongitude, data.StationLatitude)
ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0);
Explanation: Use the function to make a new map and plot a colormapped view of the data
End of explanation
meshes = []
for item in sorted(cat.datasets.items()):
# After looping over the list of sorted datasets, pull the actual Dataset object out
# of our list of items and access over CDMRemote
ds = item[1]
data = Dataset(ds.access_urls['CdmRemote'])
# Pull out the data of interest
sweep = 0
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
ref_var = data.variables['Reflectivity_HI']
# Convert data to float and coordinates to Cartesian
ref = raw_to_masked_float(ref_var, ref_var[sweep])
x, y = polar_to_cartesian(az, rng)
# Plot the data and the timestamp
mesh = ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0)
text = ax.text(0.65, 0.03, data.time_coverage_start, transform=ax.transAxes,
fontdict={'size':16})
# Collect the things we've plotted so we can animate
meshes.append((mesh, text))
Explanation: Now we can loop over the collection of returned datasets and plot them. As we plot, we collect the returned plot objects so that we can use them to make an animated plot. We also add a timestamp for each plot.
End of explanation
# Set up matplotlib to do the conversion to HTML5 video
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'
# Create an animation
from matplotlib.animation import ArtistAnimation
ArtistAnimation(fig, meshes)
Explanation: Using matplotlib, we can take a collection of Artists that have been plotted and turn them into an animation. With matplotlib 1.5 (1.5-rc2 is available now!), this animation can be converted to HTML5 video viewable in the notebook.
End of explanation |
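As an optional extension (not part of the original notebook), the same collection of artists could also be written to a video file, assuming FFmpeg is available on the system:
# Hypothetical follow-up: save the animation to disk instead of only displaying it inline
anim = ArtistAnimation(fig, meshes, interval=100, repeat=True)
anim.save('radar_loop.mp4', writer='ffmpeg')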
12,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot bokeh graphs
The purpose of this notebook is to create a bokeh representation of the latest data (incl. QC) from socib mooring stations.
Define Imports
Step1: In case, the output wants to be seen within the jupyter notebook, this line must be un-commented. However, since the generated HTML file will be opened in a new window, this is not really necessary.
Step2: Define data sources
We define some basic data handling functions here. These will just enable us to e.g. access the nefCDF variable data as numpy array (significantly faster) or convert times to a joint base.
Step3: Note that these scripts differ from the socib mooring station report generation tool. Here, we use a simple web - scraping from the socib thredds server.
Step6: Here, we define the bokeh plotting parameters. Also, we create a javascript callback to automatically adjust the y-axis according to the current zoom-extend.
Step7: Also, we have to define the variables we want to plot. In this case, we just used the "List of important parameters" from the socib DataDiscovery service and added the relative humidity to it (since we will plot weather stations here).
Step8: Get latest data
Here, we will call our defined methods. Also, we will define the output filename and the desired timespan of the plotting. | Python Code:
import numpy as np
import pandas as pd
from urllib2 import Request, urlopen, URLError
from lxml import html
import time
from netCDF4 import Dataset
import datetime
import calendar
from collections import OrderedDict
from bokeh.plotting import figure, ColumnDataSource
from bokeh.models import HoverTool
from bokeh.models import LinearAxis, Range1d, CustomJS
from bokeh.models.widgets import Panel, Tabs
from bokeh.io import output_notebook, show, output_file, vplot, hplot
import bokeh
Explanation: Plot bokeh graphs
The purpose of this notebook is to create a bokeh representation of the latest data (incl. QC) from socib mooring stations.
Define Imports
End of explanation
#output_notebook()
Explanation: In case, the output wants to be seen within the jupyter notebook, this line must be un-commented. However, since the generated HTML file will be opened in a new window, this is not really necessary.
End of explanation
def get_data_array(data_array):
if type(data_array.__array__()) is np.ma.masked_array:
return data_array.__array__().data
else:
return data_array.__array__()
def get_qc_variable_name(variable):
try:
qc_variable_name = variable.ancillary_variables
except AttributeError:
# print "No QC variable found for " + variable.name
qc_variable_name = None
return qc_variable_name
def get_pandas_timestamp_series(datetime_array):
out = pd.Series(np.zeros(len(datetime_array)))
counter = 0
for i in datetime_array:
out[counter] = pd.tslib.Timestamp(i)
counter += 1
return out
def days_to_seconds(days):
return int(days) * 24 * 60 * 60
def get_str_time(x): return str(x)
def totimestamp(dt, epoch=datetime.datetime(1970,1,1)):
td = dt - epoch
# return td.total_seconds()
return (td.microseconds + (td.seconds + td.days * 86400) * 10**6) / 10**6
Explanation: Define data sources
We define some basic data handling functions here. These will just enable us to e.g. access the nefCDF variable data as numpy array (significantly faster) or convert times to a joint base.
End of explanation
def get_mooring_stations(url):
name_list = []
end_URLBuilder = []
req = Request(url)
try:
response = urlopen(req)
except URLError as e:
if hasattr(e, 'reason'):
print 'We failed to reach a server.'
print 'Reason: ', e.reason
elif hasattr(e, 'code'):
print 'The server couldn\'t fulfill the request.'
print 'Error code: ', e.code
else:
URLBuilder = []
tree = html.fromstring(response.read())
link_path = tree.xpath('//a')
for x in range(1, len(link_path)):
URLBuilder.append(link_path[x].values())
URLLister = []
for n in range(0, len(URLBuilder) - 4):
string = str(URLBuilder[n])
idx = string.find("/")
url = "http://thredds.socib.es/thredds/catalog/mooring/weather_station/" + URLBuilder[n][0][0:idx - 1] + "/L1/catalog.html"
name = URLBuilder[n][0][0:idx - 2]
req = Request(url)
try:
response = urlopen(req)
except URLError as e:
if hasattr(e, 'reason'):
print 'We failed to reach a server.'
print 'Reason: ', e.reason
elif hasattr(e, 'code'):
print 'The server couldn\'t fulfill the request.'
print 'Error code: ', e.code
else:
URLLister.append(url)
name_list.append(name)
for m in URLLister:
req = Request(m)
try:
response = urlopen(req)
except URLError as e:
if hasattr(e, 'reason'):
print 'We failed to reach a server.'
print 'Reason: ', e.reason
elif hasattr(e, 'code'):
print 'The server couldn\'t fulfill the request.'
print 'Error code: ', e.code
else:
tree = html.fromstring(response.read())
link_path = tree.xpath('//a')
for x in range(1, len(link_path)):
string = str(link_path[x].values())
idx = string.find("=")
end_URLBuilder.append("http://thredds.socib.es/thredds/dodsC/" + str(
link_path[x].values()[0][idx - 1:len(string)]))
break
return name_list, end_URLBuilder
Explanation: Note that these scripts differ from the socib mooring station report generation tool. Here, we use a simple web - scraping from the socib thredds server.
End of explanation
def draw_data(links, desired_start_time, station_names):
global VARIABLES_OF_INTEREST
counter = 0
output_stations = []
for station in links:
root = Dataset(station)
time = get_data_array(root.variables["time"])
idx = time >= desired_start_time
if not np.any(idx):
counter += 1
continue
variables = root.get_variables_by_attributes(standard_name=lambda n: n in VARIABLES_OF_INTEREST)
time = time[idx]
subplot = []
variable_names = []
for v in variables:
try:
qc_data = get_data_array(root.variables[get_qc_variable_name(v)])
qc_data = qc_data[idx]
bad_idx = get_data_array(qc_data) != 1
except KeyError:
print "No QC found for " + v.name
v_name = v.name
variable_names.append(v_name)
v = get_data_array(v)
v = v[idx]
conv_time = get_pandas_timestamp_series([datetime.datetime.fromtimestamp(ts) for ts in time])
subplot.append(get_bokeh_grid_figure(v, qc_data, conv_time, station_names[counter]))
sub_counter = 0
my_tabs = []
for sp in subplot:
my_tabs.append(Panel(child=sp, title=variable_names[sub_counter]))
sub_counter += 1
p = Tabs(tabs=my_tabs)
output_stations.append(p)
counter += 1
amount_stations = len(output_stations)
rest = amount_stations % 2
verticals = []
if amount_stations >= 2:
verticals.append(hplot(output_stations[0], output_stations[1]))
elif amount_stations == 1:
verticals.append(hplot(output_stations[0]))
else:
print("No stations to plot (PerformQC.draw_bokeh()).")
return 1
for i in range(1, int(amount_stations/2)):
verticals.append(hplot(output_stations[i*2], output_stations[i*2+1]))
if rest > 0:
verticals.append(output_stations[-1])
show(vplot(*verticals))
def get_bokeh_grid_figure(data, qc, converted_time, variable_name):
time_strings = map(get_str_time, converted_time)
hover = HoverTool(names=["data"])
fig = figure(width=800, plot_height=300, title=variable_name, tools=["pan, box_zoom, xwheel_zoom, save, reset, resize", hover], x_axis_type="datetime")
source = ColumnDataSource(
data=dict(
time=time_strings,
data=data,
qc=qc
)
)
# data line
fig.line(converted_time, data, color="navy", alpha=0.5, name="data", source=source)
# data points
fig.square(converted_time, data, color="navy", alpha=0.5)
fig.extra_y_ranges = {"foo": Range1d(start=0, end=10)}
fig.add_layout(LinearAxis(y_range_name="foo"), 'right')
fig.line(converted_time, qc, color="green", alpha=0.5, y_range_name="foo")
    # JavaScript template (filled in with the fixed bounds below) that pins the extra y-range
    jscode = """
    range.set('start', parseInt(%s));
    range.set('end', parseInt(%s));
    """
fig.extra_y_ranges['foo'].callback = CustomJS(
args=dict(range=fig.extra_y_ranges['foo']),
code=jscode % (fig.extra_y_ranges['foo'].start,
fig.extra_y_ranges['foo'].end)
)
pan_tool = fig.select(dict(type=bokeh.models.PanTool))
pan_tool.dimensions = ["width"]
hover = fig.select(dict(type=HoverTool))
hover.tooltips = OrderedDict([
('time', '@time'),
('value', '@data{0.0}'),
('qc', '@qc')
])
# check for ranges, if they are nan
if (np.isnan(np.nanmin(data)) & np.isnan(np.nanmax(data))) or (np.nanmin(data) == np.nanmax(data)):
bottom_y_range = 0
top_y_range = 10
else:
# add a 10% buffer to the max ranges
temp_min = np.nanmin(data)
temp_max = np.nanmax(data)
temp_diff = abs(temp_max-temp_min)
temp_thresh = round(temp_diff*0.1, 3)
bottom_y_range = temp_min - temp_thresh
top_y_range = temp_max + temp_thresh
fig.y_range = Range1d(bottom_y_range, top_y_range)
translate_time = converted_time.apply(lambda x: x.to_pydatetime())
converted_time_backward = map(totimestamp, translate_time)
source = ColumnDataSource({'x': converted_time_backward, 'y': data})
    jscode = """
function isNumeric(n) {
return !isNaN(parseFloat(n)) && isFinite(n);
}
var data = source.get('data');
var start = yrange.get('start');
var end = yrange.get('end');
var time_start = xrange.get('start')/1000;
var time_end = xrange.get('end')/1000;
var pre_max_old = end;
var pre_min_old = start;
var time = data['x'];
var pre = data['y'];
t_idx_start = time.filter(function(st){return st>=time_start})[0];
t_idx_start = time.indexOf(t_idx_start);
t_idx_end = time.filter(function(st){return st>=time_end})[0];
t_idx_end = time.indexOf(t_idx_end);
var pre_interval = pre.slice(t_idx_start, t_idx_end);
pre_interval = pre_interval.filter(function(st){return !isNaN(st)});
var pre_max = Math.max.apply(null, pre_interval);
var pre_min = Math.min.apply(null, pre_interval);
var ten_percent = (pre_max-pre_min)*0.1;
pre_max = pre_max + ten_percent;
pre_min = pre_min - ten_percent;
if((!isNumeric(pre_max)) || (!isNumeric(pre_min))) {
pre_max = pre_max_old;
pre_min = pre_min_old;
}
yrange.set('start', pre_min);
yrange.set('end', pre_max);
console.log(yrange.get('end'))
    source.trigger('change');
    """
fig.y_range.callback = CustomJS(
args=dict(source=source, yrange=fig.y_range, xrange=fig.x_range), code=jscode)
fig.x_range.callback = CustomJS(
args=dict(source=source, yrange=fig.y_range, xrange=fig.x_range), code=jscode)
return fig
Explanation: Here, we define the bokeh plotting parameters. Also, we create a javascript callback to automatically adjust the y-axis according to the current zoom-extend.
End of explanation
VARIABLES_OF_INTEREST = [
"sea_water_temperature",
"air_temperature",
"sea_surface_wave_from_direction",
"sea_surface_wave_significant_height",
"wind_speed",
"wind_from_direction",
"wind_speed_of_gust",
"water_surface_height_above_reference_datum",
"air_pressure",
"sea_water_speed",
"direction_of_sea_water_velocity",
"sea_water_salinity",
"relative_humidity"]
Explanation: Also, we have to define the variables we want to plot. In this case, we just used the "List of important parameters" from the socib DataDiscovery service and added the relative humidity to it (since we will plot weather stations here).
End of explanation
station_names, station_links = get_mooring_stations('http://thredds.socib.es/thredds/catalog/mooring/weather_station/catalog.html')
# get latest x days
days = 2
html_file = 'bokeh_latest_data.html'
seconds = days_to_seconds(days)
dt = datetime.datetime.now()
desired_start_time = calendar.timegm(dt.utctimetuple()) - seconds
output_file(html_file)
draw_data(station_links, desired_start_time, station_names)
Explanation: Get latest data
Here, we will call our defined methods. Also, we will define the output filename and the desired timespan of the plotting.
End of explanation |
12,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifying newswires
Step1: Like with the IMDB dataset, the argument num_words=10000 restricts the data to the 10,000 most frequently occurring words found in the
data.
We have 8,982 training examples and 2,246 test examples
Step2: As with the IMDB reviews, each example is a list of integers (word indices)
Step3: Here's how you can decode it back to words, in case you are curious
Step4: The label associated with an example is an integer between 0 and 45
Step5: Preparing the data
We can vectorize the data with the exact same code as in our previous example
Step6: To vectorize the labels, there are two possibilities
Step7: Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example
Step8: Building our network
This topic classification problem looks very similar to our previous movie review classification problem
Step9: There are two other things you should note about this architecture
Step10: Validating our approach
Let's set apart 1,000 samples in our training data to use as a validation set
Step11: Now let's train our network for 20 epochs
Step12: Let's display its loss and accuracy curves
Step13: It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on
the test set
Step14: Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier
would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline
Step15: Generating predictions on new data
We can verify that the predict method of our model instance returns a probability distribution over all 46 topics. Let's generate topic
predictions for all of the test data
Step16: Each entry in predictions is a vector of length 46
Step17: The coefficients in this vector sum to 1
Step18: The largest entry is the predicted class, i.e. the class with the highest probability
Step19: A different way to handle the labels and the loss
We mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such
Step20: The only thing it would change is the choice of the loss function. Our previous loss, categorical_crossentropy, expects the labels to
follow a categorical encoding. With integer labels, we should use sparse_categorical_crossentropy
Step21: This new loss function is still mathematically the same as categorical_crossentropy; it just has a different interface.
On the importance of having sufficiently large intermediate layers
We mentioned earlier that since our final outputs were 46-dimensional, we should avoid intermediate layers with much less than 46 hidden
units. Now let's try to see what happens when we introduce an information bottleneck by having intermediate layers significantly less than
46-dimensional, e.g. 4-dimensional. | Python Code:
from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
Explanation: Classifying newswires: a multi-class classification example
This notebook contains the code samples found in Chapter 3, Section 5 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
In the previous section we saw how to classify vector inputs into two mutually exclusive classes using a densely-connected neural network.
But what happens when you have more than two classes?
In this section, we will build a network to classify Reuters newswires into 46 different mutually-exclusive topics. Since we have many
classes, this problem is an instance of "multi-class classification", and since each data point should be classified into only one
category, the problem is more specifically an instance of "single-label, multi-class classification". If each data point could have
belonged to multiple categories (in our case, topics) then we would be facing a "multi-label, multi-class classification" problem.
The Reuters dataset
We will be working with the Reuters dataset, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple,
widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each
topic has at least 10 examples in the training set.
Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away:
End of explanation
len(train_data)
len(test_data)
Explanation: Like with the IMDB dataset, the argument num_words=10000 restricts the data to the 10,000 most frequently occurring words found in the
data.
We have 8,982 training examples and 2,246 test examples:
End of explanation
train_data[10]
Explanation: As with the IMDB reviews, each example is a list of integers (word indices):
End of explanation
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_newswire
Explanation: Here's how you can decode it back to words, in case you are curious:
End of explanation
train_labels[10]
Explanation: The label associated with an example is an integer between 0 and 45: a topic index.
End of explanation
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
Explanation: Preparing the data
We can vectorize the data with the exact same code as in our previous example:
End of explanation
def to_one_hot(labels, dimension=46):
results = np.zeros((len(labels), dimension))
for i, label in enumerate(labels):
results[i, label] = 1.
return results
# Our vectorized training labels
one_hot_train_labels = to_one_hot(train_labels)
# Our vectorized test labels
one_hot_test_labels = to_one_hot(test_labels)
Explanation: To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot"
encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding".
For a more detailed explanation of one-hot encoding, you can refer to Chapter 6, Section 1.
In our case, one-hot encoding of our labels consists in embedding each label as an all-zero vector with a 1 in the place of the label index, e.g.:
End of explanation
from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
Explanation: Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example:
End of explanation
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
Explanation: Building our network
This topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to
classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the
dimensionality of the output space is much larger.
In a stack of Dense layers like what we were using, each layer can only access information present in the output of the previous layer.
If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each
layer can potentially become an "information bottleneck". In our previous example, we were using 16-dimensional intermediate layers, but a
16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks,
permanently dropping relevant information.
For this reason we will use larger layers. Let's go with 64 units:
End of explanation
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: There are two other things you should note about this architecture:
We are ending the network with a Dense layer of size 46. This means that for each input sample, our network will output a
46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.
The last layer uses a softmax activation. You have already seen this pattern in the MNIST example. It means that the network will
output a probability distribution over the 46 different output classes, i.e. for every input sample, the network will produce a
46-dimensional output vector where output[i] is the probability that the sample belongs to class i. The 46 scores will sum to 1.
The best loss function to use in this case is categorical_crossentropy. It measures the distance between two probability distributions:
in our case, between the probability distribution output by our network, and the true distribution of the labels. By minimizing the
distance between these two distributions, we train our network to output something as close as possible to the true labels.
End of explanation
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
Explanation: Validating our approach
Let's set apart 1,000 samples in our training data to use as a validation set:
End of explanation
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
Explanation: Now let's train our network for 20 epochs:
End of explanation
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc = history.history['acc']
val_acc = history.history['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Explanation: Let's display its loss and accuracy curves:
End of explanation
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=8,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
Explanation: It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on
the test set:
End of explanation
import copy
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
Explanation: Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier
would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline:
End of explanation
predictions = model.predict(x_test)
Explanation: Generating predictions on new data
We can verify that the predict method of our model instance returns a probability distribution over all 46 topics. Let's generate topic
predictions for all of the test data:
End of explanation
predictions[0].shape
Explanation: Each entry in predictions is a vector of length 46:
End of explanation
np.sum(predictions[0])
Explanation: The coefficients in this vector sum to 1:
End of explanation
np.argmax(predictions[0])
Explanation: The largest entry is the predicted class, i.e. the class with the highest probability:
End of explanation
y_train = np.array(train_labels)
y_test = np.array(test_labels)
Explanation: A different way to handle the labels and the loss
We mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such:
End of explanation
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
Explanation: The only thing it would change is the choice of the loss function. Our previous loss, categorical_crossentropy, expects the labels to
follow a categorical encoding. With integer labels, we should use sparse_categorical_crossentropy:
End of explanation
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=128,
validation_data=(x_val, y_val))
Explanation: This new loss function is still mathematically the same as categorical_crossentropy; it just has a different interface.
On the importance of having sufficiently large intermediate layers
We mentioned earlier that since our final outputs were 46-dimensional, we should avoid intermediate layers with much less than 46 hidden
units. Now let's try to see what happens when we introduce an information bottleneck by having intermediate layers significantly less than
46-dimensional, e.g. 4-dimensional.
End of explanation |
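One optional follow-up (my addition, not part of the original text): evaluating the bottlenecked network on the test set makes the information loss concrete, since the score can be compared directly with the result obtained earlier.
# Hypothetical check: measure the accuracy drop caused by the 4-dimensional bottleneck
results = model.evaluate(x_test, one_hot_test_labels)
print('Test accuracy with bottleneck: %.3f' % results[1])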
12,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h3>Basic Recipe for Training a POS Tagger with SpaCy</h3>
<ol>
<li id="loaddatatitle"><a href="#-Load-Data-">Load Data </a>
<ol><li>We'll be using a sample from Web Treebank corpus, in ConllX format</ol>
<li><a href="#Prepare-Environment-for-New-Model">Prepare environment for a new model</a>
<ol><li>New model directory, with tagger and parser subdirectories. (Ensure you have permission)</ol>
<li><a href="#Build-a-Vocabulary">Build a vocabulary</a>
<ol>
<li>We are just going to load the default English Vocabulary
<li>Defines how we get attributes (like suffix) from a token string
<li>Includes brown cluster data on lexemes, we'll use as a feature for the parser
</ol>
<li> <a href="#Build-a-Tagger">Build a Tagger</a>
<ol><li>Ensure tagmap is provided if needed</ol>
<ol><li>Which features should be used to train tagger?</ol>
<li><a href="#Train-Tagger"> Train Tagger</a>
<ol><li>Averaged Perceptron algorithm
<li>For each epoch
Step1: <a href="#loaddatatitle">back</a>
<br>
Prepare Environment for New Model
Step2: <a href="#loaddatatitle">back</a>
<br>
Build a Vocabulary
Step3: <a href="#loaddatatitle">back</a>
<br>
Build a Tagger
Step4: <a href="#loaddatatitle">back</a>
<br>
Train Tagger
Step5: <a href="#loaddatatitle">back</a>
<br>
Save Tagger | Python Code:
import sys
sys.path.append('/home/jupyter/site-packages/')
import requests
from spacy.syntax.arc_eager import PseudoProjectivity
def read_conllx(text):
bad_lines = 0
#t = text.strip()
#print(type(t), type('\n\n'))
# u = t.split(b'\n\n')
n_sent = 0
n_line = 0
print('text=%d' % len(text))
# text = str(text)
# print('text=%d' % len(text))
for sent in text.strip().split('\n\n'):
n_sent += 1
lines = sent.strip().split('\n')
if lines:
while lines[0].startswith('#'):
lines.pop(0)
tokens = []
for line in lines:
n_line += 1
try:
id_, word, lemma, tag, pos, morph, head, dep, _1, _2 = line.split()
if '-' in id_:
continue
id_ = float(id_) - 1
try:
head = (int(head) - 1) if head != '0' else id_
except:
head = id_
dep = 'ROOT' if dep == 'root' else dep
tokens.append((id_, word, pos, int(head), dep, 'O'))
except:
bad_lines += 1
print('***', line)
raise
if not tokens:
continue
tuples = [list(t) for t in zip(*tokens)]
yield (None, [[tuples, []]])
print("Skipped %d malformed lines" % bad_lines)
print('n_sent=%d' % n_sent)
print('n_line=%d' % n_line)
def LoadData(url, path, make_projective=False):
if url:
conll_string = str(requests.get(url).content)
elif path:
conll_string = open(path).read()
print('conll_string=%d' % len(conll_string))
sents = list(read_conllx(conll_string))
if make_projective:
sents = PseudoProjectivity.preprocess_training_data(sents)
return sents
train_url = 'https://raw.githubusercontent.com/UniversalDependencies/UD_English/master/en-ud-train.conllu'
test_url = 'https://raw.githubusercontent.com/UniversalDependencies/UD_English/master/en-ud-test.conllu'
train_path = '/Users/pcadmin/code/spacy-examples/en-ud-train.conllu.txt'
train_sents = LoadData(None, train_path)
# test_sents = LoadData(test_url, None)
print('train=%d' % len(train_sents))
#print('test =%d' % len(test_sents))
def sent_iter(conll_corpus):
for _, doc_sents in conll_corpus:
# print(len(doc_sents))
# print(doc_sents[0])
for (ids, words, tags, heads, deps, ner), _ in doc_sents:
yield ids, words, tags, heads, deps, ner
print('train=%d' % len(train_sents))
sent_counter = 0
unique_tags = set()
for ids, words, tags, heads, deps, ner in sent_iter(train_sents):
unique_tags.update(tags)
sent_counter += 1
doc_counter = len(train_sents)
print("Training corpus metadata")
print()
print("Number of Sentences: %d" % sent_counter)
print("Number of Unique Tags: %d" % len(unique_tags))
print("Unique Tags: %s" % sorted(unique_tags))
Explanation: <h3>Basic Recipe for Training a POS Tagger with SpaCy</h3>
<ol>
<li id="loaddatatitle"><a href="#-Load-Data-">Load Data </a>
<ol><li>We'll be using a sample from Web Treebank corpus, in ConllX format</ol>
<li><a href="#Prepare-Environment-for-New-Model">Prepare environment for a new model</a>
<ol><li>New model directory, with tagger and parser subdirectories. (Ensure you have permission)</ol>
<li><a href="#Build-a-Vocabulary">Build a vocabulary</a>
<ol>
<li>We are just going to load the default English Vocabulary
<li>Defines how we get attributes (like suffix) from a token string
<li>Includes brown cluster data on lexemes, we'll use as a feature for the parser
</ol>
<li> <a href="#Build-a-Tagger">Build a Tagger</a>
<ol><li>Ensure tagmap is provided if needed</ol>
<ol><li>Which features should be used to train tagger?</ol>
<li><a href="#Train-Tagger"> Train Tagger</a>
<ol><li>Averaged Perceptron algorithm
<li>For each epoch:
<ol><li>For each document in training data:
<ol><li>For each sentence in document:
<ol>
<li>Create document with sentence words (tagger not yet applied)
<li>Create GoldParse object with annotated labels
<li>Apply the tagger to the document to get predictions
<li>Update the tagger with GoldParse, Document (actual v predicted)
</ol>
</ol>
<li> Score predictions on validation set
</ol>
</ol>
<li><a href="#Save-Tagger">Save Tagger</a>
<h3> Load Data </h3>
End of explanation
from pathlib import Path
import spacy
def prepare_environment_for_new_tagger(model_path, tagger_path):
if not model_dir.exists():
model_dir.mkdir()
if not tagger_path.exists():
tagger_path.mkdir()
data_dir = spacy.en.get_data_path()
model_dir = data_dir / 'en-1.1.0'
tagger_dir = model_dir / 'custom-pos-tagger'
prepare_environment_for_new_tagger(model_dir, tagger_dir)
Explanation: <a href="#loaddatatitle">back</a>
<br>
Prepare Environment for New Model
End of explanation
from spacy.vocab import Vocab
def build_vocab(model_dir, vec_path = None, lexeme_path = None):
vocab = Vocab.load(model_dir)
if lexeme_path:
vocab.load_lexemes(lexeme_path)
if vec_path:
vocab.load_vectors_from_bin_loc(vec_path)
return vocab
lexeme_path = model_dir / 'vocab' / 'lexemes.bin'
vocab = build_vocab(model_dir, lexeme_path=lexeme_path)
#test clusters are available
from spacy.tokens import Doc
doc = Doc(vocab, words=[u'He',u'ate',u'pizza',u'.'])
print "Cluster Value for '{}': {}".format(*[doc[0], doc[0].cluster])
Explanation: <a href="#loaddatatitle">back</a>
<br>
Build a Vocabulary
End of explanation
from spacy.tagger import Tagger
from spacy.tagger import *
features = [
(W_orth,),(W_shape,),(W_cluster,),(W_flags,),(W_suffix,),(W_prefix,), #current word attributes
(P1_pos,),(P1_cluster,),(P1_flags,),(P1_suffix,), #-1 word attributes
(P2_pos,),(P2_cluster,),(P2_flags,), #-2 word attributes
(N1_orth,),(N1_suffix,),(N1_cluster,),(N1_flags,), #+1 word attributes
(N2_orth,),(N2_cluster,),(N2_flags,), #+2 word attributes
(P1_lemma, P1_pos),(P2_lemma, P2_pos), (P1_pos, P2_pos),(P1_pos, W_orth) #combination attributes
]
features = spacy.en.English.Defaults.tagger_features
tag_map = spacy.en.tag_map
statistical_model = spacy.tagger.TaggerModel(features)
tagger = Tagger(vocab, tag_map=tag_map, statistical_model = statistical_model)
Explanation: <a href="#loaddatatitle">back</a>
<br>
Build a Tagger
End of explanation
from spacy.scorer import Scorer
from spacy.gold import GoldParse
import random
def score_model(vocab, tagger, gold_docs, verbose=False):
scorer = Scorer()
for _, gold_doc in gold_docs:
for (ids, words, tags, heads, deps, entities), _ in gold_doc:
doc = Doc(vocab, words=map(unicode,words))
tagger(doc)
gold = GoldParse(doc, tags=tags)
scorer.score(doc, gold, verbose=verbose)
return scorer
def train(tagger, vocab, train_sents, test_sents, model_dir, n_iter=20, seed = 0, feat_set = u'basic'):
scorer = score_model(vocab, tagger, test_sents)
print('%s:\t\t%s' % ("Iteration", "POS Tag Accuracy"))
print('%s:\t\t%.3f' % ("Pretraining", scorer.tags_acc))
#TRAINING STARTS HERE
for itn in range(n_iter):
for ids, words, tags, heads, deps, ner in sent_iter(train_sents):
doc = Doc(vocab, words=map(unicode,words))
gold = GoldParse(doc, tags=tags, heads=heads, deps=deps)
tagger(doc)
tagger.update(doc, gold)
random.shuffle(train_sents)
scorer = score_model(vocab, tagger, test_sents)
print('%d:\t\t\t%.3f' % (itn, scorer.tags_acc))
return tagger
trained_tagger = train(tagger, vocab, train_sents, test_sents, model_dir, n_iter = 10)
Explanation: <a href="#loaddatatitle">back</a>
<br>
Train Tagger
End of explanation
def ensure_dir(path):
if not path.exists():
path.mkdir()
ensure_dir(tagger_dir)
trained_tagger.model.dump(str(tagger_dir / 'model'))
Explanation: <a href="#loaddatatitle">back</a>
<br>
Save Tagger
End of explanation |
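As an optional sanity check (not in the original notebook), the trained tagger can be scored once more with the helper defined earlier, assuming test_sents is available in the session:
# Hypothetical check: confirm the tagger's accuracy right before relying on the dumped model
final_scorer = score_model(vocab, trained_tagger, test_sents)
print('Final POS tag accuracy: %.3f' % final_scorer.tags_acc)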
12,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP example
Step1: Colors
In quantum mechanics, complex numbers are as natual as real numbers.
Before going into details of particular plots, we show how complex_array_to_rgb maps $z = x + i y$ into colors.
There are two variants, theme='light' and theme='dark'. For both, we use hue for phase, with red for positive numbers and aqua for negative.
For a longer comment on coloring complex functions I recommend IPython Notebook Visualizing complex-valued functions with Matplotlib and Mayavi by Emilia Petrisor.
Step2: Schmidt plot
Arguably, the easiest way to show entanglement is to plot a wavefunction against two variables.
If the plot is a product of them, the state is a product state. If not - it is entangled.
As writing a wavefunction as a matrix $|\psi\rangle_{ij}$ is the the crucial step in Schmidt decomposition,
we call such plots Schmidt plots.
Let us consider two states
Step3: As we see, for separable state the plot is a product of x and y coordinates, while for the singlet state - is is not.
Let us now consider a product of two singlet states
Step4: As we see, we have a product, as the state is a product state with the respect to the splitting of first 2 vs last 2 particles.
But what if we shift particles, getting $|\psi^-\rangle_{23}|\psi^-\rangle_{41}$?
Step5: So we see that it is entangled.
plot_schmidt allows us to specify other splittings. With parameter splitting we decide how many particles we want to have as columns. In general, we can plot systems of various numbers of particles, each being of a different dimension.
For example
Step6: Qubism plot
Step7: That is, all amplitudes for states starting with
Step8: Or if we want to make sure how did we map amplitudes to particular regions in the plot
Step9: Or how about making it dark? (E.g. to fit out slides with black background).
Step10: The most important property of Qubism is the recursive structure. So that we can add more particles seamlessly.
For example, let's consider a plot of k copies of the singlet states, i.e. $|\psi^-\rangle^{\otimes k}$
Step11: OK, but once we can type the wavefunction by hand, plots offer little added value.
Let's see how we can plot ground states.
Before doing that, we define some functions to easy make a translationally-invariant Hamiltonian.
Step12: For example, let us consider Hamiltonian for $N$ particles, of the following form (a generalization of the Majumdar-Ghosh model)
Step13: We are not restricted to qubits. We can have it for other dimensions, e.g. qutrits.
Let us consider AKLT model for spin-1 particles
Step14: Qubism for qutrits works similarly as for qubits
Step15: Just in this case we interpret
Step16: The one above emphasis ferromagnetic (put on the left) vs antiferromagnetic (put on the right) states.
Another one how='before_after' (inspired by this) works in a bit different way
Step17: It is very similar to the Schmidt plot (for the default splitting), with the only difference being ordering of the y axis (particle order is reversed). All entanglement properties are the same.
So how does it work on the same example?
Well, let us take spin chain for (Majumdar-Ghosh model for $J=0$), i.e.
$$H = \sum_{i=1}^N \vec{S}i \cdot \vec{S}{i+1}$$
for qubits.
Step18: Seeing entanglement
Step19: Then entanglement (or exactly
Step20: In each plot squares are the same, up to a factor (which is visualized as intensity and hue).
You can lookup previous plots. Setting grid_iteration=2 would show splitting of the first 4 particles vs N-4 others.
And for how='before_after' it is the middle particles vs all others.
Versions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
Explanation: QuTiP example: Qubism visualizations
by Piotr Migdał, June 2014
For more information about QuTiP see http://qutip.org.
For more information about Qubism see:
* J. Rodriguez-Laguna, P. Migdał, M. Ibanez Berganza, M. Lewenstein, G. Sierra,
Qubism: self-similar visualization of many-body wavefunctions, New J. Phys. 14 053028 (2012), arXiv:1112.3560,
* its video abstract,
* C++ and Mathematica code on GitHub.
This note describes plotting functions plot_schmidt and plot_qubism, and additionally - complex_array_to_rgb, along with their applications.
End of explanation
compl_circ = np.array([[(x + 1j*y) if x**2 + y**2 <= 1 else 0j
for x in np.arange(-1,1,0.005)]
for y in np.arange(-1,1,0.005)])
fig = plt.figure(figsize=(6, 3))
for i, theme in enumerate(['light', 'dark']):
ax = plt.subplot(1, 2, i + 1)
ax.set_xlabel('x', fontsize=14)
ax.set_ylabel('y', fontsize=14)
ax.imshow(complex_array_to_rgb(compl_circ, rmax=1, theme=theme),
extent=(-1,1,-1,1))
plt.tight_layout()
Explanation: Colors
In quantum mechanics, complex numbers are as natual as real numbers.
Before going into details of particular plots, we show how complex_array_to_rgb maps $z = x + i y$ into colors.
There are two variants, theme='light' and theme='dark'. For both, we use hue for phase, with red for positive numbers and aqua for negative.
For a longer comment on coloring complex functions I recommend IPython Notebook Visualizing complex-valued functions with Matplotlib and Mayavi by Emilia Petrisor.
End of explanation
singlet = (ket('01') - ket('10')).unit()
separable = (ket('01') - ket('00')).unit()
plot_schmidt(singlet, figsize=(2,2));
plot_schmidt(separable, figsize=(2,2));
Explanation: Schmidt plot
Arguably, the easiest way to show entanglement is to plot a wavefunction against two variables.
If the plot is a product of them, the state is a product state. If not - it is entangled.
As writing a wavefunction as a matrix $|\psi\rangle_{ij}$ is the crucial step in Schmidt decomposition,
we call such plots Schmidt plots.
Let us consider two states:
entangled: singlet state $|\psi^-\rangle = (|01\rangle - |10\rangle)/\sqrt{2}$,
product $(|01\rangle - |00\rangle)/\sqrt{2}$.
They may look seemingly similar, but the latter can be decomposed into a product $|0\rangle(|1\rangle - |0\rangle)/\sqrt{2}$.
End of explanation
plot_schmidt(1j * tensor([singlet, singlet]), figsize=(2,2));
Explanation: As we see, for the separable state the plot is a product of the x and y coordinates, while for the singlet state it is not.
Let us now consider a product of two singlet states: $|\psi^-\rangle|\psi^-\rangle$.
The Schmidt plot, by default, makes a splitting into equal numbers of particles.
(And just for fun, let's multiply it by the imaginary unit, to get different colors.)
End of explanation
plot_schmidt(1j * tensor([singlet, singlet]).permute([1,2,3,0]), figsize=(2,2));
Explanation: As we see, we have a product, as the state is a product state with respect to the splitting of the first 2 vs the last 2 particles.
But what if we shift particles, getting $|\psi^-\rangle_{23}|\psi^-\rangle_{41}$?
End of explanation
plot_schmidt(1j * tensor([singlet, singlet]), splitting=1, labels_iteration=(1,3),
figsize=(4,2));
Explanation: So we see that it is entangled.
plot_schmidt allows us to specify other splittings. With parameter splitting we decide how many particles we want to have as columns. In general, we can plot systems of various numbers of particles, each being of a different dimension.
For example:
End of explanation
fig = plt.figure(figsize=(8, 4))
for i in [1, 2]:
ax = plt.subplot(1, 2, i)
plot_qubism(0 * ket('0000'),
legend_iteration=i, grid_iteration=i,
fig=fig, ax=ax)
Explanation: Qubism plot
End of explanation
state = ket('0010') + 0.5 * ket('1111') + 0.5j * ket('0101') - 1j * ket('1101') \
- 0.2 * ket('0110')
plot_qubism(state, figsize=(4,4));
Explanation: That is, all amplitudes for states starting with:
$|00\rangle$ go to the upper left quadrant,
$|01\rangle$ go to the upper right quadrant,
$|10\rangle$ go to the lower left quadrant,
$|11\rangle$ go to the lower right quadrant.
And we proceed recursively with the next particles. So, for example:
End of explanation
plot_qubism(state, legend_iteration=2, figsize=(4,4));
Explanation: Or if we want to make sure how did we map amplitudes to particular regions in the plot:
End of explanation
plot_qubism(state, legend_iteration=2, theme='dark', figsize=(4,4));
Explanation: Or how about making it dark? (E.g. to fit out slides with black background).
End of explanation
fig = plt.figure(figsize=(15, 3))
for k in range(1,6):
ax = plt.subplot(1, 5, k)
plot_qubism(tensor([singlet]*k),
fig=fig, ax=ax)
Explanation: The most important property of Qubism is the recursive structure. So that we can add more particles seamlessly.
For example, let's consider a plot of k copies of the singlet states, i.e. $|\psi^-\rangle^{\otimes k}$:
End of explanation
def spinchainize(op, n, bc='periodic'):
if isinstance(op, list):
return sum([spinchainize(each, n, bc=bc) for each in op])
k = len(op.dims[0])
d = op.dims[0][0]
expanded = tensor([op] + [qeye(d)]*(n - k))
if bc == 'periodic':
shifts = n
elif bc == 'open':
shifts = n - k + 1
shifteds = [expanded.permute([(i + j) % n for i in range(n)])
for j in range(shifts)]
return sum(shifteds)
def gs_of(ham):
gval, gstate = ham.groundstate()
return gstate
Explanation: OK, but once we can type the wavefunction by hand, plots offer little added value.
Let's see how we can plot ground states.
Before doing that, we define some functions to easy make a translationally-invariant Hamiltonian.
End of explanation
heis = sum([tensor([pauli]*2) for pauli in [sigmax(), sigmay(), sigmaz()]])
heis2 = sum([tensor([pauli, qeye(2), pauli]) for pauli in [sigmax(), sigmay(), sigmaz()]])
N = 10
Js = [0., 0.5, 1.]
fig = plt.figure(figsize=(2*len(Js), 4.4))
for b in [0, 1]:
for k, J in enumerate(Js):
ax = plt.subplot(2, len(Js), b*len(Js) + k + 1)
if b == 0:
spinchain = spinchainize([heis, J*heis2], N, bc='periodic')
elif b ==1:
spinchain = spinchainize([heis, J*heis2], N, bc='open')
plot_qubism(gs_of(spinchain), ax=ax)
if k == 0:
if b == 0:
ax.set_ylabel("periodic BC",
fontsize=16)
else:
ax.set_ylabel("open BC",
fontsize=16)
if b == 1:
ax.set_xlabel("$J={0:.1f}$".format(J),
fontsize=16)
plt.tight_layout()
Explanation: For example, let us consider Hamiltonian for $N$ particles, of the following form (a generalization of the Majumdar-Ghosh model):
$$H = \sum_{i=1}^N \vec{S}i \cdot \vec{S}{i+1} + J \sum_{i=1}^N \vec{S}i \cdot \vec{S}{i+2},$$
where $\vec{S}_i = \tfrac{1}{2} (\sigma^x, \sigma^y, \sigma^z)$ is the spin operator (with sigmas being Pauli matrices).
Moreover, we can set two different boundary conditions:
periodic - spin chain forms a loop ($N+1 \equiv 1$ and $N+2 \equiv 2$),
open - spin chain forms a line (we remove terms with $N+1$ and $N+2$).
End of explanation
ss = sum([tensor([jmat(1, s)]*2) for s in ['x', 'y', 'z']])
H = spinchainize([ss, (1./3.) * ss**2], n=6, bc='periodic')
plot_qubism(gs_of(H), figsize=(4,4));
Explanation: We are not restricted to qubits. We can have it for other dimensions, e.g. qutrits.
Let us consider AKLT model for spin-1 particles:
$$H = \sum_{i=1}^N \vec{S}i \cdot \vec{S}{i+1} + \tfrac{1}{3} \sum_{i=1}^N (\vec{S}i \cdot \vec{S}{i+1})^2.$$
where $\vec{S}_i$ is spin operator for spin-1 particles (or for qutip: jmat(1, 'x'), jmat(1, 'y') and jmat(1, 'z')).
End of explanation
fig = plt.figure(figsize=(10, 5))
for i in [1, 2]:
ax = plt.subplot(1, 2, i)
plot_qubism(0 * ket('0000', dim=3),
legend_iteration=i, grid_iteration=i,
fig=fig, ax=ax)
Explanation: Qubism for qutrits works similarly as for qubits:
End of explanation
fig = plt.figure(figsize=(8, 4))
for i in [1, 2]:
ax = plt.subplot(1, 2, i)
plot_qubism(0 * ket('0000'),
how='pairs_skewed',
legend_iteration=i, grid_iteration=i,
fig=fig, ax=ax)
Explanation: Just in this case we interpret:
0 as $s_z=-1$,
1 as $s_z=\ \ 0$,
2 as $s_z=+1$.
While qubism works best for translationally-invariants states (so in particular, all particles need to have the same dimension), we can do it for others.
Also, there are a few other Qubism-related plotting schemes. For example how='pairs_skewed':
End of explanation
fig = plt.figure(figsize=(8, 4))
for i in [1, 2]:
ax = plt.subplot(1, 2, i)
plot_qubism(0 * ket('0000'),
how='before_after',
legend_iteration=i, grid_iteration=i,
fig=fig, ax=ax)
Explanation: The one above emphasizes ferromagnetic (put on the left) vs antiferromagnetic (put on the right) states.
Another one how='before_after' (inspired by this) works in a bit different way: it uses typical recursion, but starting from middle particles. For example, the top left quadrant correspons to $|00\rangle_{N/2,N/2+1}$:
End of explanation
heis = sum([tensor([pauli]*2) for pauli in [sigmax(), sigmay(), sigmaz()]])
N = 10
gs = gs_of(spinchainize(heis, N, bc='periodic'))
fig = plt.figure(figsize=(12, 4))
for i, how in enumerate(['schmidt_plot', 'pairs', 'pairs_skewed', 'before_after']):
ax = plt.subplot(1, 4, i + 1)
if how == 'schmidt_plot':
plot_schmidt(gs,
fig=fig, ax=ax)
else:
plot_qubism(gs,
how=how,
fig=fig, ax=ax)
ax.set_title(how)
plt.tight_layout()
Explanation: It is very similar to the Schmidt plot (for the default splitting), with the only difference being ordering of the y axis (particle order is reversed). All entanglement properties are the same.
So how does it work on the same example?
Well, let us take spin chain for (Majumdar-Ghosh model for $J=0$), i.e.
$$H = \sum_{i=1}^N \vec{S}i \cdot \vec{S}{i+1}$$
for qubits.
End of explanation
product_1 = ket('0000')
product_2 = tensor([(ket('0') + ket('1')).unit()]*4)
w = (ket('0001') + ket('0010') + ket('0100') + ket('1000')).unit()
dicke_2of4 = (ket('0011') + ket('0101') + ket('0110') + ket('1001') + ket('1010') + ket('1100')).unit()
ghz = (ket('0000') + ket('1111')).unit()
states = ['product_1', 'product_2', 'w', 'dicke_2of4', 'ghz']
fig = plt.figure(figsize=(2 * len(states), 2))
for i, state_str in enumerate(states):
ax = plt.subplot(1, len(states), i + 1)
plot_qubism(eval(state_str), fig=fig, ax=ax)
ax.set_title(state_str)
plt.tight_layout()
Explanation: Seeing entanglement
End of explanation
def product_state(theta, phi=0, n=1):
single = Qobj([[np.cos(theta/2.)], [np.sin(theta/2.) * np.exp(1j*phi)]])
return tensor([single]*n)
thetas = 0.5 * np.pi * np.array([0., 0.5, 0.75, 1.])
phis = np.pi * np.array([0., 0.1, 0.2, 0.3])
fig, axes2d = plt.subplots(nrows=len(phis), ncols=len(thetas),
figsize=(6,6))
for i, row in enumerate(axes2d):
for j, cell in enumerate(row):
plot_qubism(product_state(thetas[j], phi=phis[i], n=8),
grid_iteration=1,
ax=cell)
if i == len(axes2d) - 1:
cell.set_xlabel("$\\theta={0:s}\pi$".format(["0", "(1/4)", "(3/8)", "(1/2)"][j]),
fontsize=16)
if j == 0:
cell.set_ylabel("$\\varphi={0:.1f}\pi$".format(phis[i] / np.pi),
fontsize=16)
plt.tight_layout()
Explanation: Then entanglement (or more precisely: the Schmidt rank) for a given partition is equal to the number of different, non-zero squares. (We don't allow rotations, we do allow multiplication by a factor and, what may be more tricky, linear superposition.)
Here we use partition of first 2 particles vs last 2, as indicated by lines.
That is,
* product_1 - only 1 non-zero square: Schmidt rank 1,
* product_2 - 4 non-zero squares, but they are the same: Schmidt rank 1,
* w - 3 non-zero quares, but two of them are the same: Schmidt rank 2,
* dicke_2of4 - 4 non-zero squares, but two of them are the same: Schmidt rank 3,
* ghz - 2 non-zero squares, each one different: Schmidt rank 2.
This is basis-independent, but it may be easier to work in one basis rather than another.
And for a comparison, let us see product states:
$$\left( \cos(\theta/2) |0\rangle + \sin(\theta/2) e^{i \varphi} |1\rangle \right)^N $$
End of explanation
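A possible numerical cross-check (my addition, not from the original text): for the 2+2 split, the Schmidt rank is simply the matrix rank of the 4x4 amplitude matrix, so the square-counting above can be verified directly:
# Assumed helper: reshape the 4-qubit amplitudes into (first two qubits) x (last two qubits)
def schmidt_rank_2v2(state):
    amplitudes = state.full().reshape(4, 4)
    return np.linalg.matrix_rank(amplitudes)
print(schmidt_rank_2v2(ghz))        # expected: 2
print(schmidt_rank_2v2(dicke_2of4)) # expected: 3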
from qutip.ipynbtools import version_table
version_table()
Explanation: In each plot squares are the same, up to a factor (which is visualized as intensity and hue).
You can look up the previous plots. Setting grid_iteration=2 would show the splitting of the first 4 particles vs the N-4 others.
And for how='before_after' it is the middle particles vs all others.
Versions
End of explanation |
12,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rotten Tomatoes movie review classifier using Keras and Tensorflow
Author
Step1: Download the Rotten Tomatoes movie reviews dataset
Step2: Import dependencies
Step3: Read the train data file
Step4: Summarize the training data
Get the unqiue label values in the training data
The sentiment labels are
Step5: Count the total number of training items
Step6: Summarize the distribution of the sentiment classes
Step7: Load test data
Step8: Load the sample submission file
Step9: Create sentiment column in the test dataset
Step10: Create a dataframe to store both train and test data
Step11: Pre-process the movie review string
Step12: Download NLTK datasets
Specify the NLTK corpus as 'punkt' or 'all'
Step13: Convert labels to categorical variables
Step14: Create train-validation split for training the model
Step15: Finding the maximum length of the review in the training corpus
Step16: Tokenize the input text
Tokenizing using Keras text pre-processor. This class allows to vectorize a text corpus, by turning each text into either a sequence of integers (each integer being the index of a token in a dictionary) or into a vector where the coefficient for each token could be binary, based on word count, based on tf-idf...
Step17: Padding the input text for a fixed input length
Step18: The role of embedding layer in a neural network
One-hot encoded vectors are high-dimensional and sparse. Let’s assume that we are doing Natural Language Processing (NLP) and have a dictionary of 2000 words. This means that, when using one-hot encoding, each word will be represented by a vector containing 2000 integers. And 1999 of these integers are zeros. In a big dataset this approach is not computationally efficient.
The vectors of each embedding get updated while training the neural network. If you have seen the image at the top of this post you can see how similarities between words can be found in a multi-dimensional space. This allows us to visualize relationships between words, but also between everything that can be turned into a vector through an embedding layer.
Read more about keras embedding layer
LSTM Model
Create a recurrent neural network model
Step19: Build and compile the LSTM model
Step20: Fetch saved model weights and load the weights file
Step21: Train the model
Step22: Save the model weights
Step23: Load model weights from weights file
Step24: Running model inference on the test data
Step25: Run inference for custom user input
Step26: CNN Model
Create a recurrent neural network model
Step27: Build and compile the CNN model
Step28: Fetch saved model weights and load the weights file
Step29: Train the model
Step30: Save the model weights
Step31: Load model weights from weights file
Step32: Running model inference on the test data
Step33: CNN +GRUModel
Create a recurrent neural network model
Step34: Build and compile the CNN +GRU model
Step35: Fetch saved model weights and load the weights file
Step36: Train the model
Step37: Save the model weights
Step38: Load model weights from weights file
Step39: Running model inference on the test data
Step40: Bidirectional GRU
Create a recurrent neural network model
Step41: Build and compile the Bidirectional GRU model
Step42: Fetch saved model weights and load the weights file
Step43: Train the model
Step44: Save the model weights
Step45: Load model weights from weights file
Step46: Running model inference on the test data
Step47: Glove word embedding
Step48: Create a recurrent neural network model
Step49: Build and compile the Bidirectional GRU model
Step50: Fetch saved model weights and load the weights file
Step51: Train the model
Step52: Save the model weights
Step53: Load model weights from weights file
Step54: Running model inference on the test data
Step55: Combine all | Python Code:
import os
colab_mode = True
download_rawData = True
setup = True
ROOT_DIR = '/content/'
WEIGHTS_FILENAME = 'RT_LSTM.h5'
WEIGHTS_FILE = os.path.join(ROOT_DIR, WEIGHTS_FILENAME)
from google.colab import files
if colab_mode and download_rawData:
files.upload()
if colab_mode and download_rawData:
! mkdir /root/.kaggle/
! mv /content/kaggle.json /root/.kaggle/
if setup:
! pip install kaggle
Explanation: Rotten Tomatoes movie review classifier using Keras and Tensorflow
Author:
Dr. Rahul Remanan {[email protected]}
Dr. Jesse Kanter {[email protected]}
Kaggle Rotten Tomatoes datasets
This is a modified fork of the Kaggle kernel here
The dataset is comprised of tab-separated files with phrases from the Rotten Tomatoes dataset. The train/test split has been preserved for the purposes of benchmarking, but the sentences have been shuffled from their original order. Each Sentence has been parsed into many phrases by the Stanford parser. Each phrase has a PhraseId. Each sentence has a SentenceId. Phrases that are repeated (such as short/common words) are only included once in the data.
Open this notebook in Google CoLab
Upload Kaggle authentication token
Before downloading the data, ensure that the terms of the competition are accepted.
End of explanation
! kaggle competitions download -c movie-review-sentiment-analysis-kernels-only
! kaggle datasets download -d terenceliu4444/glove6b100dtxt
! rm /root/.kaggle/kaggle.json
if setup:
! unzip -q /content/train.tsv.zip
! unzip -q /content/test.tsv.zip
if setup:
! unzip -q /content/glove6b100dtxt.zip
Explanation: Download the Rotten Tomatoes movie reviews dataset
End of explanation
import nltk
import os
import gc
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
from keras.preprocessing import sequence,text
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense,Dropout,Embedding,LSTM,Conv1D,GlobalMaxPooling1D,Flatten,MaxPooling1D,GRU,SpatialDropout1D,Bidirectional
from keras.callbacks import EarlyStopping
from keras.utils import to_categorical
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report,f1_score
warnings.filterwarnings("ignore")
#pd.set_option('display.max_colwidth',100)
pd.set_option('display.max_colwidth', -1)
Explanation: Import dependencies
End of explanation
train=pd.read_csv('/content/train.tsv',sep='\t')
print(train.shape)
train.head()
Explanation: Read the train data file
End of explanation
train['Sentiment'].unique()
Sent_dic={0:'negative', 1:'somewhat negative', 2:'neutral', 3:'somewhat positive', 4:'positive'}
Explanation: Summarize the training data
Get the unqiue label values in the training data
The sentiment labels are:
0 - negative
1 - somewhat negative
2 - neutral
3 - somewhat positive
4 - positive
End of explanation
len(train['Sentiment'])
Explanation: Count the total number of training items
End of explanation
train.groupby('Sentiment')['PhraseId'].nunique()
import seaborn as sns
sns.countplot(data=train,x='Sentiment',)
Explanation: Summarize the distribution of the sentiment classes
End of explanation
test=pd.read_csv('/content/test.tsv',sep='\t')
print(test.shape)
test.head()
Explanation: Load test data
End of explanation
sub=pd.read_csv('/content/sampleSubmission.csv')
sub.head()
Explanation: Load the sample submission file
End of explanation
test['Sentiment']=-999
test.head()
Explanation: Create sentiment column in the test dataset
End of explanation
df=pd.concat([train,
test], ignore_index=True)
print(df.shape)
df.tail()
del train,test
gc.collect()
Explanation: Create a dataframe to store both train and test data
End of explanation
from nltk.tokenize import word_tokenize
from nltk import FreqDist
from nltk.stem import SnowballStemmer,WordNetLemmatizer
stemmer=SnowballStemmer('english')
lemma=WordNetLemmatizer()
from string import punctuation
import re
Explanation: Pre-process the movie review string
End of explanation
if setup:
    nltk.download('punkt')     # tokenizer models used by word_tokenize
    nltk.download('wordnet')   # corpus used by WordNetLemmatizer
def clean_review(review_col):
review_corpus=[]
for i in range(0,len(review_col)):
review=str(review_col[i])
review=re.sub('[^a-zA-Z]',' ',review)
#review=[stemmer.stem(w) for w in word_tokenize(str(review).lower())]
review=[lemma.lemmatize(w) for w in word_tokenize(str(review).lower())]
review=' '.join(review)
review_corpus.append(review)
return review_corpus
df['clean_review']=clean_review(df.Phrase.values)
df.head()
df_train=df[df.Sentiment!=-999]
print (df_train.shape)
df_train.head()
df_test=df[df.Sentiment==-999]
df_test.drop('Sentiment',axis=1,inplace=True)
print(df_test.shape)
df_test.head()
del df
gc.collect()
train_text=df_train.clean_review.values
test_text=df_test.clean_review.values
target=df_train.Sentiment.values
Explanation: Download NLTK datasets
Download the NLTK resources used below: 'punkt' for word_tokenize and 'wordnet' for the lemmatizer (or simply download 'all').
End of explanation
y=to_categorical(target)
print(train_text.shape,target.shape,y.shape)
Explanation: Convert labels to categorical variables
End of explanation
X_train_text,X_val_text,y_train,y_val=train_test_split(train_text,y,test_size=0.2,stratify=y,random_state=123)
print(X_train_text.shape,y_train.shape)
print(X_val_text.shape,y_val.shape)
all_words=' '.join(X_train_text)
all_words=word_tokenize(all_words)
dist=FreqDist(all_words)
num_unique_word=len(dist)
num_unique_word
Explanation: Create train-validation split for training the model
End of explanation
r_len=[]
for text in X_train_text:
word=word_tokenize(text)
l=len(word)
r_len.append(l)
MAX_REVIEW_LEN=np.max(r_len)
MAX_REVIEW_LEN
max_features = num_unique_word
max_words = MAX_REVIEW_LEN
batch_size = 128
epochs = 3
num_classes=y.shape[1]
print ('Total number of sentiment classes: {} ...'.format(num_classes))
Explanation: Finding the maximum length of the review in the training corpus
End of explanation
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(X_train_text))
X_train = tokenizer.texts_to_sequences(X_train_text)
X_val = tokenizer.texts_to_sequences(X_val_text)
X_test = tokenizer.texts_to_sequences(test_text)
Explanation: Tokenize the input text
Tokenizing using the Keras text pre-processor. This class lets you vectorize a text corpus by turning each text into either a sequence of integers (each integer being the index of a token in a dictionary) or into a vector where the coefficient for each token can be binary, based on word count, or based on tf-idf.
End of explanation
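# A quick toy illustration (made-up sentences, not part of the notebook's data) of
# what the tokenizer produces: each word gets an integer index learned from the
# corpus, and each text becomes a sequence of those indices.
toy_tok = Tokenizer(num_words=10)
toy_tok.fit_on_texts(['the movie was great', 'the movie was awful'])
print(toy_tok.word_index)
print(toy_tok.texts_to_sequences(['the movie was great']))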
X_train = sequence.pad_sequences(X_train, maxlen=max_words)
X_val = sequence.pad_sequences(X_val, maxlen=max_words)
X_test = sequence.pad_sequences(X_test, maxlen=max_words)
print(X_train.shape,X_val.shape,X_test.shape)
Explanation: Padding the input text for a fixed input length
End of explanation
def model_LSTM():
model=Sequential()
model.add(Embedding(max_features,100,mask_zero=True))
model.add(LSTM(64,dropout=0.4, recurrent_dropout=0.4,return_sequences=True))
model.add(LSTM(32,dropout=0.5, recurrent_dropout=0.5,return_sequences=False))
model.add(Dense(4096, activation='tanh'))
model.add(Dense(num_classes,activation='softmax'))
return model
Explanation: The role of embedding layer in a neural network
One-hot encoded vectors are high-dimensional and sparse. Let’s assume that we are doing Natural Language Processing (NLP) and have a dictionary of 2000 words. This means that, when using one-hot encoding, each word will be represented by a vector containing 2000 integers. And 1999 of these integers are zeros. In a big dataset this approach is not computationally efficient.
The vectors of each embedding get updated while training the neural network. If you have seen the image at the top of this post you can see how similarities between words can be found in a multi-dimensional space. This allows us to visualize relationships between words, but also between everything that can be turned into a vector through an embedding layer.
Read more about keras embedding layer
LSTM Model
Create a recurrent neural network model
End of explanation
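# Toy sketch (illustrative only, separate from the model above) of what an Embedding
# layer does: a vocabulary of 2000 word ids is mapped to dense, trainable
# 100-dimensional vectors instead of sparse 2000-dimensional one-hot vectors.
toy_emb = Sequential()
toy_emb.add(Embedding(input_dim=2000, output_dim=100, input_length=3))
toy_emb.compile('rmsprop', 'mse')
print(toy_emb.predict(np.array([[4, 25, 7]])).shape)  # (1, 3, 100): one 100-d vector per word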
model1 = model_LSTM()
model1.compile(loss='categorical_crossentropy',
optimizer=Adam(lr=0.001),
metrics=['accuracy'])
model1.summary()
Explanation: Build and compile the LSTM model
End of explanation
! wget https://github.com/rahulremanan/python_tutorial/raw/master/NLP/10-Sentiment_analysis/weights/RT_LSTM.h5
try:
model1.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Fetch saved model weights and load the weights file
End of explanation
%%time
history1=model1.fit(X_train,
y_train,
validation_data=(X_val, y_val),
epochs=epochs,
batch_size=batch_size,
verbose=1)
Explanation: Train the model
End of explanation
model1.save_weights(WEIGHTS_FILE)
files.download(WEIGHTS_FILE)
Explanation: Save the model weights
End of explanation
try:
model1.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Load model weights from weights file
End of explanation
test_element=5
input_sequence = np.asarray([list(X_test[test_element])])
y_pred_LSTM=model1.predict(input_sequence,verbose=1)
print ('Input string: {} ...'.format(test_text[test_element]))
print ('Sentiment for the input string: {} ...'.format(Sent_dic[np.argmax(y_pred_LSTM)]))
print(y_pred_LSTM)
Sent_dic[np.argmax(y_pred_LSTM)]
Explanation: Running model inference on the test data
End of explanation
input_string = ['This movie was horrible']
input_text = tokenizer.texts_to_sequences(input_string)
input_sequence = sequence.pad_sequences(input_text, maxlen=max_words)
y_pred_LSTM=model1.predict(input_sequence,verbose=1)
print ('Input string: {} ...'.format(input_string))
print ('Sentiment for the input string: {} ...'.format(Sent_dic[np.argmax(y_pred_LSTM)]))
input_string = ['this movie was great']
input_text = tokenizer.texts_to_sequences(input_string)
input_sequence = sequence.pad_sequences(input_text, maxlen=max_words)
y_pred_LSTM=model1.predict(input_sequence,verbose=1)
print ('Input string: {} ...'.format(input_string))
print ('Sentiment for the input string: {} ...'.format(Sent_dic[np.argmax(y_pred_LSTM)]))
Explanation: Run inference for custom user input
End of explanation
def model_CNN():
model= Sequential()
model.add(Embedding(max_features,100,input_length=max_words))
model.add(Dropout(0.2))
model.add(Conv1D(64,kernel_size=3,padding='same',activation='relu',strides=1))
model.add(GlobalMaxPooling1D())
model.add(Dense(128,activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes,activation='softmax'))
return model
Explanation: CNN Model
Create a convolutional neural network model
End of explanation
model2 = model_CNN()
model2.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model2.summary()
Explanation: Build and compile the CNN model
End of explanation
#! wget https://github.com/rahulremanan/python_tutorial/raw/master/NLP/10-Sentiment_analysis/weights/RT_LSTM.h5
try:
model2.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Fetch saved model weights and load the weights file
End of explanation
%%time
history2=model2.fit(X_train,
y_train,
validation_data=(X_val, y_val),
epochs=epochs,
batch_size=batch_size,
verbose=1)
Explanation: Train the model
End of explanation
model2.save_weights(WEIGHTS_FILE)
files.download(WEIGHTS_FILE)
Explanation: Save the model weights
End of explanation
try:
model2.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Load model weights from weights file
End of explanation
test_element=5
input_sequence = np.asarray([list(X_test[test_element])])
y_pred=model2.predict(input_sequence,verbose=1)
print ('Input string: {} ...'.format(test_text[test_element]))
print ('Sentiment for the input string: {} ...'.format(Sent_dic[np.argmax(y_pred)]))
Explanation: Running model inference on the test data
End of explanation
def model_CNN_GRU():
model= Sequential()
model.add(Embedding(max_features,100,input_length=max_words))
model.add(Conv1D(64,kernel_size=3,padding='same',activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(GRU(128,return_sequences=True))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(128,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5,activation='softmax'))
return model
Explanation: CNN + GRU Model
Create a combined convolutional + recurrent (GRU) neural network model
End of explanation
model3 = model_CNN_GRU()
model3.compile(loss='categorical_crossentropy',optimizer=Adam(lr=0.001),metrics=['accuracy'])
model3.summary()
Explanation: Build and compile the CNN + GRU model
End of explanation
#! wget https://github.com/rahulremanan/python_tutorial/raw/master/NLP/10-Sentiment_analysis/weights/RT_LSTM.h5
try:
model3.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Fetch saved model weights and load the weights file
End of explanation
%%time
history3=model3.fit(X_train,
y_train,
validation_data=(X_val, y_val),
epochs=epochs,
batch_size=batch_size,
verbose=1)
Explanation: Train the model
End of explanation
model3.save_weights(WEIGHTS_FILE)
files.download(WEIGHTS_FILE)
Explanation: Save the model weights
End of explanation
try:
model3.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Load model weights from weights file
End of explanation
test_element=5
input_sequence = np.asarray([list(X_test[test_element])])
y_pred=model3.predict(input_sequence,verbose=1)
print ('Input string: {} ...'.format(test_text[test_element]))
print ('Sentiment for the input string: {} ...'.format(Sent_dic[np.argmax(y_pred)]))
Explanation: Running model inference on the test data
End of explanation
def model_BiDir_GRU():
model = Sequential()
model.add(Embedding(max_features, 100, input_length=max_words))
model.add(SpatialDropout1D(0.25))
model.add(Bidirectional(GRU(128)))
model.add(Dropout(0.5))
model.add(Dense(5, activation='softmax'))
return model
Explanation: Bidirectional GRU
Create a recurrent neural network model
End of explanation
model4 = model_BiDir_GRU()
model4.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model4.summary()
Explanation: Build and compile the Bidirectional GRU model
End of explanation
#! wget https://github.com/rahulremanan/python_tutorial/raw/master/NLP/10-Sentiment_analysis/weights/RT_LSTM.h5
try:
model4.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Fetch saved model weights and load the weights file
End of explanation
%%time
history4=model4.fit(X_train,
y_train,
validation_data=(X_val, y_val),
epochs=epochs,
batch_size=batch_size,
verbose=1)
Explanation: Train the model
End of explanation
model4.save_weights(WEIGHTS_FILE)
files.download(WEIGHTS_FILE)
Explanation: Save the model weights
End of explanation
try:
model4.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Load model weights from weights file
End of explanation
test_element=5
input_sequence = np.asarray([list(X_test[test_element])])
y_pred=model4.predict(input_sequence,verbose=1)
print ('Input string: {} ...'.format(test_text[test_element]))
print ('Sentiment for the input string: {} ...'.format(Sent_dic[np.argmax(y_pred)]))
Explanation: Running model inference on the test data
End of explanation
def get_coefs(word, *arr):
return word, np.asarray(arr, dtype='float32')
def get_embed_mat(EMBEDDING_FILE, max_features,embed_dim):
# word vectors
embeddings_index = dict(get_coefs(*o.rstrip().rsplit(' ')) for o in open(EMBEDDING_FILE, encoding='utf8'))
print('Found %s word vectors.' % len(embeddings_index))
# embedding matrix
word_index = tokenizer.word_index
num_words = min(max_features, len(word_index) + 1)
all_embs = np.stack(embeddings_index.values()) #for random init
embedding_matrix = np.random.normal(all_embs.mean(), all_embs.std(),
(num_words, embed_dim))
for word, i in word_index.items():
if i >= max_features:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
max_features = embedding_matrix.shape[0]
return embedding_matrix
# embedding matrix
EMBEDDING_FILE = '/content/glove.6B.100d.txt'
embed_dim = 100 #word vector dim
embedding_matrix = get_embed_mat(EMBEDDING_FILE,max_features,embed_dim)
print(embedding_matrix.shape)
Explanation: Glove word embedding
End of explanation
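# Quick sanity check (illustrative only): inspect the pretrained GloVe vector that
# landed in the embedding matrix for a word from the review vocabulary.
probe_word = 'movie'
if probe_word in tokenizer.word_index and tokenizer.word_index[probe_word] < max_features:
    print(embedding_matrix[tokenizer.word_index[probe_word]][:5])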
def model_Glove():
model = Sequential()
model.add(Embedding(max_features, embed_dim, input_length=X_train.shape[1],weights=[embedding_matrix],trainable=True))
model.add(SpatialDropout1D(0.25))
model.add(Bidirectional(GRU(128,return_sequences=True)))
model.add(Bidirectional(GRU(64,return_sequences=False)))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
return model
Explanation: Create a recurrent neural network model
End of explanation
model5 = model_Glove()
model5.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model5.summary()
Explanation: Build and compile the Bidirectional GRU model
End of explanation
#! wget https://github.com/rahulremanan/python_tutorial/raw/master/NLP/10-Sentiment_analysis/weights/RT_LSTM.h5
try:
model5.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Fetch saved model weights and load the weights file
End of explanation
%%time
history5=model5.fit(X_train,
y_train,
validation_data=(X_val, y_val),
epochs=4,
batch_size=batch_size,
verbose=1)
Explanation: Train the model
End of explanation
model5.save_weights(WEIGHTS_FILE)
files.download(WEIGHTS_FILE)
Explanation: Save the model weights
End of explanation
try:
model5.load_weights(WEIGHTS_FILE)
print ('Loaded model weights from: {} ...'.format(WEIGHTS_FILE))
except:
print ('No model weights file: {} found ...'.format(WEIGHTS_FILE))
Explanation: Load model weights from weights file
End of explanation
test_element=5
input_sequence = np.asarray([list(X_test[test_element])])
y_pred=model5.predict(input_sequence,verbose=1)
print ('Input string: {} ...'.format(test_text[test_element]))
print ('Sentiment for the input string: {} ...'.format(Sent_dic[np.argmax(y_pred)]))
Explanation: Running model inference on the test data
End of explanation
test_element=5
input_sequence = np.asarray([list(X_test[test_element])])
y_pred1=model1.predict(input_sequence,verbose=1)
y_pred2=model2.predict(input_sequence,verbose=1)
y_pred3=model3.predict(input_sequence,verbose=1)
y_pred4=model4.predict(input_sequence,verbose=1)
y_pred5=model5.predict(input_sequence,verbose=1)
pred1=np.argmax(y_pred1)
pred2=np.argmax(y_pred2)
pred3=np.argmax(y_pred3)
pred4=np.argmax(y_pred4)
pred5=np.argmax(y_pred5)
Sent_all=stats.mode([pred1,pred2,pred3,pred4,pred5],axis=0)[0][0]
print ('Input string: {} ...'.format(test_text[test_element]))
print ('Sentiment for the input string: {} ...'.format(Sent_dic[Sent_all]))
y_pred
Explanation: Combine all
End of explanation |
12,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hand-crafted features for GTZAN
The goal of this notebook is to create several audio feature descriptors for the GTZAN dataset, as proposed for many years as input for machine learning algorithms. We are going to use timbral texture based features and tempo based features for this. The main goal is to produce these features, classify them, and then compare the results with our proposed deep learning approach, which uses CNNs on the raw audio.
Step1: Visualization
Linear (and nonlinear) dimensionality reduction of the GTZAN features for visualization purposes
Step3: Classical Machine Learning
Step4: Logistic Regression
Step5: ElasticNet
Step6: Decision Tree
Step7: Random Forest
Step8: SVM
Step9: Results and save the model | Python Code:
import os
import librosa
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import kurtosis
from scipy.stats import skew
import sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectFromModel
import lightgbm as lgbm
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
# Set the seed
np.random.seed(42)
gtzan_dir = '../data/genres/'
# Parameters
song_samples = 22050*30
genres = {'metal': 0, 'disco': 1, 'classical': 2, 'hiphop': 3, 'jazz': 4,
'country': 5, 'pop': 6, 'blues': 7, 'reggae': 8, 'rock': 9}
def get_features(y, sr, n_fft = 1024, hop_length = 512):
# Features to concatenate in the final dictionary
features = {'centroid': None, 'roloff': None, 'flux': None, 'rmse': None,
'zcr': None, 'contrast': None, 'bandwidth': None, 'flatness': None}
# Count silence
if 0 < len(y):
y_sound, _ = librosa.effects.trim(y, frame_length=n_fft, hop_length=hop_length)
features['sample_silence'] = len(y) - len(y_sound)
# Using librosa to calculate the features
features['centroid'] = librosa.feature.spectral_centroid(y, sr=sr, n_fft=n_fft, hop_length=hop_length).ravel()
features['roloff'] = librosa.feature.spectral_rolloff(y, sr=sr, n_fft=n_fft, hop_length=hop_length).ravel()
features['zcr'] = librosa.feature.zero_crossing_rate(y, frame_length=n_fft, hop_length=hop_length).ravel()
features['rmse'] = librosa.feature.rms(y, frame_length=n_fft, hop_length=hop_length).ravel()
features['flux'] = librosa.onset.onset_strength(y=y, sr=sr).ravel()
features['contrast'] = librosa.feature.spectral_contrast(y, sr=sr).ravel()
features['bandwidth'] = librosa.feature.spectral_bandwidth(y, sr=sr, n_fft=n_fft, hop_length=hop_length).ravel()
features['flatness'] = librosa.feature.spectral_flatness(y, n_fft=n_fft, hop_length=hop_length).ravel()
# MFCC treatment
mfcc = librosa.feature.mfcc(y, n_fft = n_fft, hop_length = hop_length, n_mfcc=13)
for idx, v_mfcc in enumerate(mfcc):
features['mfcc_{}'.format(idx)] = v_mfcc.ravel()
# Get statistics from the vectors
def get_moments(descriptors):
result = {}
for k, v in descriptors.items():
result['{}_max'.format(k)] = np.max(v)
result['{}_min'.format(k)] = np.min(v)
result['{}_mean'.format(k)] = np.mean(v)
result['{}_std'.format(k)] = np.std(v)
result['{}_kurtosis'.format(k)] = kurtosis(v)
result['{}_skew'.format(k)] = skew(v)
return result
dict_agg_features = get_moments(features)
dict_agg_features['tempo'] = librosa.beat.tempo(y, sr=sr)[0]
return dict_agg_features
def read_process_songs(src_dir, debug = True):
# Empty array of dicts with the processed features from all files
arr_features = []
# Read files from the folders
for x,_ in genres.items():
folder = src_dir + x
for root, subdirs, files in os.walk(folder):
for file in files:
# Read the audio file
file_name = folder + "/" + file
signal, sr = librosa.load(file_name)
# Debug process
if debug:
print("Reading file: {}".format(file_name))
# Append the result to the data structure
features = get_features(signal, sr)
features['genre'] = genres[x]
arr_features.append(features)
return arr_features
%%time
# Get list of dicts with features and convert to dataframe
features = read_process_songs(gtzan_dir, debug=False)
df_features = pd.DataFrame(features)
df_features.shape
df_features.head()
df_features.to_csv('../data/gtzan_features.csv', index=False)
X = df_features.drop(['genre'], axis=1).values
y = df_features['genre'].values
Explanation: Hand-crafted features for GTZAN
The goal of this notebook is to create several audio feature descriptors for the GTZAN dataset, as proposed for many years as input for machine learning algorithms. We are going to use timbral texture based features and tempo based features for this. The main goal is to produce these features, classify them, and then compare the results with our proposed deep learning approach, which uses CNNs on the raw audio.
End of explanation
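# Optional sanity check (illustrative only, not part of the original pipeline): run
# the feature extractor on one second of synthetic noise and inspect a few of the
# aggregated descriptors it returns.
toy_sr = 22050
toy_signal = 0.1 * np.random.RandomState(0).randn(toy_sr)
toy_features = get_features(toy_signal, toy_sr)
print({k: round(float(v), 3) for k, v in list(toy_features.items())[:5]})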
# Standardize the dataset
scale = StandardScaler()
x_scaled = scale.fit_transform(X)
# Use PCA only for visualization
pca = PCA(n_components=35, whiten=True)
x_pca = pca.fit_transform(x_scaled)
print("cumulative explained variance ratio = {:.4f}".format(np.sum(pca.explained_variance_ratio_)))
# Use LDA only for visualization
lda = LDA()
x_lda = lda.fit_transform(x_scaled, y)
# Using tsne
tsne = TSNE(n_components=2, verbose=1, learning_rate=250)
x_tsne = tsne.fit_transform(x_scaled)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.scatter(x_pca[:,0], x_pca[:,1], c=y)
plt.colorbar()
plt.title("Embedded space with PCA")
plt.subplot(132)
plt.scatter(x_lda[:,0], x_lda[:,1], c=y)
plt.colorbar()
plt.title("Embedded space with LDA")
plt.subplot(133)
plt.scatter(x_tsne[:,0], x_tsne[:,1], c=y)
plt.colorbar()
plt.title("Embedded space with TSNE")
plt.show()
Explanation: Visualization
Linear (and nonlinear) dimensionality reduction of the GTZAN features for visualization purposes
End of explanation
# Helper to plot confusion matrix -- from Scikit-learn website
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
Explanation: Classical Machine Learning
End of explanation
params = {
"cls__penalty": ["l1", "l2"],
"cls__C": [0.5, 1, 2, 5],
"cls__max_iter": [500]
}
pipe_lr = Pipeline([
('scale', StandardScaler()),
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', LogisticRegression())
])
grid_lr = GridSearchCV(pipe_lr, params, scoring='accuracy', n_jobs=6, cv=5)
grid_lr.fit(X_train, y_train)
preds = grid_lr.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_lr.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
Explanation: Logistic Regression
End of explanation
params = {
"cls__loss": ['log'],
"cls__penalty": ["elasticnet"],
"cls__l1_ratio": [0.15, 0.25, 0.5, 0.75],
}
pipe_en = Pipeline([
('scale', StandardScaler()),
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', SGDClassifier())
])
grid_en = GridSearchCV(pipe_en, params, scoring='accuracy', n_jobs=6, cv=5)
grid_en.fit(X_train, y_train)
preds = grid_en.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_en.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
Explanation: ElasticNet
End of explanation
params = {
"cls__criterion": ["gini", "entropy"],
"cls__splitter": ["best", "random"],
}
pipe_cart = Pipeline([
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', DecisionTreeClassifier())
])
grid_cart = GridSearchCV(pipe_cart, params, scoring='accuracy', n_jobs=6, cv=5)
grid_cart.fit(X_train, y_train)
preds = grid_cart.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_cart.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
Explanation: Decision Tree
End of explanation
params = {
"cls__n_estimators": [100, 250, 500, 1000],
"cls__criterion": ["gini", "entropy"],
"cls__max_depth": [5, 7, None]
}
pipe_rf = Pipeline([
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', RandomForestClassifier())
])
grid_rf = GridSearchCV(pipe_rf, params, scoring='accuracy', n_jobs=6, cv=5)
grid_rf.fit(X_train, y_train)
preds = grid_rf.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_rf.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
Explanation: Random Forest
End of explanation
params = {
"cls__C": [0.5, 1, 2, 5],
"cls__kernel": ['rbf', 'linear', 'sigmoid'],
}
pipe_svm = Pipeline([
('scale', StandardScaler()),
('var_tresh', VarianceThreshold(threshold=(.8 * (1 - .8)))),
('feature_selection', SelectFromModel(lgbm.LGBMClassifier())),
('cls', SVC())
])
grid_svm = GridSearchCV(pipe_svm, params, scoring='accuracy', n_jobs=6, cv=5)
grid_svm.fit(X_train, y_train)
preds = grid_svm.predict(X_test)
print("best score on validation set (accuracy) = {:.4f}".format(grid_svm.best_score_))
print("best score on test set (accuracy) = {:.4f}".format(accuracy_score(y_test, preds)))
Explanation: SVM
End of explanation
cm = confusion_matrix(y_test, preds)
classes = ['metal', 'disco', 'classical', 'hiphop', 'jazz', 'country', 'pop', 'blues', 'reggae', 'rock']
plt.figure(figsize=(10,10))
plot_confusion_matrix(cm, classes, normalize=True)
from sklearn.externals import joblib
joblib.dump(grid_svm, "../models/pipe_svm.joblib")
Explanation: Results and save the model
End of explanation |
12,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Score functions
Original environment score function
The environment score depends on
Step2: Using cube root instead of log
Step3: Shannon index, based on number of individuals
Step4: Shannon index, based on biomass
Step5: Biomass-based Shannon index times total biomass
Moved shannonIndexBiomassProduct into create_feature_file.py
Average of Shannon index by trophic level
Step6: Net production
moved to create_feature_file.netProduction
Select score function and plot
Step7: Shannon index test
Checking against first timestep of avgShannonIndexByTrophicLevel, set 1, sim 1, trophic level 2
Step8: Net production trend
It seems strange that the net production trend tends to be slightly positive.
The plot below demonstrates how the derivative of an exponentially decaying function has an upward trend.
For net production, a simple average is more appropriate than a trend. It is negative, as expected.
Step9: Splitting biomass between multiple species in the top trophic level lowers score | Python Code:
def environmentScoreNoRounding(speciesData, nodeConfig, biomassData):
numTimesteps = len(biomassData[nodeConfig[0]['nodeId']])
scores = np.empty(numTimesteps)
for timestep in range(numTimesteps):
# Calculate the Ecosystem Score for this timestep
biomass = 0
numSpecies = 0
for node in nodeConfig:
nodeId = node['nodeId']
perUnitBiomass = node['perUnitBiomass']
# Sometimes biomass can go slightly negative.
# Clip to 0 to avoid complex numbers in score calculation.
totalBiomass = max(0, biomassData[nodeId][timestep])
if totalBiomass > 0:
numSpecies += 1
biomass += perUnitBiomass * pow(totalBiomass / perUnitBiomass,
speciesData[nodeId]['trophicLevel'])
if biomass > 0:
biomass = log2(biomass) * 5
scores[timestep] = pow(biomass, 2) + pow(numSpecies, 2)
return scores
Explanation: Score functions
Original environment score function
The environment score depends on:
- the number of species
- the following species-specific attributes:
- total biomass
- individual biomass
- number of individuals (total biomass / individual biomass)
- trophic level
(see create_feature_file.environmentScore())
Original environment score, without rounding
$$Score_t = \left( 5 \log_2 \left( \sum_{i=1}^N b_i \left(\frac{B_{it}}{b_i}\right)^{T_i} \right) \right)^2 + N^2$$
where $B_{it}$ is the total biomass of species $i$ at timestep $t$, $b_i$ is its per-unit (individual) biomass, $T_i$ is its trophic level, and $N$ is the number of surviving species.
End of explanation
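# Worked example with hypothetical numbers: two surviving species, each with
# per-unit biomass b_i = 1, total biomass B_i = 1000, and trophic levels 1.0 and
# 2.0. The inner sum is 1000**1.0 + 1000**2.0 = 1,001,000, so the score is
# (5 * log2(1,001,000))**2 + 2**2, roughly 9937.1.
print(pow(5 * log2(1000**1.0 + 1000**2.0), 2) + 2**2)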
def environmentScoreCubeRoot(speciesData, nodeConfig, biomassData):
    """
    Compute the Ecosystem Score for all timesteps for the given data and return
    the score time series. The calculations are taken from
    model.Ecosystem.updateEcosystemScore() in WoB_Server.
    """
numTimesteps = len(biomassData[nodeConfig[0]['nodeId']])
scores = np.empty(numTimesteps)
for timestep in range(numTimesteps):
# Calculate the Ecosystem Score for this timestep
biomass = 0
numSpecies = 0
for node in nodeConfig:
nodeId = node['nodeId']
perUnitBiomass = node['perUnitBiomass']
# Sometimes biomass can go slightly negative.
# Clip to 0 to avoid complex numbers in score calculation.
totalBiomass = max(0, biomassData[nodeId][timestep])
if totalBiomass > 0:
numSpecies += 1
biomass += perUnitBiomass * pow(totalBiomass / perUnitBiomass,
speciesData[nodeId]['trophicLevel'])
if biomass > 0:
biomass = pow(biomass, 1/3) * 5
scores[timestep] = pow(biomass, 2) + pow(numSpecies, 2)
return scores
Explanation: Using cube root instead of log
End of explanation
def shannonIndex(speciesData, nodeConfig, biomassData):
numTimesteps = len(biomassData[nodeConfig[0]['nodeId']])
scores = np.zeros(numTimesteps)
for timestep in range(numTimesteps):
individualCount = np.empty(len(nodeConfig))
for i, node in enumerate(nodeConfig):
speciesBiomass = max(0, biomassData[node['nodeId']][timestep])
individualBiomass = node['perUnitBiomass']
individualCount[i] = speciesBiomass / individualBiomass
totalIndividuals = individualCount.sum()
for i, node in enumerate(nodeConfig):
if individualCount[i] == 0:
continue
proportion = individualCount[i] / totalIndividuals
scores[timestep] -= proportion * log2(proportion)
return scores
Explanation: Shannon index, based on number of individuals
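This is the standard (base-2) Shannon diversity index,
$$H = -\sum_{i=1}^{N} p_i \log_2 p_i$$
where $p_i$ is the proportion of individuals belonging to species $i$.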
End of explanation
def shannonIndexBiomass(speciesData, nodeConfig, biomassData):
numTimesteps = len(biomassData[nodeConfig[0]['nodeId']])
scores = np.zeros(numTimesteps)
for timestep in range(numTimesteps):
speciesBiomass = np.empty(len(nodeConfig))
for i, node in enumerate(nodeConfig):
speciesBiomass[i] = max(0, biomassData[node['nodeId']][timestep])
totalBiomass = speciesBiomass.sum()
for i, node in enumerate(nodeConfig):
if speciesBiomass[i] <= 0:
continue
proportion = speciesBiomass[i] / totalBiomass
scores[timestep] -= proportion * log2(proportion)
return scores
Explanation: Shannon index, based on biomass
End of explanation
def avgShannonIndexByTrophicLevel(speciesData, nodeConfig, biomassData):
numTimesteps = len(biomassData[nodeConfig[0]['nodeId']])
scores = np.zeros(numTimesteps)
for timestep in range(numTimesteps):
# Organize species biomass values into lists by trophic level
sb = {} # species biomass by trophic level
for i, node in enumerate(nodeConfig):
trophicLevel = round(speciesData[node['nodeId']]['trophicLevel'])
biomass = max(0, biomassData[node['nodeId']][timestep])
if trophicLevel not in sb:
sb[trophicLevel] = [biomass]
else:
sb[trophicLevel].append(biomass)
# Calculate Shannon index for each trophic level
shannon = np.zeros(len(sb)) # note: index is not trophic level, which is not relevent at this point
for i, biomassList in enumerate(sb.values()):
totalBiomass = sum(biomassList)
for biomass in biomassList:
if biomass <= 0:
continue
proportion = biomass / totalBiomass
shannon[i] -= proportion * log2(proportion)
scores[timestep] = shannon.mean()
if timestep % 100 == 0:
print("timestep {}".format(timestep))
print("sb = {}".format(sb))
print("shannon = {}".format(shannon))
return scores
Explanation: Biomass-based Shannon index times total biomass
Moved shannonIndexBiomassProduct into create_feature_file.py
Average of Shannon index by trophic level
End of explanation
#score_function = environment_score
score_function = None
csvDir = '/Users/ben/SFSU/thesis/test-data/steadystate-search/3-4-5-7-13-30-31-42-45-49-50-51-52-53-57-65-72-74-75-85/biomass-data'
filenames = glob.glob(os.path.join(csvDir, '*.csv*')) + glob.glob(os.path.join(csvDir, '*.h5'))
# sort by sim number
filenames.sort(key=lambda f: (get_sim_number(f), f))
# sort descending by file size
#filenames.sort(key=lambda f: (-os.path.getsize(f), get_sim_number(f)))
file_basenames = list(map(os.path.basename, filenames))
def plotFile(file_basename):
global last_selected_file
last_selected_file = file_basename
filename = os.path.join(csvDir, file_basename)
plot_biomass_data(filename, score_function, show_legend=True, #figsize=(12,8),
#xlim=(15990, 16010),
ylim=(1e-12, 1e5), logx=False, logy=True)
#ylim=(0, 20000),
#log_scale=False)
plt.show()
try:
selectWidget = interactive(plotFile, file_basename=widgets.Select(
description="File", options=file_basenames, value=last_selected_file))
except:
selectWidget = interactive(plotFile, file_basename=widgets.Select(
description="File", options=file_basenames))
display(selectWidget)
Explanation: Net production
moved to create_feature_file.netProduction
Select score function and plot
End of explanation
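# The real implementation was moved to create_feature_file.netProduction and is not
# shown here. The sketch below is only a hypothetical illustration of the idea
# discussed in the "Net production trend" section: treat the timestep-to-timestep
# change in total biomass as net production and summarize it with a simple average
# rather than a trend.
def net_production_sketch(nodeConfig, biomassData):
    total = np.sum([np.asarray(biomassData[node['nodeId']]) for node in nodeConfig], axis=0)
    return np.diff(total).mean()  # average change in total biomass per timestep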
blist = [1751.0, 1415.0]
total = sum(blist)
s = 0
for b in blist:
proportion = b / total
s -= proportion * log2(proportion)
print(s)
Explanation: Shannon index test
Checking against first timestep of avgShannonIndexByTrophicLevel, set 1, sim 1, trophic level 2
End of explanation
y = np.array([32, 16, 8, 4, 2, 1, 1, 1])
x = np.arange(len(y))
dy = (y - np.roll(y, 1))[1:]
plt.plot(x, y, label='y')
plt.plot(x[1:], dy, label='dy')
slope, intercept, r_value, p_value, std_err = stats.linregress(x[1:], dy)
plt.plot(x[1:], x[1:] * slope + intercept, label='linear regression')
plt.legend()
print("slope = {}".format(slope))
print("average = {}".format(np.mean(dy)))
Explanation: Net production trend
It seems strange that the net production trend tends to be slightly positive.
The plot below demonstrates how the derivative of an exponentially decaying function has an upward trend.
For net production, a simple average is more appropriate than a trend. It is negative, as expected.
End of explanation
print(20**3)
print(10**3 + 10**3)
Explanation: Splitting biomass between multiple species in the top trophic level lowers score
End of explanation |
12,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this guided project, you'll practice recreating some of the plots using Matplotlib that Seaborn and Pandas allow you to generate using high-level functions. This deliberate practice will help prepare you for creating new kinds of plots in the future that these libraries don't provide.
We'll continue to work with the dataset from the American Community Survey on job outcomes for recent college graduates. Here are some of the columns in the dataset
Step1: Scatter Matrix with Pandas
Step2: Scatter Matrix with Matplotlib
Step3: Grouped Bar Plot with Pandas
Since the dataset on recent college graduates contains information on the number of males and females included in the study, you can create a grouped bar plot to compare the gender ratios across majors.
Step4: Grouped Bar Plot using Matplotlib
Step5: Next Steps
Here are some ideas to continue practicing what you've learned | Python Code:
# Setup the environment by importing the libraries we need
import pandas as pd
import matplotlib.pyplot as plt
# And run the necessary Jupyter magic so plots are displayed inline
%matplotlib notebook
# Read the dataset into a DataFrame
recent_grads = pd.read_csv('../data/recent-grads.csv')
# Start exploring the beginning and end of the data
recent_grads.head()
recent_grads.tail()
# Look at some summary statistics
recent_grads.describe()
# Use shape to see how many rows and columns we have
recent_grads.shape
# Create a new DataFrame with rows containing NaN values dropped
filtered_recent = recent_grads.dropna()
# And make sure we didn't drop too many rows
filtered_recent.shape
Explanation: In this guided project, you'll practice recreating some of the plots using Matplotlib that Seaborn and Pandas allow you to generate using high-level functions. This deliberate practice will help prepare you for creating new kinds of plots in the future that these libraries don't provide.
We'll continue to work with the dataset from the American Community Survey on job outcomes for recent college graduates. Here are some of the columns in the dataset:
* Rank - Rank by median earnings
* Major_code - Major code
* Major - Major description
* Major_category - Category of major
* Total - Total number of people with major
* Sample_size - Sample size (unweighted) of full-time
* Men - Male graduates
* Women - Female graduates
* ShareWomen - Women as share of total
* Employed - Number employed
Before we start creating data visualizations, let's import the libraries we need and remove rows contain null values.
End of explanation
# Create a scatter matrix with pandas
pd.scatter_matrix(recent_grads[['ShareWomen', 'Unemployment_rate']], figsize=(8,8))
Explanation: Scatter Matrix with Pandas
End of explanation
# Create a Figure instance and create 4 Axes instances
fig = plt.figure(figsize=(8,8))
ax11 = fig.add_subplot(2,2,1)
ax12 = fig.add_subplot(2,2,2)
ax21 = fig.add_subplot(2,2,3)
ax22 = fig.add_subplot(2,2,4)
# Now that we have 4 Axes instances, we can generate graphs for each
ax11.hist(filtered_recent['ShareWomen'])
ax22.hist(filtered_recent['Unemployment_rate'])
ax12.scatter(filtered_recent['Unemployment_rate'], filtered_recent['ShareWomen'])
ax21.scatter(filtered_recent['ShareWomen'], filtered_recent['Unemployment_rate'])
# Now let's tweak the appearance.
# To tweak how the axis ticks look, you need to grab a subplot's XAxis
# or YAxis instance and call specific methods.
# Use the Axes methods get_xaxis() and get_yaxis() to get these axes.
# Hide the x-axis ticks for the 2 subplots on the top row
ax11.xaxis.set_visible(False)
ax12.xaxis.set_visible(False)
ax12.yaxis.set_visible(False)
ax22.yaxis.set_visible(False)
# Assign the column names as the x-axis and y-axis labels
ax11.set_ylabel('ShareWomen')
ax21.set_ylabel('Unemployment_rate')
ax21.set_xlabel('ShareWomen')
ax22.set_xlabel('Unemployment_rate')
# Remove the spacing between subplots to match the Pandas scatter matrix
fig.subplots_adjust(wspace=0, hspace=0)
# The last remaining piece is to customize the x-axis and y-axis ticks
# Use the Axes methods set_xlim() and set_ylim to set data limits
ax11.set(ylim=(0,30))
ax12.set(ylim=(0.0,1.0))
ax21.set(xlim=(0.0,1.0), ylim=(0.0,0.20))
ax22.set(xlim=(0.0,0.20))
# Use the Axes methods set_xticklabels() and set_yticklabels()
ax11.set_yticklabels([0, 5, 10, 15, 20, 25, 30])
ax21.set_yticklabels([0.0, 0.05, 0.10, 0.15])
ax21.set_xticklabels([0.0, 0.2, 0.4, 0.6, 0.8], rotation=90)
ax22.set_xticklabels([0.00, 0.05, 0.10, 0.15, 0.20], rotation=90)
Explanation: Scatter Matrix with Matplotlib
End of explanation
# Add a ShareMen column
recent_grads['ShareMen'] = 1 - recent_grads['ShareWomen']
# First filter the DataFrame down to the rows you want visualized (Arts majors)
arts = recent_grads[recent_grads['Major_category'] == 'Arts']
arts.set_index("Major", inplace=True)
arts.head()
# Create a Grouped Bar Plot using Pandas
arts[['ShareMen', 'ShareWomen']].plot(kind='bar', figsize=(8,8))
Explanation: Grouped Bar Plot with Pandas
Since the dataset on recent college graduates contains information on the number of males and females included in the study, you can create a grouped bar plot to compare the gender ratios across majors.
End of explanation
# import NumPy and use arange to generate a list of integer values
import numpy as np
locs = np.arange(len(arts))
locs
# Create a Figure instance and add a single subplot
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(1,1,1)
# Generate the bars for the ShareMen column
bar_width = 0.35
bar_1 = ax.bar(left=locs, height=arts['ShareMen'], width=bar_width)
# Use the Axes method set_xticklabels() to assign the major names
ax.set_xticklabels(arts.index, rotation=90)
# We need a list of placement values for the new bars that are offset
offset_locs = locs + bar_width
# Generate the bars for the ShareWomen column
bar_2 = ax.bar(left=offset_locs, height=arts['ShareWomen'], width=bar_width, color='green')
# Align the x-asis labels better with the grouped bars
ax.set_xticks(offset_locs)
# Create a legend
plt.legend((bar_1, bar_2), ('ShareMen', 'ShareWomen'), loc='upper left')
# Display the background grid
plt.grid(True)
Explanation: Grouped Bar Plot using Matplotlib
End of explanation
# Gender ratios stacked bar plot
arts[['ShareMen', 'ShareWomen']].plot.bar(figsize=(8,8), stacked=True)
# Box plot
arts[['ShareMen', 'ShareWomen']].plot.box(figsize=(8,8))
Explanation: Next Steps
Here are some ideas to continue practicing what you've learned:
* Visualize the gender ratios for each major by creating a stacked box plot instead of a grouped bar plot
* Practice generating histograms from scratch without relying on the Matplotlib method hist()
* Practice generating box plots using Matplotlib only
* Structure your Matplotlib code as a function so you can reuse the code.
* Write a function that takes in a DataFrame, takes in a list of column names, and generates a scatter matrix for combinations of columns.
* While the scatter matrix you generated in this guided project used 2 columns, how can you generalize the code to handle n columns.
* As n gets larger, how do you dynamically specify the figsize parameter when creating the Plot instance so the data visualization is legible with more subplots.
End of explanation |
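# One possible starting point for the ideas above (an illustrative sketch, not part
# of the original guided project): a reusable function that takes a DataFrame and a
# list of column names, builds an n-by-n scatter matrix with Matplotlib, and scales
# the figure size with the number of subplots. Assumes at least two columns.
def scatter_matrix(df, columns, size_per_plot=3):
    n = len(columns)
    fig, axes = plt.subplots(n, n, figsize=(size_per_plot * n, size_per_plot * n))
    for i, row_col in enumerate(columns):
        for j, col_col in enumerate(columns):
            ax = axes[i][j]
            if i == j:
                ax.hist(df[row_col].dropna())         # histograms on the diagonal
            else:
                ax.scatter(df[col_col], df[row_col])  # scatter plots off the diagonal
            if i < n - 1:
                ax.xaxis.set_visible(False)           # only show x ticks on the bottom row
            if j > 0:
                ax.yaxis.set_visible(False)           # only show y ticks on the left column
        axes[i][0].set_ylabel(row_col)
    for j, col_col in enumerate(columns):
        axes[n - 1][j].set_xlabel(col_col)
    fig.subplots_adjust(wspace=0, hspace=0)
    return fig, axes

scatter_matrix(filtered_recent, ['ShareWomen', 'Unemployment_rate', 'Men'])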