Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---|
9,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing the Keras Sequential API
Learning Objectives
1. Learn how to use feature columns in a Keras model
1. Build a DNN model using the Keras Sequential API
1. Learn how to train a model with Keras
1. Learn how to save/load, and deploy a Keras model on GCP
1. Learn how to deploy and make predictions with a Keras model
Introduction
The Keras Sequential API allows you to create TensorFlow models layer by layer. This is useful for building most kinds of machine learning models, but it does not allow you to create models that share layers, re-use layers, or have multiple inputs or outputs.
In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using AI Platform and see how to call our model for online prediction.
Step1: Start by importing the necessary libraries for this lab.
Step2: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
Step3: Use tf.data to read the CSV files
We wrote these functions for reading data from the csv files above in the previous notebook.
Step4: Build a simple keras DNN model
We will use feature columns to connect our raw data to our Keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow feature columns guide.
In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column()
We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop.
Lab Task #1
Step5: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
Lab Task #2a
Step6: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments
Step7: Train the model
To train your model, Keras provides three functions that can be used
Step8: There are various arguments you can set when calling the .fit method. Here x specifies the input data, which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callbacks argument we specify a TensorBoard callback so we can inspect TensorBoard after training.
Lab Task #3
Step9: High-level model evaluation
Once we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
Step10: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
Step11: Making predictions with our model
To make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted.
Step12: Export and deploy our model
Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
Lab Task #4
Step13: Deploy our model to AI Platform
Finally, we will deploy our trained model to AI Platform and see how we can make online predictions.
Lab Task #5a
Step14: Lab Task #5b | Python Code:
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0
Explanation: Introducing the Keras Sequential API
Learning Objectives
1. Learn how to use feature columns in a Keras model
1. Build a DNN model using the Keras Sequential API
1. Learn how to train a model with Keras
1. Learn how to save/load, and deploy a Keras model on GCP
1. Learn how to deploy and make predictions with a Keras model
Introduction
The Keras Sequential API allows you to create TensorFlow models layer by layer. This is useful for building most kinds of machine learning models, but it does not allow you to create models that share layers, re-use layers, or have multiple inputs or outputs.
In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using AI Platform and see how to call our model for online prediction.
End of explanation
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
Explanation: Start by importing the necessary libraries for this lab.
End of explanation
!ls -l ../data/*.csv
!head ../data/taxi*.csv
Explanation: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
End of explanation
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
UNWANTED_COLS = ['pickup_datetime', 'key']
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# prefetch so input preparation overlaps training; tf.data.experimental.AUTOTUNE can be used instead of 1
dataset = dataset.prefetch(1)
return dataset
Explanation: Use tf.data to read the CSV files
We wrote these functions for reading data from the csv files above in the previous notebook.
End of explanation
INPUT_COLS = [
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
]
# Create input layer of feature columns
# TODO 1
feature_columns = # TODO: Your code goes here.
Explanation: Build a simple keras DNN model
We will use feature columns to connect our raw data to our Keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow feature columns guide.
In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column()
We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop.
Lab Task #1: Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the element of the INPUT_COLS list, while the values should be numeric feature columns.
End of explanation
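A minimal sketch of what this dictionary comprehension could look like (one possible answer, shown only as a hint, not necessarily the official solution):
# Possible sketch: one numeric feature column per raw input column
feature_columns = {
    colname: tf.feature_column.numeric_column(colname)
    for colname in INPUT_COLS
}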
# Build a keras DNN model using Sequential API
# TODO 2a
model = # TODO: Your code goes here.
Explanation: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
Lab Task #2a: Create a deep neural network using Keras's Sequential API. In the cell below, use the tf.keras.layers library to create all the layers for your deep neural network.
End of explanation
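One possible sketch of such a network; the DenseFeatures layer consumes the feature columns defined above, and the hidden-layer sizes here are arbitrary choices rather than something prescribed by the lab:
# Possible sketch of a small DNN built with the Sequential API
model = Sequential([
    DenseFeatures(feature_columns=feature_columns.values()),
    Dense(units=32, activation='relu', name='h1'),
    Dense(units=8, activation='relu', name='h2'),
    Dense(units=1, activation='linear', name='output'),
])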
# TODO 2b
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return # TODO: Your code goes here
# Compile the keras model
# TODO: Your code goes here.
Explanation: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:
An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class.
A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the Losses class (such as categorical_crossentropy or mse), or it can be a custom objective function.
A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.
We will add an additional custom metric called rmse to our list of metrics which will return the root mean square error.
Lab Task #2b: Compile the model you created above. Create a custom loss function called rmse which computes the root mean squared error between y_true and y_pred. Pass this function to the model as an evaluation metric.
End of explanation
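A possible sketch of the custom metric and the compile call; the choice of the adam optimizer is an assumption, any optimizer would do:
# Possible sketch of an RMSE metric and a standard compile call
def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])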
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern='../data/taxi-train*',
batch_size=TRAIN_BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
evalds = create_dataset(
pattern='../data/taxi-valid*',
batch_size=1000,
mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
Explanation: Train the model
To train your model, Keras provides three functions that can be used:
1. .fit() for training a model for a fixed number of epochs (iterations on a dataset).
2. .fit_generator() for training a model on data yielded batch-by-batch by a generator
3. .train_on_batch() runs a single gradient update on a single batch of data.
The .fit() function works well for small datasets which can fit entirely in memory. However, for large datasets (or if you need to manipulate the training data on the fly via data augmentation, etc) you will need to use .fit_generator() instead. The .train_on_batch() method is for more fine-grained control over training and accepts only a single batch of data.
The taxifare dataset we sampled is small enough to fit in memory, so we could use .fit to train our model. Our create_dataset function above generates batches of training examples, so we could also use .fit_generator. In fact, when calling .fit the method inspects the data and, if it's a generator (as our dataset is), it will automatically invoke .fit_generator for training.
We start by setting up some parameters for our training job and create the data generators for the training and validation data.
We refer you to the blog post ML Design Pattern #3: Virtual Epochs for further details on why we express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
End of explanation
# TODO 3
%time
steps_per_epoch = # TODO: Your code goes here.
LOGDIR = "./taxi_trained"
history = # TODO: Your code goes here.
Explanation: There are various arguments you can set when calling the .fit method. Here x specifies the input data, which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callbacks argument we specify a TensorBoard callback so we can inspect TensorBoard after training.
Lab Task #3: In the cell below, you will train your model. First, define the steps_per_epoch then train your model using .fit(), saving the model training output to a variable called history.
End of explanation
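A possible sketch of the training call; the steps_per_epoch formula follows the virtual-epochs pattern described above:
# Possible sketch: NUM_EVALS virtual epochs covering NUM_TRAIN_EXAMPLES examples in total
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(
    x=trainds,
    steps_per_epoch=steps_per_epoch,
    epochs=NUM_EVALS,
    validation_data=evalds,
    callbacks=[TensorBoard(LOGDIR)],
)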
model.summary()
Explanation: High-level model evaluation
Once we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
End of explanation
RMSE_COLS = ['rmse', 'val_rmse']
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ['loss', 'val_loss']
pd.DataFrame(history.history)[LOSS_COLS].plot()
Explanation: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
End of explanation
model.predict(x={"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0])},
steps=1)
Explanation: Making predictions with our model
To make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted.
End of explanation
# TODO 4a
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR,
datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save( # TODO: Your code goes here.
# TODO 4b
!saved_model_cli show \
--tag_set # TODO: Your code goes here.
--signature_def # TODO: Your code goes here.
--dir # TODO: Your code goes here.
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
Explanation: Export and deploy our model
Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
Lab Task #4: Use tf.saved_model.save to export the trained model to a Tensorflow SavedModel format. Reference the documentation for tf.saved_model.save as you fill in the code for the cell below.
Next, print the signature of your saved model using the SavedModel Command Line Interface command saved_model_cli. You can read more about the command line interface and the show and run commands it supports in the documentation here.
End of explanation
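A possible sketch of the export and inspection step; serve and serving_default below are the TensorFlow defaults for a Keras SavedModel:
# Possible sketch of the export and of the saved_model_cli inspection
tf.saved_model.save(model, EXPORT_PATH)
!saved_model_cli show \
 --tag_set serve \
 --signature_def serving_default \
 --dir {EXPORT_PATH}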
%%bash
# TODO 5a
PROJECT= #TODO: Change this to your PROJECT
BUCKET=${PROJECT}
REGION=us-east1
MODEL_NAME=taxifare
VERSION_NAME=dnn
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "$MODEL_NAME already exists"
else
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create \
--model= #TODO: Your code goes here.
--framework= #TODO: Your code goes here.
--python-version= #TODO: Your code goes here.
--runtime-version= #TODO: Your code goes here.
--origin= #TODO: Your code goes here.
--staging-bucket= #TODO: Your code goes here.
%%writefile input.json
{"pickup_longitude": -73.982683, "pickup_latitude": 40.742104,"dropoff_longitude": -73.983766,"dropoff_latitude": 40.755174,"passenger_count": 3.0}
Explanation: Deploy our model to AI Platform
Finally, we will deploy our trained model to AI Platform and see how we can make online predictions.
Lab Task #5a: Complete the code in the cell below to deploy your trained model to AI Platform using the gcloud ai-platform versions create command. Have a look at the documentation for how to create model version with gcloud.
End of explanation
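A possible sketch of the version-creation command; the runtime and Python versions are placeholders that depend on your environment, and the origin path is an assumed location for the SavedModel you copied to Cloud Storage:
gcloud ai-platform versions create $VERSION_NAME \
 --model=$MODEL_NAME \
 --framework=tensorflow \
 --python-version=3.7 \
 --runtime-version=2.1 \
 --origin=gs://$BUCKET/taxifare/savedmodel \
 --staging-bucket=gs://$BUCKET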
# TODO 5b
!gcloud ai-platform predict \
--model #TODO: Your code goes here.
--json-instances #TODO: Your code goes here.
--version #TODO: Your code goes here.
Explanation: Lab Task #5b: Complete the code in the cell below to call prediction on your deployed model for the example you just created in the input.json file above.
End of explanation |
9,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create the siamese net feature extraction model
Step1: Restore from checkpoint and calculate the features for all of the training data
Step2: Searching for similar test images in the training set based on the siamese features | Python Code:
img_placeholder = tf.placeholder(tf.float32, [None, 28, 28, 1], name='img')
net = mnist_model(img_placeholder, reuse=False)
Explanation: Create the siamese net feature extraction model
End of explanation
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
ckpt = tf.train.get_checkpoint_state("model")
saver.restore(sess, "model/model.ckpt")
train_feat = sess.run(net, feed_dict={img_placeholder:train_images[:10000]})
Explanation: Restore from checkpoint and calculate the features for all of the training data
End of explanation
#generate new random test image
idx = np.random.randint(0, len_test)
im = test_images[idx]
#show the test image
show_image(idx, test_images)
print("This is image from id:", idx)
#run the test image through the network to get the test features
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
ckpt = tf.train.get_checkpoint_state("model")
saver.restore(sess, "model/model.ckpt")
search_feat = sess.run(net, feed_dict={img_placeholder:[im]})
#calculate the cosine similarity and sort
dist = cdist(train_feat, search_feat, 'cosine')
rank = np.argsort(dist.ravel())
#show the top n similar image from train data
n = 7
show_image(rank[:n], train_images)
print("retrieved ids:", rank[:n])
Explanation: Searching for similar test images in the training set based on the siamese features
End of explanation |
9,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Python Tour of Data Science
Step1: 2 Vectorization
First step
Step2: Exploration question
Step3: 3 Pre-processing
The independent variables $X$ are the bags of words.
The target $y$ is the number of likes.
Split in half for training and testing sets.
Step4: 4 Linear regression
Using numpy, fit and evaluate the linear model $$\hat{w}, \hat{b} = \operatorname*{arg min}_{w,b} \| Xw + b - y \|_2^2.$$
Please define a class LinearRegression with two methods
Step5: Interpretation
Step6: 5 Interactivity
Create a slider for the number of words, i.e. the dimensionality of the samples $x$.
Print the accuracy for each change on the slider.
Step7: 6 Scikit learn
Fit and evaluate the linear regression model using sklearn.
Evaluate the model with the mean squared error metric provided by sklearn.
Compare with your implementation.
Step8: 7 Deep Learning
Try a simple deep learning model!
Another modeling choice would be to use a Recurrent Neural Network (RNN) and feed it the sentence word by word.
Step9: 8 Evaluation
Use matplotlib to plot a performance visualization, e.g. the predicted number of likes against the true number of likes for all posts / tweets.
What do you observe? What are your suggestions to improve the performance? | Python Code:
import pandas as pd
import numpy as np
from IPython.display import display
import os.path
folder = os.path.join('..', 'data', 'social_media')
# Your code here.
Explanation: A Python Tour of Data Science: Data Acquisition & Exploration
Michaël Defferrard, PhD student, EPFL LTS2
Exercise: problem definition
Theme of the exercise: understand the impact of your communication on social networks. A real life situation: the marketing team needs help in identifying which were the most engaging posts they made on social platforms to prepare their next AdWords campaign.
This notebook is the second part of the exercise. Given the data we collected from Facebook and Twitter in the last exercise, we will construct an ML model and evaluate how good it is at predicting the number of likes of a post / tweet given the content.
1 Data importation
Use pandas to import the facebook.sqlite and twitter.sqlite databases.
Print the 5 first rows of both tables.
The facebook.sqlite and twitter.sqlite SQLite databases can be created by running the data acquisition and exploration exercise.
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
nwords = 100
# Your code here.
Explanation: 2 Vectorization
First step: transform the data into a format understandable by the machine. What to do with text? A common choice is the so-called bag-of-words model, where we represent each word as an integer and simply count the number of appearances of each word in a document.
Example
Let's say we have a vocabulary represented by the following correspondence table.
| Integer | Word |
|:-------:|---------|
| 0 | unknown |
| 1 | dog |
| 2 | school |
| 3 | cat |
| 4 | house |
| 5 | work |
| 6 | animal |
Then we can represent the following document
I have a cat. Cats are my preferred animals.
by the vector $x = [6, 0, 0, 2, 0, 0, 1]^T$.
Tasks
Construct a vocabulary of the 100 most occuring words in your dataset.
Build a vector $x \in \mathbb{R}^{100}$ for each document (post or tweet).
Tip: the natural language modeling libraries nltk and gensim are useful for advanced operations. You don't need them here.
Arise a first data cleaning question. We may have some text in french and other in english. What do we do ?
End of explanation
# Your code here.
Explanation: Exploration question: what are the 5 most used words? Exploring your data while playing with it is a useful sanity check.
End of explanation
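A possible sketch of the vectorization and the exploration question; the variable text holding one string per post / tweet is an assumed name, not something defined earlier in this notebook:
# Sketch (assumes `text` is a list or Series with one string per post / tweet)
vectorizer = CountVectorizer(max_features=nwords)
bags = vectorizer.fit_transform(text)               # one row of word counts per document
vocabulary = vectorizer.get_feature_names()         # the nwords most frequent words
counts = np.asarray(bags.sum(axis=0)).ravel()
print([vocabulary[i] for i in counts.argsort()[::-1][:5]])   # 5 most used words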
# Your code here.
Explanation: 3 Pre-processing
The independent variables $X$ are the bags of words.
The target $y$ is the number of likes.
Split in half for training and testing sets.
End of explanation
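One possible sketch of this split, reusing the assumed names from the sketch above (bags for the documents and a likes Series aligned with them):
# Sketch: X = bag-of-words matrix, y = number of likes, first half for training
X = bags.toarray()
y = likes.values
n = len(y) // 2
X_train, X_test = X[:n], X[n:]
y_train, y_test = y[:n], y[n:]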
import scipy.sparse
# Your code here.
Explanation: 4 Linear regression
Using numpy, fit and evaluate the linear model $$\hat{w}, \hat{b} = \operatorname*{arg min}_{w,b} \| Xw + b - y \|_2^2.$$
Please define a class LinearRegression with two methods:
1. fit learn the parameters $w$ and $b$ of the model given the training examples.
2. predict gives the estimated number of likes of a post / tweet. That will be used to evaluate the model on the testing set.
To evaluate the model, create an accuracy(y_pred, y_true) function which computes the mean squared error $\frac1n \| \hat{y} - y \|_2^2$.
Hint: you may want to use the function scipy.sparse.linalg.spsolve().
End of explanation
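A minimal sketch of such a class, assuming X is a dense array of word counts as in the split above; the bias is handled by appending a column of ones so that a single sparse solve returns both $w$ and $b$:
# Sketch of a normal-equation fit, solving (X^T X) w = X^T y with spsolve
import scipy.sparse.linalg

class LinearRegression:
    def fit(self, X, y):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
        A = scipy.sparse.csr_matrix(Xb.T @ Xb)
        self.w = scipy.sparse.linalg.spsolve(A, Xb.T @ y)
        return self
    def predict(self, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return Xb @ self.w

def accuracy(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)              # mean squared error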
# Your code here.
Explanation: Interpretation: what are the most important words a post / tweet should include?
End of explanation
import ipywidgets
from IPython.display import clear_output
# Your code here.
Explanation: 5 Interactivity
Create a slider for the number of words, i.e. the dimensionality of the samples $x$.
Print the accuracy for each change on the slider.
End of explanation
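A possible sketch of the interactive loop; it reuses the assumed text, y, accuracy and LinearRegression names from the sketches above:
# Sketch: refit the model and print the test error whenever the slider moves
def evaluate(nwords=100):
    clear_output()
    X = CountVectorizer(max_features=nwords).fit_transform(text).toarray()
    n = len(y) // 2
    model = LinearRegression().fit(X[:n], y[:n])
    print('MSE with {} words: {:.2f}'.format(nwords, accuracy(model.predict(X[n:]), y[n:])))

ipywidgets.interact(evaluate, nwords=(10, 500, 10));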
from sklearn import linear_model, metrics
# Your code here.
Explanation: 6 Scikit learn
Fit and evaluate the linear regression model using sklearn.
Evaluate the model with the mean squared error metric provided by sklearn.
Compare with your implementation.
End of explanation
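A possible sketch of the scikit-learn comparison, reusing the train/test split assumed above:
# Sketch: same model with scikit-learn, evaluated with its mean_squared_error metric
sk_model = linear_model.LinearRegression()
sk_model.fit(X_train, y_train)
y_pred = sk_model.predict(X_test)
print('sklearn MSE:', metrics.mean_squared_error(y_test, y_pred))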
import os
os.environ['KERAS_BACKEND'] = 'theano' # tensorflow
import keras
# Your code here.
Explanation: 7 Deep Learning
Try a simple deep learning model!
Another modeling choice would be to use a Recurrent Neural Network (RNN) and feed it the sentence word by word.
End of explanation
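One possible sketch of a small fully-connected network with the standalone Keras API imported above; the layer sizes are arbitrary choices and the X_train/y_train names come from the assumed split above:
# Sketch: a tiny dense regression network on the bag-of-words features
from keras.models import Sequential
from keras.layers import Dense

nn = Sequential([
    Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    Dense(32, activation='relu'),
    Dense(1),
])
nn.compile(optimizer='adam', loss='mse')
nn.fit(X_train, y_train, epochs=10, batch_size=32, validation_data=(X_test, y_test))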
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
# Your code here.
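# A possible sketch (assumes y_test and y_pred computed with one of the models above):
plt.figure(figsize=(12, 4))
plt.plot(y_test, label='true likes')
plt.plot(y_pred, label='predicted likes')
plt.xlabel('post'); plt.ylabel('likes'); plt.legend();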
Explanation: 8 Evaluation
Use matplotlib to plot a performance visualization, e.g. the predicted number of likes against the true number of likes for all posts / tweets.
What do you observe? What are your suggestions to improve the performance?
End of explanation |
9,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
Locally store your Planet API key and start a session. Create a function to print JSON objects.
Step1: Stats
Here you will perform a statistics search of Planet's database, while getting familiar with the various filtering options.
Step2: Set up the date filter to a time of your choice.
Step3: Pick a coordinate over your AOI. You can select it from google maps.
Step4: Set up a cloud filter. Remember that lt stands for "less than" and gt stands for "greater than".
Step5: Now join all of the filters together.
Step6: Search the database to see how many results fall in this category. Insert the Item Type of your choice.
Step7: Congratulations, you have performed your first statistics search using the Data API!
Quick Search
Here you will perform a search for specific image IDs, using the search criteria you defined above, in order to download them!
All the code you need is below in small chunks, however they are in the wrong order! Re-order them correctly to download your image. | Python Code:
import os
import json
import requests
PLANET_API_KEY = os.getenv('PL_API_KEY')
# Setup Planet Data API base URL
URL = "https://api.planet.com/data/v1"
# Setup the session
session = requests.Session()
# Authenticate
session.auth = (PLANET_API_KEY, "")
res = session.get(URL)
res.status_code
# Helper function to print formatted JSON using the json module
def p(data):
print(json.dumps(data, indent=2))
Explanation: Setup
Locally store your Planet API key and start a session. Create a function to print JSON objects.
End of explanation
# Setup the stats URL
stats_url = "{}/stats".format(URL)
Explanation: Stats
Here you will perform a statistics search of Planet's database, while getting familiar with the various filtering options.
End of explanation
date_filter = {
"type": "DateRangeFilter", # Type of filter -> Date Range
"field_name": "acquired", # The field to filter on: "acquired" -> Date on which the "image was taken"
"config": {
"gte": "2000-01-01T00:00:00.000Z", # "gte" -> Greater than or equal to
}
}
Explanation: Set up the date filter to a time of your choice.
End of explanation
geometry = {
"type": "GeometryFilter",
"field_name": "geometry",
"config": {
"type": "Point",
"coordinates": [
0,
0
]
}
}
Explanation: Pick a coordinate over your AOI. You can select it from google maps.
End of explanation
# Setup Cloud Filter
cloud_filter = {
"type": "RangeFilter",
"field_name": "cloud_cover",
"config": {
"lt": 0.0,
"gt": 0.0
}
}
Explanation: Set up a cloud filter. Remember that lt stands for "less than" and gt stands for "greater than".
End of explanation
and_filter = {
"type": "AndFilter",
"config": [geometry, date_filter, cloud_filter]
}
p(and_filter)
Explanation: Now join all of the filters together.
End of explanation
item_types = ["PSScene4Band"]
# Setup the request
request = {
"item_types" : item_types,
"interval" : "year",
"filter" : and_filter
}
# Send the POST request to the API stats endpoint
res=session.post(stats_url, json=request)
# Print response
p(res.json())
Explanation: Search the database to see how many results fall in this category. Insert the Item Type of your choice.
End of explanation
#Send a request to the item's asset url in order to activate it for download
#This step might take some time
asset_activated = False
while asset_activated == False:
res = session.get(assets_url)
assets = res.json()
asset_status = image["status"]
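    # NOTE: refresh the asset record here (e.g. image = assets["analytic"]) so that asset_status can actually change between polls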
if asset_status == 'active':
asset_activated = True
print("Asset is active and ready to download")
p(image)
# Get the links for the item and find out what asset types are available
assets_url = feature["_links"]["assets"]
res = session.get(assets_url)
assets = res.json()
print(assets.keys())
# Setup the quick search endpoint url
# Create a request
quick_url = "{}/quick-search".format(URL)
item_types = ["PSScene4Band"]
request = {
"item_types" : item_types,
"filter" : and_filter
}
# Print the assets location endpoint for download
# Clicking on this url will download the image
location_url = image["location"]
print(location_url)
# Send the POST request to the API quick search endpoint
# Select the first feature from the search results and print its ID
# print the result
res = session.post(quick_url, json=request)
geojson = res.json()
feature = geojson["features"][0]
p(feature["id"])
# Pick an asset type
# Send a request to the activation url to activate the item
image = assets["analytic"]
activation_url = image["_links"]["activate"]
res = session.get(activation_url)
p(res.status_code)
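# A possible ordering of the chunks above (one reasonable solution): build the quick-search request,
# POST it and take the first feature id, follow the feature's assets link, pick the "analytic" asset
# and call its activation url, poll the assets url until the asset is active, then print the
# "location" url to download the image.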
Explanation: Congratulations, you have performed your first statistics search using the Data API!
Quick Search
Here you will perform a search for specific image IDs, using the search criteria you defined above, in order to download them!
All the code you need is below in small chunks, however they are in the wrong order! Re-order them correctly to download your image.
End of explanation |
9,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get the results of a single run
Step1: Done. Let's test the reshape_by_symbol function
Step2: So, the reshape_by_symbol function seems to work with run_single_val. It could be added to it. Let's test the roll_evaluate function.
Step3: Let's do some preliminary filtering to avoid problems | Python Code:
from predictor import evaluation as ev
from predictor.dummy_mean_predictor import DummyPredictor
predictor = DummyPredictor()
y_train_true_df, y_train_pred_df, y_val_true_df, y_val_pred_df = ev.run_single_val(x, y, ahead_days, predictor)
print(y_train_true_df.shape)
print(y_train_pred_df.shape)
print(y_val_true_df.shape)
print(y_val_pred_df.shape)
y_train_true_df.head()
y_train_pred_df.head()
y_val_true_df.head()
y_val_pred_df.head()
Explanation: Get the results of a single run
End of explanation
y_train_true_rs = ev.reshape_by_symbol(y_train_true_df)
print(y_train_true_rs.shape)
y_train_true_rs.head()
y_train_pred_rs = ev.reshape_by_symbol(y_train_pred_df)
print(y_train_pred_rs.shape)
y_train_pred_rs.head()
y_val_true_rs = ev.reshape_by_symbol(y_val_true_df)
print(y_val_true_rs.shape)
y_val_true_rs.head()
Explanation: Done. Let's test the reshape_by_symbol function
End of explanation
u = x.index.levels[0][0]
print(u)
fe.SPY_DF.sort_index().index.unique()
md = fe.SPY_DF.index.unique()
u in md
fe.add_market_days(u,6)
Explanation: So, the reshape_by_symbol function seems to work with run_single_val. It could be added to it. Let's test the roll_evaluate function.
End of explanation
# Getting the data
GOOD_DATA_RATIO = 0.99
data_df = pd.read_pickle('../../data/data_train_val_df.pkl')
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
data_df = pp.drop_irrelevant_symbols(data_df, GOOD_DATA_RATIO)
train_time = -1 # In real time days
base_days = 7 # In market days
step_days = 7 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
tic = time()
x, y = fe.generate_train_intervals(data_df,
train_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
print(data_df.shape)
data_df.head()
SAMPLES_GOOD_DATA_RATIO = 0.9
x_y_df = pd.concat([x, y], axis=1)
x_y_df = pp.drop_irrelevant_samples(x_y_df, SAMPLES_GOOD_DATA_RATIO)
x = x_y_df.iloc[:, :-1]
y = x_y_df.iloc[:, -1]
x = pp.fill_missing(x)
x_y_df.isnull().sum()
x.isnull().sum().sum()
y.isnull().sum()
x_reshaped = ev.reshape_by_symbol(x)
x_reshaped.head()
x_reshaped.isnull().sum().max()
x.shape
x_reshaped.shape
x_reshaped[x_reshaped.notnull()]
y_train_true_df, y_train_pred_df, y_val_true_df, y_val_pred_df = ev.run_single_val(x, y, ahead_days, predictor)
from sklearn.metrics import r2_score
r2_score(y_train_true_df, y_train_pred_df, multioutput='raw_values')
tickers = y_train_true_df.index.levels[1]
tickers
y_train_true_df.loc[(slice(None), 'AAPL'),:]
from sklearn.metrics import r2_score
r2_train_score = []
mre_train = []
for ticker in tickers:
y_true = y_train_true_df.loc[(slice(None), ticker),:]
y_pred = y_train_pred_df.loc[(slice(None), ticker),:]
r2_train_score.append(r2_score(y_true, y_pred))
mre_train.append(ev.mre(y_true, y_pred))
np.mean(r2_train_score)
np.mean(mre_train)
plt.plot(mre_train)
ev.get_metrics(y_train_true_df, y_train_pred_df)
train_days = 252
x_y_sorted = pd.concat([x, y], axis=1).sort_index()
start_date = x_y_sorted.index.levels[0][0]
end_date = fe.add_market_days(start_date, train_days)
start_date
end_date
start_date + ((end_date - start_date) / 2)
train_days = 252
step_eval_days = 30
r2, mre, y_val_true_df, y_val_pred_df, mean_dates = ev.roll_evaluate(x,
y,
train_days,
step_eval_days,
ahead_days,
predictor,
verbose=True)
print(r2.shape)
print(mre.shape)
print(y_val_true_df.shape)
print(y_val_pred_df.shape)
print(mean_dates.shape)
plt.plot(mean_dates, r2[:, 0], 'b', label='Mean r2 score')
plt.plot(mean_dates, r2[:, 0] + 2*r2[:, 1], 'r')
plt.plot(mean_dates, r2[:, 0] - 2*r2[:, 1], 'r')
plt.xlabel('Mean date of the training period')
plt.legend()
plt.grid()
plt.plot(mean_dates, mre[:, 0], 'b', label='Mean MRE')
plt.plot(mean_dates, mre[:, 0] + 2*mre[:, 1], 'r')
plt.plot(mean_dates, mre[:, 0] - 2*mre[:, 1], 'r')
plt.xlabel('Mean date of the training period')
plt.legend()
plt.grid()
y_val_true_df.head()
y_val_pred_df.head()
r2_scores, mre_scores, tickers = ev.get_metrics(y_val_true_df, y_val_pred_df)
eval_df = pd.DataFrame(np.array([r2_scores, mre_scores]).T, index=tickers, columns=['r2', 'mre'])
eval_df.head()
eval_df['mre'].plot()
eval_df['r2'].plot()
eval_df.sort_values(by='mre', ascending=False)
plt.scatter(eval_df['r2'], eval_df['mre'])
eval2_df = ev.get_metrics_df(y_val_true_df, y_val_pred_df)
eval2_df.head()
Explanation: Let's do some preliminary filtering to avoid problems
End of explanation |
9,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General Imports
!! IMPORTANT !!
If you did NOT install opengrid with pip,
make sure the path to the opengrid folder is added to your PYTHONPATH
Step1: Houseprint
Step2: A Houseprint object can be saved as a pickle. It loses its tmpo session however (connections cannot be pickled)
Step3: TMPO
The houseprint, sites, devices and sensors all have a get_data method. In order to get these working for the fluksosensors, the houseprint creates a tmpo session.
Step4: Lookup sites, devices, sensors based on key
These methods return a single object
Step5: Lookup sites, devices, sensors based on search criteria
These methods return a list with objects satisfying the criteria
Step6: Get Data
Step7: Site
Step8: Device
Step9: Sensor
Step10: Getting data for a selection of sensors | Python Code:
import os
import inspect
import sys
import pandas as pd
import charts
from opengrid.library import houseprint
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 16,8
Explanation: General Imports
!! IMPORTANT !!
If you did NOT install opengrid with pip,
make sure the path to the opengrid folder is added to your PYTHONPATH
End of explanation
hp = houseprint.Houseprint()
# for testing:
# hp = houseprint.Houseprint(spreadsheet='unit and integration test houseprint')
hp
hp.sites[:5]
hp.get_devices()[:4]
hp.get_sensors('water')[:3]
Explanation: Houseprint
End of explanation
hp.save('new_houseprint.pkl')
hp = houseprint.load_houseprint_from_file('new_houseprint.pkl')
Explanation: A Houseprint object can be saved as a pickle. It loses its tmpo session however (connections cannot be pickled)
End of explanation
hp.init_tmpo()
hp._tmpos.debug = False
hp.sync_tmpos()
Explanation: TMPO
The houseprint, sites, devices and sensors all have a get_data method. In order to get these working for the fluksosensors, the houseprint creates a tmpo session.
End of explanation
hp.find_site(1)
hp.find_device('FL03001556')
sensor = hp.find_sensor('d5a747b86224834f745f4c9775d70241')
print(sensor.site)
print(sensor.unit)
Explanation: Lookup sites, devices, sensors based on key
These methods return a single object
End of explanation
hp.search_sites(inhabitants=5)
hp.search_sensors(type='electricity', direction='Import')
Explanation: Lookup sites, devices, sensors based on search criteria
These methods return a list with objects satisfying the criteria
End of explanation
head = pd.Timestamp('20151102')
tail = pd.Timestamp('20151103')
df = hp.get_data(sensortype='water', head=head,tail=tail, diff=True, resample='min', unit='l/min')
charts.plot(df, stock=True, show='inline')
Explanation: Get Data
End of explanation
site = hp.find_site(1)
site
print(site.size)
print(site.inhabitants)
print(site.postcode)
print(site.construction_year)
print(site.k_level)
print(site.e_level)
print(site.epc_cert)
site.devices
site.get_sensors('electricity')
head = pd.Timestamp('20150617')
tail = pd.Timestamp('20150628')
df=site.get_data(sensortype='electricity', head=head,tail=tail, diff=True, unit='kW')
charts.plot(df, stock=True, show='inline')
Explanation: Site
End of explanation
device = hp.find_device('FL03001552')
device
device.key
device.get_sensors('gas')
head = pd.Timestamp('20151101')
tail = pd.Timestamp('20151104')
df = hp.get_data(sensortype='gas', head=head,tail=tail, diff=True, unit='kW')
charts.plot(df, stock=True, show='inline')
Explanation: Device
End of explanation
sensor = hp.find_sensor('53b1eb0479c83dee927fff10b0cb0fe6')
sensor
sensor.key
sensor.type
sensor.description
sensor.system
sensor.unit
head = pd.Timestamp('20150617')
tail = pd.Timestamp('20150618')
df=sensor.get_data(head,tail,diff=True, unit='W')
charts.plot(df, stock=True, show='inline')
Explanation: Sensor
End of explanation
sensors = hp.search_sensors(type='electricity', system='solar')
print(sensors)
df = hp.get_data(sensors=sensors, head=head, tail=tail, diff=True, unit='W')
charts.plot(df, stock=True, show='inline')
Explanation: Getting data for a selection of sensors
End of explanation |
9,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='mediumblue'> Lists
<font color='midnightblue'> Example
Step1: <font color='midnightblue'> Example
Step2: <font color='midnightblue'> Example
Step3: <font color='mediumblue'> Tuples
<font color='midnightblue'> Example
Step4: <font color='midnightblue'> Example
Step5: <font color='midnightblue'> Example
Step6: <font color='midnightblue'> Example
Step7: <font color='mediumblue'> Numpy Arrays
<font color='midnightblue'> Example
Step8: <font color='midnightblue'> Example
Step9: #### <font color='midnightblue'> Example
Step10: <font color='midnightblue'> Example
Step11: <font color='mediumblue'> Dictionaries
<font color='midnightblue'> Example
Step12: <font color='midnightblue'> Example
Step13: <font color='midnightblue'> Example
Step15: <font color='dodgerblue'> Functions
<font color='midnightblue'> Example
Step16: <font color='midnightblue'> Example
Step17: <font color='dodgerblue'> Plotting
<font color='mediumblue'> Matplotlib
<font color='midnightblue'> Example
Step18: <font color='midnightblue'> Example
Step19: <font color='dodgerblue'> Reading from and writing to files | Python Code:
list1 = [10, 12, 14, 16, 18]
print(list1[0]) # Index starts at 0
print(list1[-1]) # Last index at -1
Explanation: <font color='mediumblue'> Lists
<font color='midnightblue'> Example: Indexed
End of explanation
print(list1[0:3]) # Slicing: exclusive of end value
# i.e. get i=(0, 1, .. n-1)
print(list1[3:]) # "slice from i=3 to end"
Explanation: <font color='midnightblue'> Example: Slicable
End of explanation
list1.append(20)
print(list1)
list1.extend([22,24,26])
print(list1)
list1[3]='squirrel'
print(list1)
list1.remove('squirrel')
print(list1)
list1.insert(3,16)
print(list1)
Explanation: <font color='midnightblue'> Example: Mutable & Mixed Data Types
End of explanation
tuple1 = (10, 12, 14, 16, 18)
print(tuple1)
print(tuple1[0])
print(tuple1[1:3])
print(tuple1[3:])
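# Tuples are immutable, so the append below raises an AttributeError: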
tuple1.append(20)
print(tuple1)
Explanation: <font color='mediumblue'> Tuples
<font color='midnightblue'> Example: Immutable
End of explanation
%timeit tuple1=(10,12,14,16,18)
%timeit list1=[10,12,14,16,18]
#%timeit tuple1[3:]
#%timeit list1[3:]
Explanation: <font color='midnightblue'> Example: Tuples are faster
End of explanation
tuple2 = 'Lucy','Ryan'
a, b = tuple2
print('{} is OK, {} is amazing!'.format(a, b))
b, a = a, b
print('{} is OK, {} is amazing!'.format(a, b))
ages = [('Lucy', 25), ('Ryan', 24)]
for name, age in ages:
print('{} is {}.'.format(name, age))
Explanation: <font color='midnightblue'> Example: Unpacking Tuples
End of explanation
list2 = [list1, tuple1]
list1 = [1, 2 , 3]
print(list2)
Explanation: <font color='midnightblue'> Example: Names are references, so rebinding list1 does not change list2
End of explanation
# To use numpy, we first have to import the package
import numpy as np
# Can convert a list to an array:
array1=np.array(list1)
print(array1)
# Can make an evenly spaced array between 2 values using linspace or arange.
# linspace takes the number of points to use as an argument and returns floats by default
print(np.linspace(0, 10, 11))
# arange takes the spacing as an argument and returns the type given as the spacing, e.g.
print(np.arange(0, 11, 1.))
print(np.arange(0, 11, 1))
Explanation: <font color='mediumblue'> Numpy Arrays
<font color='midnightblue'> Example: How to use
End of explanation
print('The average of array1 is', np.average(array1))
print('The sum of array1 is', np.sum(array1))
# Apply functions
print(np.exp(array1))
print(np.reciprocal(array1))
array2=np.array([float(array1[i]) for i in range(len(array1))])
a=np.reciprocal(array2)
print(np.reciprocal([float(array1[i]) for i in range(len(array1))]))
angles=np.array([0, np.pi/2., np.pi, 3*np.pi/4.])
np.sin(angles)
Explanation: <font color='midnightblue'> Example: Useful functions
End of explanation
M1 = np.array([[2,3],[6,3]])
M2 = np.array([[5,6],[2,9]])
print('M1:')
print(M1)
print('M2:')
print(M2)
M3 = M1 * M2 # Element-wise multiplication
print(M3, '\n')
M4 = np.dot(M1, M2) # Matrix multiplication
print(M4)
Explanation: #### <font color='midnightblue'> Example: 2d Arrays
End of explanation
premier_league_data = np.loadtxt('example.csv')
print(premier_league_data)
print(type(premier_league_data[0][0]))
Explanation: <font color='midnightblue'> Example: Creating an array from a file
End of explanation
price_table = {'apples': 50, 'pears': 60, 'bananas': 20}
print(price_table)
fruit = [('apples', 50), ('bananas', 20), ('pears', 60)]
price_table1 = dict(fruit)
print(price_table==price_table1)
# NOTE: the order in which you define a dictionary doesn't matter: lookups use a hash table,
# not positional indexing like lists and tuples
# To get a value out, you use square brackets but instead of an index, you use the key:
akey = 'apples'
print("The price of {} is {}p.".format(akey, price_table[akey]))
# Trying to use an index wouldn't work:
print(price_table[0])
price_table.keys()
# Example usage:
shopping_list = [('apples', 50), ('bananas', 20)]
total = 0
for item, quantity in shopping_list:
price = price_table[item]
print('Adding {} {} at {}p each.'.format(quantity, item, price))
total += price * quantity
print('Total shopping cost is £%.2f.' %(total/100.))
Explanation: <font color='mediumblue'> Dictionaries
<font color='midnightblue'> Example: How to use
End of explanation
price_table['kiwis']=30
print(price_table)
del price_table['bananas']
print(price_table)
price_table['apples']=25
print(price_table)
Explanation: <font color='midnightblue'> Example: Mutable
End of explanation
# Iterating over the dictionary will iterate over its keys
for key in price_table:
print("{} cost {}p".format(key, price_table[key]))
# Or use the items method:
for key, val in price_table.items():
print("{} cost {}p".format(key, val))
Explanation: <font color='midnightblue'> Example: Iterating
End of explanation
def square_root(x):
    """Useful docstring: Calculates and returns square root of x"""
i = x ** 0.5
return i
x = 10
y = square_root(x)
print('The square root of {} is {}'.format(x, y))
# We can set a default value to the function
def square_root(x=20):
i = x ** 0.5
return i
print(square_root())
# Loops, functions and appending
mylist = []
for i in range(1,5):
mylist.append(square_root(i))
print(mylist)
Explanation: <font color='dodgerblue'> Functions
<font color='midnightblue'> Example: Simple function
End of explanation
def update_integer(i):
    # attempt to update i (integers are immutable; the caller's variable is unaffected)
i += 1
def update_list_end(arglist):
arglist[-1] = 50 # Lists are mutable: updates args directly!
a = 1
update_integer(a)
print(a)
mylist = [0, 1, 2, 3, 4]
update_list_end(mylist)
print(mylist)
Explanation: <font color='midnightblue'> Example: Arguments and mutability
End of explanation
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi)
y = np.sin(x)
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.plot(x, y,'o-')
ax.margins(0.1)
ax.set_title('2D plot')
ax.set_xlabel('$x$')
ax.set_ylabel(r'$sin(x)$')
ax.plot()
Explanation: <font color='dodgerblue'> Plotting
<font color='mediumblue'> Matplotlib
<font color='midnightblue'> Example: Simple Plot
End of explanation
xtick_values = np.linspace(0, 2*np.pi, 5)
xtick_labels = ['$0$', r'$\frac{\pi}{2}$', r'$\pi$', r'$\frac{3\pi}{2}$',
r'$2\pi$']
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111); ax.plot(x, y,'-o')
ax.set_title('2D plot')
ax.margins(0.1)
ax.set_xlabel('$x$'); ax.set_ylabel(r'$sin(x)$')
ax.set_xticks(xtick_values)
ax.set_xticklabels(xtick_labels, fontsize=25);
Explanation: <font color='midnightblue'> Example: Labels, ticks and other appenditories
End of explanation
f1 = open('textfile.txt', 'r+')
print(f1.read())
f1.close()
with open('textfile.txt', 'r+') as f1:
print(f1.readline())
print(f1.readline())
with open('textfile.txt', 'r+') as f1:
print(f1.readlines())
with open('textfile.txt', 'r+') as f1:
print(list(f1))
with open('textfile.txt', 'r+') as f1:
for line in f1:
print(line)
with open('textfile.txt', 'r+') as f1:
f1.write('Hello')
print(f1.readline())
f1.write('Second Hello')
print(f1.read())
with open('textfile.txt', 'r+') as f1:
print(f1.read())
with open('textfile.txt', 'r+') as f1:
lines = f1.readlines()
del lines[-1]
lines[2] = 'I have changed the third line\n'
with open('textfile.txt', 'w') as f1:
f1.writelines(lines)
f1.seek(0)
with open('textfile.txt') as f1:
print(f1.read())
Explanation: <font color='dodgerblue'> Reading from and writing to files
End of explanation |
9,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Disaggregation - Hart Active data only
Customary imports
Step1: show versions for any diagnostics
Step2: Load dataset
Step3: Use 4 working days for training
Step4: Training
We'll now do the training from the aggregate data. The algorithm segments the time series data into steady and transient states. Thus, we'll first figure out the transient and the steady states. Next, we'll try and pair the on and the off transitions based on their proximity in time and value.
Step5: Set two days for Disaggregation period of interest
Inspect the data during a quiet period when we were on holiday; it should show only autonomous
appliances such as the fridge, freezer and water heating, plus any standby devices that were not unplugged.
Step6: Disaggregate using Hart (Active data only) | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from os.path import join
from pylab import rcParams
import matplotlib.pyplot as plt
rcParams['figure.figsize'] = (13, 6)
plt.style.use('ggplot')
#import nilmtk
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.disaggregate.hart_85 import Hart85
from nilmtk.disaggregate import CombinatorialOptimisation
from nilmtk.utils import print_dict, show_versions
from nilmtk.metrics import f1_score
#import seaborn as sns
#sns.set_palette("Set3", n_colors=12)
import warnings
warnings.filterwarnings("ignore") #suppress warnings, comment out if warnings required
Explanation: Disaggregation - Hart Active data only
Customary imports
End of explanation
#uncomment if required
#show_versions()
Explanation: show versions for any diagnostics
End of explanation
data_dir = '/Users/GJWood/nilm_gjw_data/HDF5/'
gjw = DataSet(join(data_dir, 'nilm_gjw_data.hdf5'))
print('loaded ' + str(len(gjw.buildings)) + ' buildings')
building_number=1
Explanation: Load dataset
End of explanation
gjw.set_window('2015-06-01 00:00:00', '2015-06-05 00:00:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
mains.plot()
#plt.show()
house = elec['fridge'] #only one meter so any selection will do
df = house.load().next() #load the first chunk of data into a dataframe
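# (on Python 3 use next(house.load()) instead of the Python 2 style .next())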
#df.info() #check that the data is what we want (optional)
#note the data has two columns and a time index
Explanation: Use 4 working days for training
End of explanation
df.ix['2015-06-01 10:00:00+01:00':'2015-06-05 12:00:00+01:00'].plot()# select a time range and plot it
#plt.show()
h = Hart85()
h.train(mains,cols=[('power','active')])
h.steady_states.head()
h.steady_states.tail()
h.centroids
h.model
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
#plt.show()
h.pair_df.head()
pair_shape_df = pd.DataFrame(columns=['Height','Duration'])
pair_shape_df['Height']= (h.pair_df['T1 Active'].abs()+h.pair_df['T2 Active'].abs())/2
pair_shape_df['Duration']= pd.to_timedelta(h.pair_df['T2 Time']-h.pair_df['T1 Time'],unit='s').dt.seconds
pair_shape_df.head()
fig = plt.figure(figsize=(13,6))
ax = fig.add_subplot(1, 1, 1)
ax.set_yscale('log')
ax.scatter(pair_shape_df['Height'],pair_shape_df['Duration'])
plt.title("Paired event - Signature Space")
plt.ylabel("Log Duration (sec)")
plt.xlabel("Transition (W)");
Explanation: Training
We'll now do the training from the aggregate data. The algorithm segments the time series data into steady and transient states. Thus, we'll first figure out the transient and the steady states. Next, we'll try and pair the on and the off transitions based on their proximity in time and value.
End of explanation
gjw.set_window('2015-06-08 00:00:00','2015-06-10 00:00:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
mains.plot()
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
plt.hist(h.steady_states['active average'],250)
plt.ylabel("Frequency")
plt.xlabel("Power (w)")
plt.title("Active Average distribution");
Explanation: Set two days for Disaggregation period of interest
Inspect the data during a quiet period when we were on holiday; it should show only autonomous
appliances such as the fridge, freezer and water heating, plus any standby devices that were not unplugged.
End of explanation
disag_filename = join(data_dir, 'disag_gjw_hart.hdf5')
output = HDFDataStore(disag_filename, 'w')
h.disaggregate(mains,output,sample_period=1)
output.close()
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
disag_hart = DataSet(disag_filename)
disag_hart
disag_hart_elec = disag_hart.buildings[building_number].elec
disag_hart_elec
disag_hart_elec.mains()
h.centroids
h.model
h.steady_states
from nilmtk.metrics import f1_score
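# NOTE: test_elec is assumed to be a MeterGroup of submetered ground-truth data for the same window; it is not defined in this notebook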
f1_hart= f1_score(disag_hart_elec, test_elec)
f1_hart.index = disag_hart_elec.get_labels(f1_hart.index)
f1_hart.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("Hart");
Explanation: Disaggregate using Hart (Active data only)
End of explanation |
9,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Arbitrary number of channels parametrization
This notebook uses the new param.image parametrization that takes any number of channels.
Step2: Testing params
The following params are introduced to test the new param.image parametrization by going back to three channels for the existing modelzoo models
Step3: Arbitrary channels parametrization
param.arbitrary_channels calls param.image and then reduces the arbitrary number of channels to 3 for visualizing with modelzoo models.
Step4: Grayscale parametrization
param.grayscale_image creates param.image with a single channel and then tiles them 3 times for visualizing with modelzoo models.
Step5: Testing different objectives
Different objectives applied to both parametrizations. | Python Code:
import numpy as np
import tensorflow as tf
import lucid.modelzoo.vision_models as models
from lucid.misc.io import show
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
model = models.InceptionV1()
model.load_graphdef()
Explanation: Arbitrary number of channels parametrization
This notebook uses the new param.image parametrization that takes any number of channels.
End of explanation
def arbitrary_channels_to_rgb(*args, channels=None, **kwargs):
channels = channels or 10
full_im = param.image(*args, channels=channels, **kwargs)
r = tf.reduce_mean(full_im[...,:channels//3]**2, axis=-1)
g = tf.reduce_mean(full_im[...,channels//3:2*channels//3]**2, axis=-1)
b = tf.reduce_mean(full_im[...,2*channels//3:]**2, axis=-1)
return tf.stack([r,g,b], axis=-1)
def grayscale_image_to_rgb(*args, **kwargs):
    """Takes same arguments as image"""
output = param.image(*args, channels=1, **kwargs)
return tf.tile(output, (1,1,1,3))
Explanation: Testing params
The following params are introduced to test the new param.image parametrization by going back to three channels for the existing modelzoo models
End of explanation
_ = render.render_vis(model, "mixed4a_pre_relu:476", param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
Explanation: Arbitrary channels parametrization
param.arbitrary_channels calls param.image and then reduces the arbitrary number of channels to 3 for visualizing with modelzoo models.
End of explanation
_ = render.render_vis(model, "mixed4a_pre_relu:476", param_f=lambda:grayscale_image_to_rgb(128))
Explanation: Grayscale parametrization
param.grayscale_image creates param.image with a single channel and then tiles them 3 times for visualizing with modelzoo models.
End of explanation
_ = render.render_vis(model, objectives.deepdream("mixed4a_pre_relu"), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
_ = render.render_vis(model, objectives.channel("mixed4a_pre_relu", 360), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
_ = render.render_vis(model, objectives.neuron("mixed4a_pre_relu", 476), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
_ = render.render_vis(model, objectives.deepdream("mixed4a_pre_relu"), param_f=lambda:grayscale_image_to_rgb(128))
_ = render.render_vis(model, objectives.channel("mixed4a_pre_relu", 360), param_f=lambda:grayscale_image_to_rgb(128))
_ = render.render_vis(model, objectives.neuron("mixed4a_pre_relu", 476), param_f=lambda:grayscale_image_to_rgb(128))
Explanation: Testing different objectives
Different objectives applied to both parametrizations.
End of explanation |
9,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table class="ee-notebook-buttons" align="left"><td>
<a target="_blank" href="http
Step1: Authenticate and initialize
Run the ee.Authenticate function to authenticate your access to Earth Engine servers and ee.Initialize to initialize it. Upon running the following cell you'll be asked to grant Earth Engine access to your Google account. Follow the instructions printed to the cell.
Step2: Test the API
Test the API by printing the elevation of Mount Everest.
Step3: Map visualization
ee.Image objects can be displayed to notebook output cells. The following two
examples demonstrate displaying a static image and an interactive map.
Static image
The IPython.display module contains the Image function, which can display
the results of a URL representing an image generated from a call to the Earth
Engine getThumbUrl function. The following cell will display a thumbnail
of the global elevation model.
Step4: Interactive map
The folium
library can be used to display ee.Image objects on an interactive
Leaflet map. Folium has no default
method for handling tiles from Earth Engine, so one must be defined
and added to the folium.Map module before use.
The following cell provides an example of adding a method for handling Earth Engine
tiles and using it to display an elevation model to a Leaflet map.
Step5: Chart visualization
Some Earth Engine functions produce tabular data that can be plotted by
data visualization packages such as matplotlib. The following example
demonstrates the display of tabular data from Earth Engine as a scatter
plot. See Charting in Colaboratory
for more information. | Python Code:
import ee
Explanation: <table class="ee-notebook-buttons" align="left"><td>
<a target="_blank" href="http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/ee-api-colab-setup.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/ee-api-colab-setup.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td></table>
Earth Engine Python API Colab Setup
This notebook demonstrates how to setup the Earth Engine Python API in Colab and provides several examples of how to print and visualize Earth Engine processed data.
Import API and get credentials
The Earth Engine API is installed by default in Google Colaboratory so requires only importing and authenticating. These steps must be completed for each new Colab session, if you restart your Colab kernel, or if your Colab virtual machine is recycled due to inactivity.
Import the API
Run the following cell to import the API into your session.
End of explanation
# Trigger the authentication flow.
ee.Authenticate()
# Initialize the library.
ee.Initialize()
Explanation: Authenticate and initialize
Run the ee.Authenticate function to authenticate your access to Earth Engine servers and ee.Initialize to initialize it. Upon running the following cell you'll be asked to grant Earth Engine access to your Google account. Follow the instructions printed to the cell.
End of explanation
# Print the elevation of Mount Everest.
dem = ee.Image('USGS/SRTMGL1_003')
xy = ee.Geometry.Point([86.9250, 27.9881])
elev = dem.sample(xy, 30).first().get('elevation').getInfo()
print('Mount Everest elevation (m):', elev)
Explanation: Test the API
Test the API by printing the elevation of Mount Everest.
End of explanation
# Import the Image function from the IPython.display module.
from IPython.display import Image
# Display a thumbnail of global elevation.
Image(url = dem.updateMask(dem.gt(0))
.getThumbURL({'min': 0, 'max': 4000, 'dimensions': 512,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}))
Explanation: Map visualization
ee.Image objects can be displayed to notebook output cells. The following two
examples demonstrate displaying a static image and an interactive map.
Static image
The IPython.display module contains the Image function, which can display
the results of a URL representing an image generated from a call to the Earth
Engine getThumbUrl function. The following cell will display a thumbnail
of the global elevation model.
End of explanation
# Import the Folium library.
import folium
# Define a method for displaying Earth Engine image tiles to folium map.
def add_ee_layer(self, ee_image_object, vis_params, name):
    map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
    folium.raster_layers.TileLayer(
        tiles = map_id_dict['tile_fetcher'].url_format,
        attr = 'Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
        name = name,
        overlay = True,
        control = True
    ).add_to(self)
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
# Set visualization parameters.
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}
# Create a folium map object.
my_map = folium.Map(location=[20, 0], zoom_start=3)
# Add the elevation model to the map object.
my_map.add_ee_layer(dem.updateMask(dem.gt(0)), vis_params, 'DEM')
# Add a layer control panel to the map.
my_map.add_child(folium.LayerControl())
# Display the map.
display(my_map)
Explanation: Interactive map
The folium
library can be used to display ee.Image objects on an interactive
Leaflet map. Folium has no default
method for handling tiles from Earth Engine, so one must be defined
and added to the folium.Map module before use.
The following cell provides an example of adding a method for handling Earth Engine
tiles and using it to display an elevation model to a Leaflet map.
End of explanation
# Import the matplotlib.pyplot module.
import matplotlib.pyplot as plt
# Fetch a Landsat image.
img = ee.Image('LANDSAT/LT05/C01/T1_SR/LT05_034033_20000913')
# Select Red and NIR bands, scale them, and sample 500 points.
samp_fc = img.select(['B3','B4']).divide(10000).sample(scale=30, numPixels=500)
# Arrange the sample as a list of lists.
samp_dict = samp_fc.reduceColumns(ee.Reducer.toList().repeat(2), ['B3', 'B4'])
samp_list = ee.List(samp_dict.get('list'))
# Save server-side ee.List as a client-side Python list.
samp_data = samp_list.getInfo()
# Display a scatter plot of Red-NIR sample pairs using matplotlib.
plt.scatter(samp_data[0], samp_data[1], alpha=0.2)
plt.xlabel('Red', fontsize=12)
plt.ylabel('NIR', fontsize=12)
plt.show()
Explanation: Chart visualization
Some Earth Engine functions produce tabular data that can be plotted by
data visualization packages such as matplotlib. The following example
demonstrates the display of tabular data from Earth Engine as a scatter
plot. See Charting in Colaboratory
for more information.
End of explanation |
9,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating MNE's data structures from scratch
MNE provides mechanisms for creating various core objects directly from
NumPy arrays.
Step1: Creating
Step2: You can also supply more extensive metadata
Step3: <div class="alert alert-info"><h4>Note</h4><p>When assigning new values to the fields of an
Step4: Creating
Step5: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape(n_events, 3) where the first column is the sample
number (time) of the event, the second column indicates the value from which
the transition is made from (only used when the new value is bigger than the
old one), and the third column is the new event value.
Step6: More information about the event codes
Step7: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
Step8: Now we can create the
Step9: Creating | Python Code:
import mne
import numpy as np
Explanation: Creating MNE's data structures from scratch
MNE provides mechanisms for creating various core objects directly from
NumPy arrays.
End of explanation
# Create some dummy metadata
n_channels = 32
sampling_rate = 200
info = mne.create_info(n_channels, sampling_rate)
print(info)
Explanation: Creating :class:~mne.Info objects
<div class="alert alert-info"><h4>Note</h4><p>for full documentation on the :class:`~mne.Info` object, see
`tut-info-class`. See also `ex-array-classes`.</p></div>
Normally, :class:mne.Info objects are created by the various
data import functions <ch_convert>.
However, if you wish to create one from scratch, you can use the
:func:mne.create_info function to initialize the minimally required
fields. Further fields can be assigned later as one would with a regular
dictionary.
The following creates the absolute minimum info structure:
End of explanation
# Names for each channel
channel_names = ['MEG1', 'MEG2', 'Cz', 'Pz', 'EOG']
# The type (mag, grad, eeg, eog, misc, ...) of each channel
channel_types = ['grad', 'grad', 'eeg', 'eeg', 'eog']
# The sampling rate of the recording
sfreq = 1000 # in Hertz
# The EEG channels use the standard naming strategy.
# By supplying the 'montage' parameter, approximate locations
# will be added for them
montage = 'standard_1005'
# Initialize required fields
info = mne.create_info(channel_names, sfreq, channel_types, montage)
# Add some more information
info['description'] = 'My custom dataset'
info['bads'] = ['Pz'] # Names of bad channels
print(info)
Explanation: You can also supply more extensive metadata:
End of explanation
# Generate some random data
data = np.random.randn(5, 1000)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=100
)
custom_raw = mne.io.RawArray(data, info)
print(custom_raw)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>When assigning new values to the fields of an
:class:`mne.Info` object, it is important that the
fields are consistent:
- The length of the channel information field `chs` must be
`nchan`.
- The length of the `ch_names` field must be `nchan`.
- The `ch_names` field should be consistent with the `name` field
of the channel information contained in `chs`.</p></div>
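A quick way to sanity-check these constraints on an existing :class:`mne.Info` object is a small assertion (a sketch only; it assumes MNE's usual dictionary keys `nchan`, `ch_names` and `chs`):
python
# Sketch: verify the consistency constraints described in the note above
assert info['nchan'] == len(info['ch_names']) == len(info['chs'])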
Creating :class:~mne.io.Raw objects
To create a :class:mne.io.Raw object from scratch, you can use the
:class:mne.io.RawArray class, which implements raw data that is backed by a
numpy array. The correct units for the data are:
V: eeg, eog, seeg, emg, ecg, bio, ecog
T: mag
T/m: grad
M: hbo, hbr
Am: dipole
AU: misc
The :class:mne.io.RawArray constructor simply takes the data matrix and
:class:mne.Info object:
End of explanation
# Generate some random data: 10 epochs, 5 channels, 2 seconds per epoch
sfreq = 100
data = np.random.randn(10, 5, sfreq * 2)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=sfreq
)
Explanation: Creating :class:~mne.Epochs objects
To create an :class:mne.Epochs object from scratch, you can use the
:class:mne.EpochsArray class, which uses a numpy array directly without
wrapping a raw object. The array must be of shape(n_epochs, n_chans,
n_times). The proper units of measure are listed above.
End of explanation
# Create an event matrix: 10 events with alternating event codes
events = np.array([
[0, 0, 1],
[1, 0, 2],
[2, 0, 1],
[3, 0, 2],
[4, 0, 1],
[5, 0, 2],
[6, 0, 1],
[7, 0, 2],
[8, 0, 1],
[9, 0, 2],
])
Explanation: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape(n_events, 3) where the first column is the sample
number (time) of the event, the second column indicates the value from which
the transition is made from (only used when the new value is bigger than the
old one), and the third column is the new event value.
End of explanation
event_id = dict(smiling=1, frowning=2)
Explanation: More information about the event codes: subject was either smiling or
frowning
End of explanation
# Trials were cut from -0.1 to 1.0 seconds
tmin = -0.1
Explanation: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
End of explanation
custom_epochs = mne.EpochsArray(data, info, events, tmin, event_id)
print(custom_epochs)
# We can treat the epochs object as we would any other
_ = custom_epochs['smiling'].average().plot(time_unit='s')
Explanation: Now we can create the :class:mne.EpochsArray object
End of explanation
# The averaged data
data_evoked = data.mean(0)
# The number of epochs that were averaged
nave = data.shape[0]
# A comment to describe to evoked (usually the condition name)
comment = "Smiley faces"
# Create the Evoked object
evoked_array = mne.EvokedArray(data_evoked, info, tmin,
comment=comment, nave=nave)
print(evoked_array)
_ = evoked_array.plot(time_unit='s')
Explanation: Creating :class:~mne.Evoked Objects
If you already have data that is collapsed across trials, you may also
directly create an evoked array. Its constructor accepts an array of
shape(n_chans, n_times) in addition to some bookkeeping parameters.
The proper units of measure for the data are listed above.
End of explanation |
9,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Neural Network for Image Classification
Step1: 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform better!
Problem Statement
Step2: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
Step3: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width
Step5: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models
Step6: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
Step7: Expected Output
Step8: Expected Output
Step10: Expected Output
Step11: You will now train the model as a 5-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
Step12: Expected Output
Step13: <table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.985645933014
</td>
</tr>
</table>
Step14: Expected Output
Step15: A few type of images the model tends to do poorly on include | Python Code:
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Deep Neural Network for Image Classification: Application
When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!
You will use the functions you'd implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.
After this assignment you will be able to:
- Build and apply a deep neural network to supervised learning.
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- PIL and scipy are used here to test your model with your own picture at the end.
- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
End of explanation
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
Explanation: 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform better!
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat and non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
Let's get more familiar with the dataset. Load the data by running the cell below.
End of explanation
# Example of a picture
index = 10
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
Explanation: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
End of explanation
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
Explanation: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width:450px;height:300px;">
<caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>
End of explanation
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- If set to True, this will print the cost every 100 iterations

    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """
    np.random.seed(1)
    grads = {}
    costs = []       # to keep track of the cost
    m = X.shape[1]   # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
    ### END CODE HERE ###

    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
        ### START CODE HERE ### (≈ 2 lines of code)
        A1, cache1 = linear_activation_forward(X, W1, b1, "relu")
        A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid")
        ### END CODE HERE ###

        # Compute cost
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(A2, Y)
        ### END CODE HERE ###

        # Initializing backward propagation
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        ### START CODE HERE ### (≈ 2 lines of code)
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")
        ### END CODE HERE ###

        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters.
        ### START CODE HERE ### (approx. 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Print the cost every 100 training examples
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
Explanation: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network
You will then compare the performance of these models, and also try out different values for $L$.
Let's look at the two architectures.
3.1 - 2-layer neural network
<img src="images/2layerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption>
<u>Detailed Architecture of figure 2</u>:
- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.
- You then repeat the same process.
- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.
3.2 - L-layer deep neural network
It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:
<img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption>
<u>Detailed Architecture of figure 3</u>:
- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat.
3.3 - General methodology
As usual you will follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
a. Forward propagation
b. Compute cost function
c. Backward propagation
d. Update parameters (using parameters, and grads from backprop)
4. Use trained parameters to predict labels
Let's now implement those two models!
4 - Two-layer neural network
Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters(n_x, n_h, n_y):
...
return parameters
def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
def compute_cost(AL, Y):
...
return cost
def linear_activation_backward(dA, cache, activation):
...
return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
Explanation: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
predictions_train = predict(train_x, train_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.6930497356599888 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.6464320953428849 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.048554785628770206 </td>
</tr>
</table>
Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
End of explanation
predictions_test = predict(test_x, test_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 1.0 </td>
</tr>
</table>
End of explanation
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 5-layer model
# GRADED FUNCTION: n_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):   # lr was 0.009
    """
    Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []   # keep track of cost

    # Parameters initialization.
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###

        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###

        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###

        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Print the cost every 100 training examples
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 0.72 </td>
</tr>
</table>
Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
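For example, the effect can be checked directly by retraining for fewer iterations and comparing the resulting test accuracy (a sketch; the `_1500` variable names are just for illustration):
python
# Hypothetical check: the same 2-layer model, stopped earlier
parameters_1500 = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 1500, print_cost = False)
predictions_test_1500 = predict(test_x, test_y, parameters_1500)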
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
5 - L-layer Neural Network
Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters_deep(layer_dims):
...
return parameters
def L_model_forward(X, parameters):
...
return AL, caches
def compute_cost(AL, Y):
...
return cost
def L_model_backward(AL, Y, caches):
...
return grads
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
Explanation: You will now train the model as a 5-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
pred_train = predict(train_x, train_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.771749 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.672053 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.092878 </td>
</tr>
</table>
End of explanation
pred_test = predict(test_x, test_y, parameters)
Explanation: <table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.985645933014
</td>
</tr>
</table>
End of explanation
print_mislabeled_images(classes, test_x, test_y, pred_test)
Explanation: Expected Output:
<table>
<tr>
<td> **Test Accuracy**</td>
<td> 0.8 </td>
</tr>
</table>
Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.
This is good performance for this task. Nice job!
Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course).
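As a small, purely illustrative taste of that, the functions above could already be reused for a crude sweep over learning rates (values and run length chosen arbitrarily):
python
# Hypothetical sketch: compare a few learning rates with shorter training runs
for lr in (0.0025, 0.0075, 0.0225):
    params_lr = L_layer_model(train_x, train_y, layers_dims, learning_rate = lr, num_iterations = 1500, print_cost = False)
    predict(test_x, test_y, params_lr)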
6) Results Analysis
First, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
End of explanation
## START CODE HERE ##
my_image = "nico01.jpg" # change this to the name of your image file
my_label_y = [0] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Explanation: A few types of images the model tends to do poorly on include:
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color and species
- Camera Angle
- Brightness of the picture
- Scale variation (cat is very large or small in image)
7) Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation |
9,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing IMDB Data in Keras - Solution
Step1: 1. Loading the data
This dataset comes preloaded with Keras, so one simple command will get us training and testing data. There is a parameter for how many words we want to look at. We've set it at 1000, but feel free to experiment.
Step2: 2. Examining the data
Notice that the data has been already pre-processed, where all the words have numbers, and the reviews come in as a vector with the words that the review contains. For example, if the word 'the' is the first one in our dictionary, and a review contains the word 'the', then there is a 1 in the corresponding vector.
The output comes as a vector of 1's and 0's, where 1 is a positive sentiment for the review, and 0 is negative.
Step3: 3. One-hot encoding the output
Here, we'll turn the input vectors into (0,1)-vectors. For example, if the pre-processed vector contains the number 14, then in the processed vector, the 14th entry will be 1.
Step4: And we'll one-hot encode the output.
Step5: 4. Building the model architecture
Build a model here using Sequential. Feel free to experiment with different layers and sizes! Also, experiment with adding dropout to reduce overfitting.
Step6: 5. Training the model
Run the model here. Experiment with different batch_size, and number of epochs!
Step7: 6. Evaluating the model
This will give you the accuracy of the model. Can you get something over 85%? | Python Code:
# Imports
import numpy as np
import keras
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
Explanation: Analyzing IMDB Data in Keras - Solution
End of explanation
# Loading the data (it's preloaded in Keras)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)
print(x_train.shape)
print(x_test.shape)
Explanation: 1. Loading the data
This dataset comes preloaded with Keras, so one simple command will get us training and testing data. There is a parameter for how many words we want to look at. We've set it at 1000, but feel free to experiment.
End of explanation
print(x_train[0])
print(y_train[0])
Explanation: 2. Examining the data
Notice that the data has been already pre-processed, where all the words have numbers, and the reviews come in as a vector with the words that the review contains. For example, if the word 'the' is the first one in our dictionary, and a review contains the word 'the', then there is a 1 in the corresponding vector.
The output comes as a vector of 1's and 0's, where 1 is a positive sentiment for the review, and 0 is negative.
End of explanation
# Turning the output into vector mode, each of length 1000
tokenizer = Tokenizer(num_words=1000)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print(x_train.shape)
print(x_test.shape)
Explanation: 3. One-hot encoding the output
Here, we'll turn the input vectors into (0,1)-vectors. For example, if the pre-processed vector contains the number 14, then in the processed vector, the 14th entry will be 1.
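In other words (a toy illustration only, not part of the original notebook):
python
import numpy as np
review = [1, 14, 22]        # hypothetical pre-processed review
vector = np.zeros(1000)
vector[review] = 1          # 1s at the word indices that occur, 0s elsewhere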
End of explanation
# One-hot encoding the output
num_classes = 2
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
print(y_test.shape)
Explanation: And we'll one-hot encode the output.
End of explanation
# Building the model architecture with one hidden layer of 512 units
model = Sequential()
model.add(Dense(512, activation='relu', input_dim=1000))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
# Compiling the model using categorical_crossentropy loss, and rmsprop optimizer.
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
Explanation: 4. Building the model architecture
Build a model here using Sequential. Feel free to experiment with different layers and sizes! Also, experiment with adding dropout to reduce overfitting.
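One possible variation to try (purely illustrative; the layer sizes and optimizer here are arbitrary choices, not the solution above):
python
model2 = Sequential()
model2.add(Dense(256, activation='relu', input_dim=1000))
model2.add(Dropout(0.5))
model2.add(Dense(64, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(num_classes, activation='softmax'))
model2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])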
End of explanation
# Running and evaluating the model
hist = model.fit(x_train, y_train,
batch_size=32,
epochs=10,
validation_data=(x_test, y_test),
verbose=2)
Explanation: 5. Training the model
Run the model here. Experiment with different batch_size, and number of epochs!
End of explanation
score = model.evaluate(x_test, y_test, verbose=0)
print("accuracy: ", score[1])
Explanation: 6. Evaluating the model
This will give you the accuracy of the model. Can you get something over 85%?
End of explanation |
9,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating Diffusion on Surfaces
The simulation scripts described in this chapter are available at the STEPS_Example repository.
This chapter introduces how to model and simulate surface diffusion systems. In STEPS, surface diffusion means
movement of molecules between triangle elements within a patch and is used to model e.g. mobility of surface receptors
within membranes.
In practice, simulating diffusion in surfaces is analogous to simulating diffusion in volumes as introduced in Simulating Diffusion in Volumes,
replacing volume system with surface system and compartment with patch. This section will demonstrate the simple case of free diffusion from
a source on a large circular surface, which is somewhat analogous to free diffusion in a sphere described in Simulating Diffusion in Volumes. Therefore the
code in this chapter has obvious similarities to that in Simulating Diffusion in Volumes and we will not dwell on familiar concepts, but will focus on the key differences instead.
After running the spatial stochastic simulation in familiar solver 'Tetexact', we are then introduced to spatial deterministic solver 'TetODE' (steps.solver.TetODE).
Analytical solution
In the deterministic limit, C, the number of diffusing molecules per unit area, at a distance
r from a point source on an infinite plane surface
at time t is (Crank, J. (1975) The Mathematics of Diffusion. Oxford
Step1: Then we set some constants. This time we will run 100 iterations of the simulation, running each iteration to 21 seconds recording data every 1 second,
each time injecting 1000 molecules which will diffuse with a diffusion constant of 0.08 square microns per second
Step2: Model specification
Now we move on to constructing the biochemical model which, as in Simulating Diffusion in Volumes, we will organise into a small function that returns the
important steps.model.Model container object. First we'll look at the complete function
Step3: Which looks remarkably similar to the model specification function in Simulating Diffusion in Volumes, but with one important difference. If we look closely at these lines we see that this time the diffusion rule belongs to a surface system. This is the only difference between creating a diffusion rule for a volume (compartment)
or a surface (patch)
Step4: Then we create a compartment (we’ll see why this is necessary soon) comprising all mesh tetrahedrons, which is simply a sequence of indices from 0 up to the total number of tetrahedrons in the mesh
Step5: Next we go on to creating the surface 'patch' object for this mesh-based simulation, a steps.geom.TmPatch object. This object is comprised of a
collection of triangles, which can be at any
location within the tetrahedral mesh. It's important to realise that triangles appear internally in tetrahedral meshes, not only on the exterior surface, and
internal patches comprising a collection of interior triangles (or even a combination of interior and exterior triangles) are supported in STEPS. This is
essential to allow multiple compartment modelling by one tetrahedral mesh. For example, if a mesh represented a section of a cell separated into cytosol and
intracellular organelle compartments, then the collection of triangles on the exterior surface might represent the cell membrane, with collections of interior
triangles representing organelle membranes. However, in this simple example we wish to create one steps.geom.TmPatch object consisting of the triangles
that make up one of the circular faces in the mesh (we choose the positive z side of this mesh which is centred on 0,0,0), and therefore consists of part of the
exterior surface. The problem is then to find all exterior surface triangles, then keep only those triangles where all 3 vertices of the triangle have a positive
z value. The functions we need to use are the useful function steps.geom.Tetmesh.getSurfTris, which returns all exterior surface triangles on the
mesh, steps.geom.Tetmesh.getTri, which returns the indices of the 3 vertices for any triangle, and steps.geom.Tetmesh.getVertex, which
returns the x,y,z coordinates of a vertex in a Python tuple
Step6: Now that we have sorted the triangles and have a list patch_tris containing the collection of circular face triangles we are interested in, we can create the
patch for our spatial simulation
Step7: The required 'inner' compartment concept requires a little further explanation. In STEPS only tetrahedral meshes are supported (and not, for example, purely
triangular meshes) and therefore all patches are necessarily connected to volumes. However, a patch may be connected to only one compartment, as is the
case here, or two compartments, as would be the case for any internal patches. Following on from convention introduced in well-mixed systems (see Surface-Volume Reactions (Example
Step8: Returning to our rather more simple model, next we create two empty NumPy arrays for which we will store, for each triangle in the patch, the radial distance to the centre of the patch and the area of the triangle
Step9: To find the radial distances we need a reference central point. Because the mesh is not perfectly regular the triangle that encompasses coordinates x=0.0, y=0.0 will not
necessarily be centred exactly on x=0.0 y=0.0, so we will instead define the central point as the barycenter of that triangle. To find this triangle we take advantage of
the fact that the triangle is connected to a tetrahedron which is easier to find by coordinates (using function steps.geom.Tetmesh.findTetByPoint), and then
we need to find the triangle 'neighbours' of that tetrahedron (using function steps.geom.Tetmesh.getTetTriNeighb)
Step10: Then we record for each triangle by position in our patch_tris list, the distance of the barycentre of the triangle to the centre
point (in microns) using function steps.geom.Tetmesh.getTriBarycenter, and the triangle area (in square microns) using function steps.geom.Tetmesh.getTriArea.
We will record this information in the arrays trirads and triareas respectively
Step11: Finally, we return the important information found within this function.
Our complete geometry function is
Step12: Simulation with Tetexact
We're now ready to run the simulation and collect data. First we call the two functions gen_model and gen_geom and store references to the important objects
they return, the steps.model.Model and steps.geom.Tetmesh container objects, and the patch triangles information (being consistent with their
names in the gen_geom function to minimise confusion)
Step13: We then need to create our random number generator object, as usual for stochastic simulations
Step14: And then we can create a Tetexact spatial stochastic solver object
Step15: Finally before running a simulation we, similar to previous chapters, create arrays to help run the simulation and to record data
Step16: And we are ready to run the simulation. This will be very similar to the simulation loop in Simulating Diffusion in Volumes, but instead of injecting and recording molecules from
tetrahedrons, we inject and record from patch triangles with solver methods steps.solver.Tetexact.setTriCount and steps.solver.Tetexact.getTriCount.
We will inject all molecules in the central triangle (to approximate a point source) and record molecules from each triangle in the patch. We'll use the triareas data
to record, in the res results array, molecule number per square micron. The complete simulation loop is
Step17: To look at the mean molecule density over all iterations for ease of comparison to the analytical solution we use the numpy.mean function as in
previous chapters
Step18: Plotting simulation output
As we are quite familiar with plotting STEPS output now we won't dwell on the details. As in Simulating Diffusion in Volumes we wish to also plot the analytical solution alongside our
STEPS output for comparison, where the solution is of a similar form to that for unbounded volume diffusion. We create two functions
to take care of all our plotting needs. The first function plots the results
Step19: And the second function (which is called within the plotres function) plots the analytical solution, this time as number of molecules per square micron
Step20: And that is everything that we need to run our surface diffusion simulation, plot the results and compare to the analytical solution. After running the simulation simply
by importing this module, we can then plot data at any of the "time points" with a function call such as
Step21: Here the mean surface density of diffusing species A in individual triangles in STEPS (black dots) is plotted with the analytical solution from the above equation (red). There is a small discrepancy near the centre of the surface due to the injection of molecules into a finite area in STEPS, whereas a point source is assumed for the analytical solution. STEPS output shows some noise due to the effects of stochastic surface diffusion.
Simulation with TetODE
Another option for spatial simulation is to use the deterministic solver 'TetODE' (steps.solver.TetODE). TetODE shares many similarities with Tetexact
in terms of model and geometry construction operating on the same tetrahedral meshes, but solutions are deterministic. TetODE uses
CVODE (http
Step22: Nothing needs to change for the model and geometry descriptions, and we can go on to creating the steps.solver.TetODE solver
object. As a deterministic solver, TetODE does not require a random number generator so that does not need to be created and can be omitted
from the object construction step.
Finally, the reset solver function is not available for TetODE (in part because only one simulation iteration need be run per model), so we remove the call to sim.reset()
from our simulation loop. These changes are all we minimally need to do to switch this simulation to deterministic mode using solver TetODE. However,
there are two important additions to this solver, the functions steps.solver.TetODE.setTolerances and steps.solver.TetODE.setMaxNumSteps.
To understand what these functions do requires a little background of how CVODE works. Although there will only be a brief explanation here, thorough descriptions
are available in CVODE documentation (http | Python Code:
import steps.model as smodel
import steps.geom as stetmesh
import steps.utilities.meshio as smeshio
import steps.rng as srng
import steps.solver as solvmod
import pylab
import math
Explanation: Simulating Diffusion on Surfaces
The simulation scripts described in this chapter are available at the STEPS_Example repository.
This chapter introduces how to model and simulate surface diffusion systems. In STEPS, surface diffusion means
movement of molecules between triangle elements within a patch and is used to model e.g. mobility of surface receptors
within membranes.
In practice, simulating diffusion in surfaces is analogous to simulating diffusion in volumes as introduced in Simulating Diffusion in Volumes,
replacing volume system with surface system and compartment with patch. This section will demonstrate the simple case of free diffusion from
a source on a large circular surface, which is somewhat analogous to free diffusion in a sphere described in Simulating Diffusion in Volumes. Therefore the
code in this chapter has obvious similarities to that in Simulating Diffusion in Volumes and we will not dwell on familiar concepts, but will focus on the key differences instead.
After running the spatial stochastic simulation in familiar solver 'Tetexact', we are then introduced to spatial deterministic solver 'TetODE' (steps.solver.TetODE).
Analytical solution
In the deterministic limit, C, the number of diffusing molecules per unit area, at a distance
r from a point source on an infinite plane surface
at time t is (Crank, J. (1975) The Mathematics of Diffusion. Oxford: Clarendon Press):
\begin{equation}
C(r,t)=\frac{N}{4\pi Dt}\exp\left(\frac{-r^{2}}{4Dt}\right)
\end{equation}
where N is the total number of injected molecules and D is the diffusion constant (in units $m^{\text{2}}/s$ if length units are metres and time units seconds).
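For reference, this expression is easy to evaluate directly (a small sketch; the function name and default argument values are illustrative only, chosen to match the constants used below):
python
import math

def analytical_C(r, t, N=1000, D=0.08e-12):
    # molecules per square metre at radial distance r (m) from the source at time t (s)
    return (N / (4.0 * math.pi * D * t)) * math.exp(-(r * r) / (4.0 * D * t))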
Modelling solution
Organisation of code
As for previous models we will create a Python script to run the surface diffusion model. Since we're now familiar with the concept of
building Python scripts to run STEPS models we will no longer use the Python prompt syntax.
First we import STEPS modules and other modules pylab and math:
End of explanation
# Number of iterations; plotting dt; sim endtime:
NITER = 100
# The data collection time increment (s)
DT = 1.0
# The simulation endtime (s)
INT = 21.0
# Number of molecules injected in centre
NINJECT = 1000
# The diffusion constant for our diffusing species (m^2/s)
DCST = 0.08e-12
Explanation: Then we set some constants. This time we will run 100 iterations of the simulation, running each iteration to 21 seconds recording data every 1 second,
each time injecting 1000 molecules which will diffuse with a diffusion constant of 0.08 square microns per second:
End of explanation
def gen_model():
    mdl = smodel.Model()
    A = smodel.Spec('A', mdl)
    ssys = smodel.Surfsys('ssys', mdl)
    diff_A = smodel.Diff('diffA', ssys, A, DCST)
    return mdl
Explanation: Model specification
Now we move on to constructing the biochemical model which, as in Simulating Diffusion in Volumes, we will organise into a small function that returns the
important steps.model.Model container object. First we'll look at the complete function:
End of explanation
mesh = smeshio.loadMesh('meshes/coin_10r_1h_13861')[0]
Explanation: Which looks remarkably similar to the model specification function in Simulating Diffusion in Volumes, but with one important difference. If we look closely at these lines we see that this time the diffusion rule belongs to a surface system. This is the only difference between creating a diffusion rule for a volume (compartment)
or a surface (patch): if the diffusion rule belongs to a volume system (as in Simulating Diffusion in Volumes) the diffusion rule will determine how the specified molecular
species diffuse in any compartments to which that volume system is added, and if the diffusion rule belongs to a surface system (steps.model.Surfsys ),
as in this example, it specifies how the species will diffuse in any patch to which that surface system is added.
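For contrast, the volume-diffusion form from the previous chapter would attach the rule to a volume system instead (a sketch only; 'vsys' is an assumed steps.model.Volsys and is not part of this model):
python
vsys = smodel.Volsys('vsys', mdl)
diff_A_vol = smodel.Diff('diffA_vol', vsys, A, DCST)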
There is only one steps.model.Diff object that takes care of both eventualities (volume or surface diffusion), adapting behaviour depending on whether
it belongs to a volume system or surface system. Any given species may of course diffuse in both volumes and surfaces (by different diffusion coefficients) and
therefore may appear in both volume and surface diffusion rules in the same model.
Since this is the same steps.model.Diff object as introduced in Simulating Diffusion in Volumes its construction is similar and the same general behaviour applies.
To reiterate, the arguments to the constructor are, in order: the usual identifier string,
a reference to the parent surface system, a reference to the molecular species to which this diffusion
rule applies, the diffusion constant (which is an optional parameter to the object constructor) given in s.i. units ($m^{2}/s$). This will become the
value for this diffusion rule wherever it appears in the model. Function steps.model.Diff.setDcst is another option for setting the diffusion constant.
Geometry specification
The next step, as usual, is to create the geometry for the simulation which, as in the previous chapter Simulating Diffusion in Volumes, will require a tetrahedral mesh
because this is a diffusion model. This time we will use a flat disk-shaped mesh (of radius 10 microns) and run the diffusion on one of the circular faces.
Please see previous chapter Simulating Diffusion in Volumes for a more detailed discussion of tetrahedral mesh generation and import. Functions that have already
been described in that chapter will not be described here in detail.
The first step, within a function to group all geometry creation code, is to import the tetrahedral mesh shown in the above figure using
function steps.utilities.meshio.loadMesh, which returns a tuple with a steps.geom.Tetmesh object as the 0th element.
End of explanation
ntets = mesh.countTets()
comp = stetmesh.TmComp('cyto', mesh, range(ntets))
Explanation: Then we create a compartment (we’ll see why this is necessary soon) comprising all mesh tetrahedrons, which is simply a sequence of indices from 0 up to the total number of tetrahedrons in the mesh:
End of explanation
# Find the indices of the exterior surface triangles
alltris = mesh.getSurfTris()
# Sort patch triangles as those of positive z
patch_tris = []
for t in alltris:
    vert0, vert1, vert2 = mesh.getTri(t)
    if (mesh.getVertex(vert0)[2] > 0.0 \
            and mesh.getVertex(vert1)[2] > 0.0 \
            and mesh.getVertex(vert2)[2] > 0.0):
        patch_tris.append(t)
Explanation: Next we go on to creating the surface 'patch' object for this mesh-based simulation, a steps.geom.TmPatch object. This object is comprised of a
collection of triangles, which can be at any
location within the tetrahedral mesh. It's important to realise that triangles appear internally in tetrahedral meshes, not only on the exterior surface, and
internal patches comprising a collection of interior triangles (or even a combination of interior and exterior triangles) are supported in STEPS. This is
essential to allow multiple compartment modelling by one tetrahedral mesh. For example, if a mesh represented a section of a cell separated into cytosol and
intracellular organelle compartments, then the collection of triangles on the exterior surface might represent the cell membrane, with collections of interior
triangles representing organelle membranes. However, in this simple example we wish to create one steps.geom.TmPatch object consisting of the triangles
that make up one of the circular faces in the mesh (we choose the positive z side of this mesh which is centred on 0,0,0), and therefore consists of part of the
exterior surface. The problem is then to find all exterior surface triangles, then keep only those triangles where all 3 vertices of the triangle have a positive
z value. The functions we need to use are the useful function steps.geom.Tetmesh.getSurfTris, which returns all exterior surface triangles on the
mesh, steps.geom.Tetmesh.getTri, which returns the indices of the 3 vertices for any triangle, and steps.geom.Tetmesh.getVertex, which
returns the x,y,z coordinates of a vertex in a Python tuple
End of explanation
patch = stetmesh.TmPatch('patch', mesh, patch_tris, icomp = comp)
patch.addSurfsys('ssys')
Explanation: Now that we have sorted the triangles and have a list patch_tris containing the collection of circular face triangles we are interested in, we can create the
patch for our spatial simulation: a steps.geom.TmPatch object. The required arguments to the constructor are, in order: a string identifier, a
reference to the steps.geom.Tetmesh container object, a sequence of the triangles that comprise this patch, and an 'inner' compartment.
End of explanation
# ER_membrane = stetmesh.TmPatch('ER_membrane', mesh, Er_memb_tris, icomp = endoplasmic_reticulum, ocomp = cytosol)
Explanation: The required 'inner' compartment concept requires a little further explanation. In STEPS only tetrahedral meshes are supported (and not, for example, purely
triangular meshes) and therefore all patches are necessarily connected to volumes. However, a patch may be connected to only one compartment, as is the
case here, or two compartments, as would be the case for any internal patches. Following on from the convention introduced in well-mixed systems (see Surface-Volume Reactions (Example: IP3 Model)),
since a distinction is necessary between the two (possible) compartments
connected to a patch they are arbitrarily labelled as 'inner' and 'outer'. If the patch is connected to only one compartment that compartment must
be labelled as the 'inner' compartment to the patch since icomp is a required argument to the steps.geom.TmPatch constructor. If the patch is connected to two compartments
one must be labelled the 'inner' compartment (constructor argument icomp) and one the 'outer' compartment (constructor argument ocomp) and the object construction might look something like
End of explanation
# Find out how many triangles are in the patch
patch_tris_n = len(patch_tris)
# Create array to store radial distances
trirads = pylab.zeros(patch_tris_n)
# Create array to store triangle areas
triareas = pylab.zeros(patch_tris_n)
Explanation: Returning to our rather more simple model, next we create two empty NumPy arrays for which we will store, for each triangle in the patch, the radial distance to the centre of the patch and the area of the triangle:
End of explanation
# Find the central triangle
# First find the tetrahedron connected to the central triangle
ctetidx = mesh.findTetByPoint([0.0, 0.0, 0.5e-6])
# Find this tetrahedron's neighbours
ctet_trineighbs = mesh.getTetTriNeighb(ctetidx)
# Find which of these (4) neighbours is in the patch
ctri_idx=-1
for t in ctet_trineighbs:
if t in patch_tris:
ctri_idx = t
Explanation: To find the radial distances we need a reference central point. Because the mesh is not perfectly regular the triangle that encompasses coordinates x=0.0, y=0.0 will not
necessarily be centred exactly on x=0.0 y=0.0, so we will instead define the central point as the barycenter of that triangle. To find this triangle we take advantage of
the fact that the triangle is connected to a tetrahedron which is easier to find by coordinates (using function steps.geom.Tetmesh.findTetByPoint), and then
we need to find the triangle 'neighbours' of that tetrahedron (using function steps.geom.Tetmesh.getTetTriNeighb): the triangle neighbour of the tetrahedron encompassing
coordinate x=0.0, y=0.0, z=0.5 (microns) that also belongs to the patch is the triangle that we want.
End of explanation
# Now find the distance of the centre of each tri to the central tri
cbaryc = mesh.getTriBarycenter(ctri_idx)
for i in range(patch_tris_n):
baryc = mesh.getTriBarycenter(patch_tris[i])
r2 = math.pow((baryc[0]-cbaryc[0]),2) + \
math.pow((baryc[1]-cbaryc[1]),2) + \
math.pow((baryc[2]-cbaryc[2]),2)
r = math.sqrt(r2)
# Convert to microns
trirads[i] = r*1.0e6
triareas[i] = mesh.getTriArea(patch_tris[i])*1.0e12
Explanation: Then we record for each triangle by position in our patch_tris list, the distance of the barycentre of the triangle to the centre
point (in microns) using function steps.geom.Tetmesh.getTriBarycenter, and the triangle area (in square microns) using function steps.geom.Tetmesh.getTriArea.
We will record this information in the arrays trirads and triareas respectively:
End of explanation
def gen_geom():
mesh = smeshio.loadMesh('meshes/coin_10r_1h_13861')[0]
ntets = mesh.countTets()
comp = stetmesh.TmComp('cyto', mesh, range(ntets))
alltris = mesh.getSurfTris()
# Sort patch triangles as those of positive z
patch_tris = []
for t in alltris:
vert0, vert1, vert2 = mesh.getTri(t)
if (mesh.getVertex(vert0)[2] > 0.0 and mesh.getVertex(vert1)[2] > 0.0 and mesh.getVertex(vert2)[2] > 0.0):
patch_tris.append(t)
# Create the patch
patch = stetmesh.TmPatch('patch', mesh, patch_tris, icomp = comp)
patch.addSurfsys('ssys')
patch_tris_n = len(patch_tris)
trirads = pylab.zeros(patch_tris_n)
triareas = pylab.zeros(patch_tris_n)
# Find the central tri
ctetidx = mesh.findTetByPoint([0.0, 0.0, 0.5e-6])
ctet_trineighbs = mesh.getTetTriNeighb(ctetidx)
ctri_idx=-1
for t in ctet_trineighbs:
if t in patch_tris:
ctri_idx = t
# Now find the distance of the centre of each tri to the central tri
cbaryc = mesh.getTriBarycenter(ctri_idx)
for i in range(patch_tris_n):
baryc = mesh.getTriBarycenter(patch_tris[i])
r2 = math.pow((baryc[0]-cbaryc[0]),2) + math.pow((baryc[1]-cbaryc[1]),2) + math.pow((baryc[2]-cbaryc[2]),2)
r = math.sqrt(r2)
# Convert to microns and square microns
trirads[i] = r*1.0e6
triareas[i] = mesh.getTriArea(patch_tris[i])*1e12
return mesh, patch_tris, patch_tris_n, ctri_idx, trirads, triareas
Explanation: Finally, we return the important information found within this function.
Our complete geometry function is:
End of explanation
model = gen_model()
tmgeom, patch_tris, patch_tris_n, ctri_idx, trirads, triareas = gen_geom()
Explanation: Simulation with Tetexact
We're now ready to run the simulation and collect data. First we call the two functions gen_model and gen_geom and store references to the important objects
they return, the steps.model.Model and steps.geom.Tetmesh container objects, and the patch triangles information (being consistent with their
names in the gen_geom function to minimise confusion):
End of explanation
rng = srng.create('mt19937', 512)
rng.initialize(234)
Explanation: We then need to create our random number generator object, as usual for stochastic simulations:
End of explanation
# Create solver object
sim = solvmod.Tetexact(model, tmgeom, rng)
Explanation: And then we can create a Tetexact spatial stochastic solver object:
End of explanation
tpnts = pylab.arange(0.0, INT, DT)
ntpnts = tpnts.shape[0]
# Create the data structure: iterations x time points x tri samples
res = pylab.zeros((NITER, ntpnts, patch_tris_n))
Explanation: Finally before running a simulation we, similar to previous chapters, create arrays to help run the simulation and to record data:
End of explanation
from __future__ import print_function # for backward compatibility with Py2
# Run NITER number of iterations:
for j in range(NITER):
if not j%10: print("Running iteration ", j)
sim.reset()
sim.setTriCount(ctri_idx, 'A', NINJECT)
for i in range(ntpnts):
sim.run(tpnts[i])
for k in range(patch_tris_n):
res[j, i, k] = sim.getTriCount(patch_tris[k], 'A')/triareas[k]
Explanation: And we are ready to run the simulation. This will be very similar to the simulation loop in Simulating Diffusion in Volumes, but instead of injecting and recording molecules from
tetrahedrons, we inject and record from patch triangles with solver methods steps.solver.Tetexact.setTriCount and steps.solver.Tetexact.getTriCount.
We will inject all molecules in the central triangle (to approximate a point source) and record molecules from each triangle in the patch. We'll use the triareas data
to record, in the res results array, molecule number per square micron. The complete simulation loop is:
End of explanation
res_mean = pylab.mean(res, axis = 0)
Explanation: To look at the mean molecule density over all iterations for ease of comparison to the analytical solution we use the numpy.mean function as in
previous chapters:
End of explanation
def plotres(res_mean, tidx):
if (tidx >= INT/DT):
print("Time index is out of range.")
return
pylab.scatter(trirads, res_mean[tidx], s=10)
pylab.xlabel('Radial distance ($\mu$m)')
pylab.ylabel('Concentration (/$\mu$m$^2$)')
t = tpnts[tidx]
pylab.title('Unbounded surface diffusion. Time: ' + str(t) + 's')
plotanlyt(t)
pylab.xlim(0,10)
pylab.ylim(0)
pylab.show()
Explanation: Plotting simulation output
As we are quite familiar with plotting STEPS output now we won't dwell on the details. As in Simulating Diffusion in Volumes we wish to also plot the analytical solution alongside our
STEPS output for comparison, where the solution is of a similar form to that for unbounded volume diffusion. We create two functions
to take care of all our plotting needs. The first function plots the results:
End of explanation
def plotanlyt(t):
segs = 100
anlytconc = pylab.zeros((segs))
radialds = pylab.zeros((segs))
maxrad = 0.0
for i in trirads:
if (i > maxrad): maxrad = i
maxrad *= 1e-6
intervals = maxrad/segs
rad = 0.0
for i in range((segs)):
anlytconc[i]=(NINJECT/(4*math.pi*DCST*t))* \
(math.exp((-1.0*(rad*rad))/(4*DCST*t)))*1e-12
radialds[i] = rad*1e6
rad += intervals
pylab.plot(radialds, anlytconc, color = 'red')
Explanation: And the second function (which is called within the plotres function) plots the analytical solution, this time as number of molecules per square micron:
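For reference, the analytical expression evaluated in plotanlyt (diffusion from a point source on an unbounded plane, with the final factor of $10^{-12}$ in the code converting from molecules per $m^{2}$ to molecules per $\mu m^{2}$) is
$$ C(r,t) = \frac{N_{\rm inject}}{4\pi D t}\,\exp\!\left(\frac{-r^{2}}{4Dt}\right), $$
where $D$ is the diffusion constant DCST and $N_{\rm inject}$ is the number of molecules injected at the centre.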
End of explanation
pylab.figure(figsize=(10,7))
plotres(res_mean, 20)
Explanation: And that is everything that we need to run our surface diffusion simulation, plot the results and compare to the analytical solution. After running the simulation simply
by importing this module, we can then plot data at any of the "time points" with a function call such as:
End of explanation
NITER = 1
Explanation: Here the mean surface density of the diffusing species A in individual triangles in STEPS (black dots) is plotted together with the analytical solution from the above equation (red). There is a small discrepancy near the centre of the surface due to the injection of molecules into a finite area in STEPS, whereas a point source is assumed for the analytical solution. The STEPS output also shows some noise due to the effects of stochastic surface diffusion.
Simulation with TetODE
Another option for spatial simulation is to use the deterministic solver 'TetODE' (steps.solver.TetODE). TetODE shares many similarities with Tetexact
in terms of model and geometry construction operating on the same tetrahedral meshes, but solutions are deterministic. TetODE uses
CVODE (http://computation.llnl.gov/casc/sundials/description/description.html) for solutions. Although solutions are therefore
very different between solver Tetexact and TetODE, in terms of simulation construction there are only a few implementation differences.
Therefore, we can use almost the exact same code as already introduced to run the deterministic unbounded
surface diffusion model, with a few changes highlighted below.
Firstly, since TetODE solutions are deterministic, we do not need to run more than one iteration so we set
End of explanation
# Create solver object
sim = solvmod.TetODE(model, tmgeom)
sim.setTolerances(1e-3, 1e-4)
sim.setMaxNumSteps(50)
tpnts = pylab.arange(0.0, INT, DT)
ntpnts = tpnts.shape[0]
# Create the data structure: iterations x time points x tri samples
res = pylab.zeros((NITER, ntpnts, patch_tris_n))
# Run NITER number of iterations:
for j in range(NITER):
sim.setTriCount(ctri_idx, 'A', NINJECT)
for i in range(ntpnts):
sim.run(tpnts[i])
for k in range(patch_tris_n):
res[j, i, k] = sim.getTriCount(patch_tris[k], 'A')/ \
triareas[k]
res_mean = pylab.mean(res, axis = 0)
pylab.figure(figsize=(10,7))
plotres(res_mean, 20)
Explanation: Nothing needs to change for the model and geometry descriptions, and we can go on to creating the steps.solver.TetODE solver
object. As a deterministic solver, TetODE does not require a random number generator so that does not need to be created and can be omitted
from the object construction step.
Finally, the reset solver function is not available for TetODE (in part because only one simulation iteration need be run per model), so we remove the call to sim.reset()
from our simulation loop. These changes are all we minimally need to make to switch this simulation to deterministic mode using solver TetODE. However,
there are two important additions to this solver, the functions steps.solver.TetODE.setTolerances and steps.solver.TetODE.setMaxNumSteps.
To understand what these functions do requires a little background of how CVODE works. Although there will only be a brief explanation here, thorough descriptions
are available in CVODE documentation (http://computation.llnl.gov/casc/sundials/documentation/cv_guide.pdf).
Solving STEPS models in CVODE
requires supplying information of all the variables in a STEPS simulation at any time as a state vector to the CVODE solver. The variables in STEPS
are the molecular species, which have unique populations in individual mesh elements (tetrahedrons and triangles) meaning that
the state vector can be rather large (number_volume_specs × number_tetrahedrons + number_surface_specs × number_triangles). STEPS
must also supply a function that describes the rate of change of each of these variables with time depending on other variables in the system. CVODE then finds
approximate solutions (here STEPS chooses the recommended Adams-Moulton formulas with functional iteration) when the system
advances in time.
To do this it takes a number of 'steps', each time estimating the local error and comparing to tolerance conditions: if the
test fails, the step size is reduced, and this is repeated until the tolerance conditions are met. This means that there is a tradeoff between accuracy
and simulation speed: with a high tolerance, step sizes will be large and few steps need to be taken to advance the simulation by some amount of time,
though accuracy will be low; with a low tolerance, step sizes will be small, so a large number of steps will be taken, although
accuracy will be high. Therefore, the tolerance is an important consideration both for accuracy and efficiency.
STEPS users can control the
tolerances with function steps.solver.TetODE.setTolerances. Two different types of tolerance are specified: relative tolerance and absolute
tolerance, and in STEPS both are scalars. Relative tolerance controls relative errors so that e.g. $10^{-3}$ means that errors are controlled to
0.1% (and it is not recommended to go any higher than that). Absolute tolerances can be useful when any components of the vector approach very small numbers
where relative error control becomes meaningless. The absolute values in the internal state vectors within TetODE are the (fractional) number
of molecules per tetrahedron or triangle, so if a user specifies an absolute tolerance of $10^{-3}$ it means that populations
within tetrahedrons and triangles will be accurate to within 1/1000th of a molecule! In TetODE only one value each for absolute tolerance and relative
tolerance can be specified, and will be applied to all species in all locations in the system. The default value for both absolute tolerance and relative
tolerance is $10^{-3}$ .
We set tolerances with a call to function steps.solver.TetODE.setTolerances: the first function argument is
absolute tolerance, the second is relative tolerance. In this example we set an absolute tolerance of $10^{-3}$ and relative tolerance of $10^{-4}$ with sim.setTolerances(1e-3, 1e-4).
Closely related to this is the function steps.solver.TetODE.setMaxNumSteps, which is a kind of safety device to stop the simulation if advancement is
unacceptably slow. If tolerances are too low, or simulation time step too large, then a large number of steps in CVODE may be taken
before it reaches the requested output time. For example, if we are at time 0 seconds and we ask CVODE to advance the simulation 1 second with the function call sim.run(1),
and CVODE finds acceptable accuracy with a step of approximately 10ms, then roughly 100 steps will be taken until it reaches 1 second, which is generally OK. If,
however, tolerances have been set too low and acceptable accuracy comes at a step of 1µs in CVODE, then a million steps would have to be taken to get to
1 second, and here we probably wouldn't want to go ahead with that situation, adjusting tolerances to give a larger CVODE step instead. And so we use function
steps.solver.TetODE.setMaxNumSteps to tell CVODE what is the maximum number of steps we will accept each time it tries to advance the simulation,
exiting if it ever reaches the upper limit. The default value is rather large at 10000. We can set a lower limit for this simulation, e.g. 50, with sim.setMaxNumSteps(50).
If we tried to set a maximum number of steps to 10 instead, with these tolerance levels this simulation would fail indicating that somewhere
between 10 and 100 steps are taken each time CVODE advances this simulation (i.e. each time a call is made to steps.solver.TetODE.run).
We can now run the simulation and plot the results.
End of explanation |
9,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boundary value problem
Problem
We are going to solve a second-order ordinary differential equation with boundary conditions of different types
$$
y'' + p(x)y' + q(x)y = f(x),\\
\alpha y'(a) + \beta y(a) = y_a,\\
\gamma y'(b) + \delta y(b) = y_b,\\
a \leq x \leq b
$$
Step1: Example
Let's solve the next problem
Step2: Shooting method
Step3: Example
Let's solve the next problem | Python Code:
def thomas(a, b, c, d):
n = len(d)
A = np.empty_like(d)
B = np.empty_like(d)
A[0] = -c[0]/b[0]
B[0] = d[0]/b[0]
for i in range(1, n):
A[i] = -c[i] / (b[i] + a[i]*A[i - 1])
B[i] = (d[i] - a[i]*B[i - 1])/(b[i] + a[i]*A[i - 1])
y = np.empty_like(d)
y[n - 1] = B[n - 1]
for i in range(n - 2, -1, -1):
y[i] = A[i]*y[i + 1] + B[i]
return y
def fin_diff(x, p, q, f, alpha, beta, gamma, delta, ya, yb):
h = x[1] - x[0]
n = len(x)
d = np.empty_like(x)
a = np.empty_like(x)
b = np.empty_like(x)
c = np.empty_like(x)
a[0] = 0
b[0] = -2*alpha/h + alpha*h*q[0] + beta*(2 - p[0]*h)
c[0] = 2*alpha/h
d[0] = ya*(2 - p[0]*h) + alpha*h*f[0]
for i in range(1, n - 1):
a[i] = 1/(h*h) - p[i]/(2*h)
b[i] = -2/(h*h) + q[i]
c[i] = 1/(h*h) + p[i]/(2*h)
d[i] = f[i]
a[n - 1] = -2*gamma/h
b[n - 1] = 2*gamma/h - gamma*h*q[n - 1] + delta*(2 + p[n - 1]*h)
c[n - 1] = 0
d[n - 1] = yb*(2 + p[n - 1]*h) - gamma*h*f[n - 1]
return thomas(a, b, c, d)
Explanation: Boundary value problem
Problem
We are going to solve a second-order ordinary differential equation with boundary conditions of different types
$$
y'' + p(x)y' + q(x)y = f(x),\\
\alpha y'(a) + \beta y(a) = y_a,\\
\gamma y'(b) + \delta y(b) = y_b,\\
a \leq x \leq b
$$
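For reference, the discretization implemented in fin_diff above follows from the central differences
$$
y''(x_i) \approx \frac{y_{i+1} - 2y_i + y_{i-1}}{h^2}, \qquad y'(x_i) \approx \frac{y_{i+1} - y_{i-1}}{2h},
$$
which turn the ODE at each interior node into the tridiagonal relation
$$
\left(\frac{1}{h^2} - \frac{p_i}{2h}\right) y_{i-1} + \left(-\frac{2}{h^2} + q_i\right) y_i + \left(\frac{1}{h^2} + \frac{p_i}{2h}\right) y_{i+1} = f_i .
$$
These are exactly the coefficients $a_i, b_i, c_i, d_i$ assembled in fin_diff and solved with the Thomas algorithm; the first and last rows fold in the boundary conditions.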
End of explanation
p = np.vectorize(lambda x: -1/x)
q = np.vectorize(lambda x: 0)
f = np.vectorize(lambda x: x*x)
alpha, beta = 1, 1
gamma, delta = 0, 1
a, b = 1, 2
ya, yb = 1, 1
x = np.linspace(a, b, 5)
y = fin_diff(x, p(x), q(x), f(x), alpha, beta, gamma, delta, ya, yb)
y_ans = x**4/8 - 11/8*x**2 + 9/2
plt.figure(figsize=(15, 10))
plt.plot(x, y, x, y_ans)
plt.legend(['fin_diff', 'original'], loc='best')
plt.xlabel('x')
plt.ylabel('y(x)')
plt.title('Finite difference method')
plt.show()
Explanation: Example
Let's solve the next problem:
$$
y'' = \frac{1}{x}y' + x^2,\\
y'(1) + y(1) = 1,\\
y(2) = 1
$$
having $[a, b] = [1, 2], h = 0.25$.
Correct solution is
$$y(x) = \frac{1}{8}x^4 - \frac{11}{8}x^2 + \frac{9}{2}$$
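As an optional sanity check (assuming sympy is available in the environment), the stated closed-form solution can be verified symbolically against the ODE and both boundary conditions:
# Optional symbolic check of the closed-form solution (assumes sympy is installed).
import sympy as sym
xs = sym.symbols('x', positive=True)
ys = xs**4/8 - sym.Rational(11, 8)*xs**2 + sym.Rational(9, 2)
print(sym.simplify(sym.diff(ys, xs, 2) - sym.diff(ys, xs)/xs - xs**2))  # ODE residual -> 0
print(sym.diff(ys, xs).subs(xs, 1) + ys.subs(xs, 1))                    # left BC  -> 1
print(ys.subs(xs, 2))                                                   # right BC -> 1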
End of explanation
def fmap(fs, x):
return np.array([f(*x) for f in fs])
def runge_kutta4_system(fs, x, y0):
h = x[1] - x[0]
y = np.ndarray((len(x), len(y0)))
y[0] = y0
for i in range(1, len(x)):
        # classical 4th-order Runge-Kutta stages
        k1 = h * fmap(fs, [x[i - 1], *y[i - 1]])
        k2 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k1/2)])
        k3 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k2/2)])
        k4 = h * fmap(fs, [x[i - 1] + h, *(y[i - 1] + k3)])
        y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4)/6
return y
def shooting(x, p, q, f, ya, yb):
dy = lambda x, y, z: z
dz = lambda x, y, z: -p(x)*z - q(x)*y + f(x)
g = lambda alpha: runge_kutta4_system([dy, dz], x, [ya, np.tan(alpha)])[-1][0] - yb
alpha = scipy.optimize.bisect(g, 0, np.pi/2)
return runge_kutta4_system([dy, dz], x, [ya, np.tan(alpha)])[:, 0]
Explanation: Shooting method
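In outline (matching the shooting code above): the boundary value problem with $y(a)=y_a$, $y(b)=y_b$ is rewritten as the first-order system $y'=z$, $z' = -p(x)z - q(x)y + f(x)$ and integrated as an initial value problem with a trial slope $y'(a)=\tan\alpha$; the scalar residual
$$
g(\alpha) = y(b;\alpha) - y_b
$$
is then driven to zero by bisection on $\alpha \in (0, \pi/2)$.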
End of explanation
p = np.vectorize(lambda x: -1/x)
q = np.vectorize(lambda x: 0)
f = np.vectorize(lambda x: x*x)
a, b = 1, 2
ya, yb = 0, 1
x = np.linspace(a, b, 5)
y = shooting(x, p, q, f, ya, yb)
y_diff = fin_diff(x, p(x), q(x), f(x), 0, 1, 0, 1, ya, yb)
y_ans = (3*x**4 - 7*x**2 + 4)/24
plt.figure(figsize=(15, 10))
plt.plot(x, y, x, y_diff, x, y_ans)
plt.legend(['shooting', 'fin_diff', 'answer'], loc='best')
plt.xlabel('x')
plt.ylabel('y(x)')
plt.title('Shooting method vs Finite difference method')
plt.show()
Explanation: Example
Let's solve the next problem:
$$
y'' = \frac{1}{x}y' + x^2,\\
y(1) = 0,\\
y(2) = 1
$$
having $[a, b] = [1, 2], h = 0.25$.
Correct solution is
$$y(x) = \frac{1}{24}(3x^4 - 7x^2 + 4)$$
End of explanation |
9,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
dbcollection package usage tutorial
This tutorial shows how to use the dbcollection package to load and manage datasets in a simple and easy way. It is divided into two main topics
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Tutorial 2
Step5: Step 1
Step6: Step 2
Step7: Step 3
Step8: Step 4
Step9: Step 5
Step10: Step 6
Step11: Step 7 | Python Code:
# import tutorial packages
from __future__ import print_function
import os
import sys
import numpy as np
import dbcollection.manager as dbclt
Explanation: dbcollection package usage tutorial
This tutorial shows how to use the dbcollection package to load and manage datasets in a simple and easy way. It is divided into two main topics:
<ol>
<li>Dataset managing.</li>
<li>Fetch data from a dataset.</li>
</ol>
dbcollection package information
Below a brief description of the package and its core APIs is presented so you as a user can immediately start using this package after this tutorial.
Overview
This package contains a collection of datasets with pre-defined methods for data download (when available/possible)
and processing. A dataset's information is stored in an HDF5 file, where the necessary metadata is organized into several groups that (usually) correspond to the train, val and/or test sets. In turn, the information about which datasets have been defined in the system is stored in a .json cache file in your home directory: dbcollection/db_cache_info.json.
Data access
To access this file, a special API is available for easy access to data through a few commands. Also, the user can directly access the metadata file if desired. The metadata file contains all information available for each dataset, like image file names, class labels, bounding boxes, etc. The metadata is stored in two ways: 1) the original metadata format style is kept and stored under the group 'set/source/'; and 2) all fields are stored separately as single arrays for fast access and combined by a field named 'object_id' which contains all the indexes relating each field with each other, and all fields are stored under the group 'set/default/'.
Features
The benefits of using this framework include the following:
- A dataset can be setup once and reused as many times as needed
- Since data is stored and accessed from disk, the memory footprint is small for any dataset
- It has cross-platform (Windows, Linux, MacOS) and cross-language (C/C++, Python, Lua, Matlab) capabilities
- Any dataset can be setup/stored using this framework (images, text, etc.)
Dataset managing API
For loading/removing/setup datasets, the dbcollection package contains module manager which has the following methods:
- dbcollection.manager.load(name, task, verbose): Returns a metadata loader of a dataset.
- dbcollection.manager.setup(name, data_dir, task_name, is_download, verbose): Setup a dataset's metadata and cache files on disk.
- dbcollection.manager.remove(name, delete_data): Delete a dataset from the cache.
- dbcollection.manager.manage_cache(field, value, delete_cache, clear_cache, verbose): Manages the cache file.
- dbcollection.manager.query(pattern): Do simple queries to the cache.
- dbcollection.manager.info(verbose): Prints the cache file contents.
Data loading API
When loading a dataset, dbcollection.manager.load() returns a class DatasetLoader which contains some information about the dataset like name, task, data_dir, cache_path and, for each set of the dataset (train/val/test/etc), a ManagerHDF5 class which contains the data fetching methods for the HDF5 metadata file is assigned.
The data loader API contains the following methods for data retrieval:
- get(field_name, idx): Retrieve the i'th data from the field 'field_name'.
- object(idx, is_value): Retrieves the data's ids or contents of all fields of an object.
- size(field_name): Returns the number of the elements of a 'field_name'.
- list(): Lists all fields.
- object_field_id(field_name): Retrieves the index position of a field in the object id list.
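A minimal sketch of how these calls fit together (it assumes a dataset such as MNIST has already been set up, as done in Tutorial 1 below):
loader = dbclt.load(name='mnist')        # returns a DatasetLoader
print(loader.train.list())               # all field names in the train set
print(loader.train.size('data'))         # number of elements in the 'data' field
img = loader.train.get('data', 0)        # first element of the 'data' field
ids = loader.train.object(0, False)      # index list linking all fields of object 0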
Note
This package uses a cache file to store all the information about the dataset's data directory, name, available tasks, etc. This file won't be available until you use the dataset managing API methods for the first time.
Tutorial 1: Setup a dataset
This tutorial shows how you can easily setup and load a dataset from the available list with a few simple commands.
Here, we will download the MNIST dataset, setup its metadata file with the train+test sets and display the Loader contents for this dataset.
End of explanation
# display cache files contents
dbclt.info()
# display available datasets for download/process in the package
dbclt.info(list_datasets=True)
# lets make a directory to store the data
path = os.path.join(os.path.expanduser("~"), 'tmp', 'data')
if not os.path.exists(path):
os.makedirs(path)
# download + setup the MNIST dataset
dbclt.setup(name='mnist', data_dir=path)
# display cache files contents (mnist should be listed now)
dbclt.info()
# load the MNIST dataset
loader = dbclt.load(name='mnist')
# print the dataset's information
print('######### info #########')
print('')
print('Dataset: ' + loader.name)
print('Task: ' + loader.task)
print('Data path: ' + loader.data_dir)
print('Metadata cache path: ' + loader.cache_path)
print('Sets: ', loader.sets)
Explanation: Step 1: Display the cache file contents to the screen
First, let's check if the MNIST dataset exists in the cache.
NOTE: if this is the first time using the package, a folder dbcollection/ will be created in your home directory, along with the db_cache_info.json cache file inside it, which contains all of the datasets' information.
End of explanation
# download + setup the CIFAR10 dataset
dbclt.setup(name='cifar10', data_dir=path) # store the dataset's files to ~/data/
# display cache files contents (cifar10 should be listed now)
dbclt.info()
# load the cifar10 dataset
loader = dbclt.load(name='cifar10')
# print the dataset's information
print('######### info #########')
print('')
print('Dataset: ' + loader.name)
print('Task: ' + loader.task)
print('Data path: ' + loader.data_dir)
print('Metadata cache path: ' + loader.cache_path)
print('Sets: ', loader.sets)
# remove the dataset from the cache and delete its files from disk
dbclt.remove(name='cifar10', delete_data=True)
# display cache files contents (cifar10 shouldn't be listed)
dbclt.info()
# to show that the dataset it was removed, lets attempt to load cifar10 again (should give an error)
loader = dbclt.load(name='cifar10')
Explanation: Step 2: Remove a dataset
Removing a dataset from the list is as simple as calling the remove() method.
Let's install another dataset, show that it was successfully installed, and then remove it.
End of explanation
# fetch MNIST data folder path. For this, we'll use the query() method to retrieve
# the information about the mnist dataset as a list
result = dbclt.query('mnist')
print('Result query: ', result)
if 'data_dir' in result['mnist']:
data = result['mnist']
data_dir = result['mnist']['data_dir']
print('\nMNIST data directory: {}'.format(data_dir))
# rename the directory
data_dir_new = data_dir + 'new'
os.rename(data_dir, data_dir_new)
print('\nRenamed mnist folder to: {}'.format(data_dir_new))
# update the path of the data dir in the cache file
new_data = data
new_data['data_dir'] = data_dir_new
dbclt.manage_cache(field='mnist', value=new_data)
print('New data: ', new_data)
# check if the data directory path was modified
dbclt.info()
Explanation: Step 3: Change some information of the cache file
In cases where you need to change some information regarding the cache file, you can use the manage_cache() method for this. Note: This can also be done by modifying the .json file directly.
Here we'll rename the path of the MNIST data directory to another name and we'll update the cache file information with the new path. Later we'll see the effects of this change when loading data samples from disk.
End of explanation
# NOTE: in order to store strings in the HDF5 file, they are converted to ascii format
# and then stored as a numpy array of type 'numpy.uint8'. So, when retrieving string
# fields, we must convert them from ascii back to str. This package contains some utility
# functions for this task.
from dbcollection.utils import convert_ascii_to_str as _tostr
import matplotlib.pyplot as plt
Explanation: Tutorial 2: Fetch data from a dataset.
This tutorial shows how to fetch data from a dataset using an API.
The load() method returns a DatasetLoader class which contains information about the dataset like the name, task, data paths, sets and a handler for the metadata (HDF5) file for direct access. Also, for each set, a ManagerHDF5 class is setup so you can easily access data with a simple API to retrieve data from the metadata file.
In this tutorial we will use the MNIST dataset to retrieve and display data, both through the data-fetching API and by directly accessing the HDF5 file handler.
End of explanation
# load mnist
mnist_loader = dbclt.load(name='mnist', task='default')
Explanation: Step 1: Load the MNIST dataset
Load the MNIST dataset using the default task.
<blockquote>
<p>NOTE: many datasets like [MSCOCO](http://mscoco.org) have multiple tasks like **object detection**, **caption** or **human body joint keypoint detection**. In order to cope with this, we store each task into a separate HDF5 file by name. Then, when loading a dataset, one just needs to specify which dataset and task to load.</p>
</blockquote>
End of explanation
# fetch data fields
fields = mnist_loader.train.list()
print('MNIST fields:')
print(fields)
Explanation: Step 2: List all data fields composing the MNIST metadata
Usually, different datasets have different attributes/data fields/annotations. The list() method returns all data fields of a dataset.
Here we'll use the train set to fetch this information, but you could retrieve this information from the test set as well. For the rest of the steps we'll continue to use the train set as the source of data.
End of explanation
# fetch class labels
labels = mnist_loader.train.get('classes')
print('MNIST class labels:')
print(_tostr(labels))
Explanation: Step 3: Fetch all class labels and print them
Fetch the class names/labels of the mnist dataset using the get() method.
End of explanation
# show size of the images data
print('Total images:', mnist_loader.train.size('data'))
print('Image data size:', mnist_loader.train.size('data', True)) # return the shape of the array
print('')
# show the size of the labels
print('Total labels:', mnist_loader.train.size('labels'))
print('Label size:', mnist_loader.train.size('labels', True)) # return the shape of the array
print('')
# show the size of the object_id list
print('Total objects:', mnist_loader.train.size('labels'))
print('Objects size:', mnist_loader.train.size('labels', True)) # return the shape of the array
Explanation: Step 4: Show data size
To get the size of any field you need to use the size() method.
End of explanation
# fetch the first image + label
# fetch data using get()
list_idx = mnist_loader.train.get('object_ids', 0)
img = mnist_loader.train.get('data', list_idx[0])
label = mnist_loader.train.get('labels', list_idx[1])
# fetch the same data using object()
img2, label2 = mnist_loader.train.object(0, True) #True - return values | False - return indexes
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].set_title('Method 1 (get): label {}'.format(label))
axs[0].imshow(img, cmap=plt.cm.gray)
axs[1].set_title('Method 2 (object): label {}'.format(label2))
axs[1].imshow(img2, cmap=plt.cm.gray)
plt.show()
Explanation: Step 5: Fetch an image + label
The image's data + label are grouped together in a field named object_id (this is true for any dataset). This is useful in some cases and not in others. In the MNIST case, having only the image data and label information would suffice, but in other cases it would be impossible to keep track of what matches with what.
For example, in object detection tasks like MSCOCO or Pascal VOC, images usually contain several objects and each has its own class label. The easiest way to store such relationships between images and objects is to use a list of indexes into each data field, like filename, label, bounding box, etc.
Here, the object_id field contains the indexes of both images and labels. To fetch this information, you have two choices:
<ol>
<li>Use **get('object_id', idx)**</li>
<li>Use **object(idx)** to fetch a list of indexes.</li>
</ol>
Although both return the same output, object() can return either a list of indexes or a list of values, i.e., it automatically fetches the data of all fields w.r.t. their indexes.
End of explanation
import random
from mpl_toolkits.axes_grid1 import ImageGrid
# get data size
img_size = mnist_loader.train.size('data')
fig = plt.figure(1, (6., 6.))
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(8, 8), # creates 8*8 grid of axes
axes_pad=0.05, # pad between axes in inch.
)
for i in range(8*8):
    img, label = mnist_loader.train.object(random.randint(0, img_size - 1), True)
grid[i].axis('off')
grid[i].imshow(img, cmap=plt.cm.gray)
plt.show()
Explanation: Step 6: Fetch random images and display them
This example loads images randomly and displays them in an 8x8 grid.
End of explanation
# get the HDF5 file handler
mnist_h5 = mnist_loader.file.storage
# fetch a random image and label from the test set
size = len(mnist_h5['test/source/labels'])
idx = random.randint(0, size - 1)
img = mnist_h5['test/source/data'][idx]
label = mnist_h5['test/source/labels'][idx]
print('Display image nº{}'.format(idx))
print('Label: {}'.format(label))
plt.imshow(img, cmap=plt.cm.gray)
plt.show()
Explanation: Step 7: Access data through python's HDF5 file API
You can directly access the metadata's data file from the data loader. This allows the user to access the data in two formats:
<ol>
<li>`default` format, where each data field is stored as a single and separate numpy array and where the fields are combined by the `object_id` field.</li>
<li>`source` format, where the data is stored in the dataset's original format (usually in a nested scheme).</li>
</ol>
These two formats are translated to two distinct groups in the HDF5 file for each set (train/val/test/etc.). They are defined by default/ (API friendly) and source/ (original data format). Since some users might prefer one approach over the other, providing both choices should cover most users and situations.
In the following example, data is retrieved directly through Python's HDF5 API.
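For comparison, the same kind of lookup through the API-friendly default/ group might look roughly like the sketch below. This is only illustrative: the exact field names under default/ (for example, whether the index field is exposed as object_ids) are assumptions based on the description above, so inspect the file or mnist_loader.test.list() before relying on them.
obj = mnist_h5['test/default/object_ids'][idx]     # hypothetical: index pair [data_idx, label_idx]
img_d = mnist_h5['test/default/data'][obj[0]]      # hypothetical field names, assumed from the text above
label_d = mnist_h5['test/default/labels'][obj[1]]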
End of explanation |
9,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p style="text-align
Step1: Bad match dates
Step2: Setup MySQL connection
Login credentials for connecting to MySQL database.
Step3: All import statements here.
Step4: Try to connect to the tennis database on the local mysql host. If successful, print out the MySQL version number, if unsuccessful, exit gracefully.
Step11: Extract data from MySQL database
We'll focus on cleaning the seven years of data between 2010 and 2016.
Step12: Note some issues already
Step13: Step I
Step14: For each tournament (group), extract features to assist in "tourney-matching", including the number of matches in the tournament, the maximum and minimum listed dates, the location (in odds) and the tournament name (in matches).
Step15: Define lookup-table connecting Location in odds to tourney_name in matches. Establishing this matching is not totally straightforward
Step17: Define a function that will take as input a set of features from one tournament in odds, and try to match it to one tournament in matches. The algorithm makes sure that dates are roughly the same, and that there is a correspondence between Location and tourney_name.
Step18: Perform the matches.
Step19: Merge match numbers back into bigger odds table, checking that the sizes of each "matched" tournament are the same.
Step20: Step II
Step21: We'll use round information to help with the task of pairing individual matches. To use round information effectively, we need to establish a correspondence between round signifiers in the two datasets. The exact correspondence depends on how many tennis matches are in each tournament.
Step22: With the round lookup tables defined, define a function that takes a row of odds, and figures out how to map its round information to the round information in matches.
Step23: We'll apply that mapping to each row of the odds dataframe.
Step24: Before going on, a quick sanity check
Step25: Looks good. Now pare down the matches dataframe to contain only records in odds.
Step26: To do player name matching, we split each name, and attempt to match on substrings.
Step27: Match individual matches as follows
Step28: Take a quick peek at the remaining names to get a sense for what the issues are
Step30: Now match across all substrings in a name. To do so, we need a function that will return True if there is a match between one or more strings in two string lists.
Step32: We also need a function that will take each row of odds, and try to find a match for some appropriate subchunk of matches.
Step33: Update match list, and check progress.
Step34: Take a peek at the remaining names and see what the problems are.
Step35: Some rounds are wrong. Try re-matching both winner and loser last names, without insisting on round information.
Step37: That solved some. Now try matching unusual names, ignoring rounds. This involves slightly modifying the comparison function.
Step38: Still some errors. Let's take a look.
Step39: Two big problems. One is 'delbonis' vs. 'del bonis'.
Step40: What remains is exclusively a mismatch of winner and loser. First sort and match keys, then check who really won and correct the data.
Step41: To see who really won, calculate who played the most rounds.
Step42: Correct any mistakes in either table.
Step43: Now we can assign a single key for both odds and matches which corresponds on the match level. We can also standardize match numbers within tournaments.
Step44: Finally, we save our cleansed and matched datasets. | Python Code:
import IPython as IP
IP.display.Image("example_of_name_matching_problems_mod.png",width=400,height=200,embed=True)
Explanation: <p style="text-align: center"> Merging "odds" and "player" data</p>
Author: Carl Toews
File: merge_datasets.ipynb
Description:
An obvious metric for assessing the quality of learning algorithms is to compare their profitability against on-line betting markets. The notebook setup_mysql.ipynb loads into a MySQL database files from two distinct sources, one containing characteristics of the players (good for learning), the other containing odds for each match (good for assessing the learning). Each record from the latter presumably corresponds to a unique record from the former.
Unfortunately, there are a couple of issues that make establishing this correspondence difficult, including each of the following:
1. Incorrect match dates
2. Incorrect spellings of player names
3. Listing the winner as the loser, and vice versa.
4. Inconsistent tournament names
5. Inconsistent tournament ID numbers
This notebook records some of the methods I used to establish the correspondence. It ultimately produces two DataFrames, one called matches (with player characteristics) and one called odds (with betting characteristics) of the same size and a one-to-one numerical key uniquely identifying each match across both datasets.
Examples of bad data
Bad player names
End of explanation
IP.display.Image("../aux/bad_csv_data_mod.png",width=500,height=500,embed=True)
Explanation: Bad match dates
End of explanation
# name of database
db_name = "tennis"
# name of db user
username = "testuser"
# db password for db user
password = "test623"
# location of atp data files
atpfile_directory = "../data/tennis_atp-master/"
# location of odds data files
oddsfiles_directory = "../data/odds_data/"
# we'll read and write pickle files here
pickle_dir = '../pickle_files/'
Explanation: Setup MySQL connection
Login credentials for connecting to MySQL database.
End of explanation
import sqlalchemy # pandas-mysql interface library
import sqlalchemy.exc # exception handling
from sqlalchemy import create_engine # needed to define db interface
import sys # for defining behavior under errors
import numpy as np # numerical libraries
import scipy as sp
import pandas as pd # for data analysis
import pandas.io.sql as sql # for interfacing with MySQL database
import matplotlib as mpl # a big library with plotting functionality
import matplotlib.pyplot as plt # a subset of matplotlib with most of the useful tools
import IPython as IP
%matplotlib inline
import pdb
#%qtconsole
Explanation: All import statements here.
End of explanation
# create an engine for interacting with the MySQL database
try:
eng_str = 'mysql+mysqldb://' + username + ':' + password + '@localhost/' + db_name
engine = create_engine(eng_str)
connection = engine.connect()
version = connection.execute("SELECT VERSION()")
print("Database version : ")
print(version.fetchone())
# report what went wrong if this fails.
except sqlalchemy.exc.DatabaseError as e:
reason = e.message
print("Error %s:" % (reason))
sys.exit(1)
# close the connection
finally:
if connection:
connection.close()
else:
print("Failed to create connection.")
Explanation: Try to connect to the tennis database on the local mysql host. If successful, print out the MySQL version number, if unsuccessful, exit gracefully.
End of explanation
# focus on most recent data; exclude Davis Cup stuff
startdate = '20100101'
enddate = '20161231'
with engine.begin() as connection:
    odds = pd.read_sql_query("SELECT * FROM odds \
                WHERE DATE >= '" + startdate + "' \
                AND DATE <= '" + enddate + "';", connection)
with engine.begin() as connection:
    matches = pd.read_sql_query("SELECT * FROM matches \
                WHERE tourney_date >= '" + startdate + "' \
                AND tourney_date <= '" + enddate + "' \
                AND tourney_name NOT LIKE 'Davis%%';", connection)
# view results
IP.display.display(odds[['ATP','Location','Tournament','Date','Round',
'Winner','Loser']].sort_values('Date')[0:5])
IP.display.display(matches[['tourney_id','tourney_name','tourney_date','round',
'winner_name','loser_name']].sort_values('tourney_date')[0:5])
Explanation: Extract data from MySQL database
We'll focus on cleaning the seven years of data between 2010 and 2016.
End of explanation
odds[['Location','Winner','Loser']] = \
odds[['Location','Winner','Loser']].\
apply(lambda x: x.str.strip().str.lower().str.replace('-',' '),axis=1)
matches[['tourney_name','winner_name','loser_name']] = \
matches[['tourney_name','winner_name','loser_name']].\
apply(lambda x: x.str.strip().str.lower().str.replace('-',' '),axis=1)
Explanation: Note some issues already:
1. Date in matches are often pegged to the start date of the tournament, while dates in odds are often tied to the match itself. Thus time-ordering leads to different results.
2. The variable tourney_name in matches often corresponds to the variable Location in odds (but not always!)
3. Rounds are denoted in different ways. "1st round" in odds can match to R32, R64, or R128 on matches.
4. Name formats are different.
5. While matches has a unique tournament ID for each tournament, odds recycles tournament ids each year.
Before doing any serious data processing, tidy up the strings (strip whitespace, convert to lowercase, replace dashes by space)
End of explanation
# matches tournament identifiers are unique
g_matches = matches.groupby('tourney_id')
# odds tournament identifiers are recycled every year
g_odds= odds.groupby(['ATP','fname'])
Explanation: Step I: pair unique tournaments from each set
Each unique tournament has an identifying key in both odds and matches, but the keys are different, and not one to one. Our first task is to associate unique tournaments with one another. We'll start by grouping both odds and matches by unique tournament number.
End of explanation
def extract_odds_features(group):
sizes = len(group)
min_date = group['Date'].min()
max_date = group['Date'].max()
location = group['Location'].unique()[0]
return pd.Series({'size': sizes,'min_date':min_date,\
'max_date':max_date,'location':location})
def extract_matches_features(group):
sizes = len(group)
min_date = group['tourney_date'].min()
max_date = group['tourney_date'].max()
tourney_name = group['tourney_name'].unique()[0]
return pd.Series({'size': sizes,'min_date':min_date,\
'max_date':max_date,'tourney_name':tourney_name})
g_odds = g_odds.apply(extract_odds_features).reset_index()
g_matches = g_matches.apply(extract_matches_features).reset_index()
Explanation: For each tournament (group), extract features to assist in "tourney-matching", including the number of matches in the tournament, the maximum and minimum listed dates, the location (in odds) and the tournament name (in matches).
End of explanation
tourney_lookup = pd.read_pickle(pickle_dir + 'tourney_lookup.pkl')
print("Snapshot of lookup table:")
IP.display.display(tourney_lookup.sort_values('o_name')[15:25])
Explanation: Define a lookup table connecting Location in odds to tourney_name in matches. Establishing this matching is not totally straightforward: in many cases, the two columns are identical, but in others they are not, with no obvious connection. (E.g., "Roland Garros" is the tourney_name listed for the French Open in matches, while its Location is "Paris".) Of the 100 or so different tennis tournaments, about 60 have a direct correspondence, and the others need some massaging. The derived lookup table is the product of considerable grubby data wrangling.
End of explanation
def get_tourney_ID(o_row):
    """function: get_tourney_ID(o_row)
    Input: row from dataframe g_odds
    Output: a Series object with two elements: 1) a match ID,
    and 2) a flag of True if the sizes of the two tournaments are identical
    """
    # calculate the difference in start/stop dates between this tournament and those in `matches`.
min_date_delta = np.abs(g_matches['min_date'] - o_row['min_date']).apply(lambda x: x.days)
max_date_delta = np.abs(g_matches['max_date'] - o_row['max_date']).apply(lambda x: x.days)
# find a list of candidate tournament names, based on lookup table
mtchs = (tourney_lookup['o_name']==o_row['location'])
if sum(mtchs)>0:
m_name = tourney_lookup.loc[mtchs,'m_name']
else:
print('no match found for record {}'.format(o_row['location']))
return ['Nan','Nan']
# the "right" tournament has the right name, and reasonable close start or stop dates
idx = ((min_date_delta <=3) | (max_date_delta <=1)) & (g_matches['tourney_name'].isin(m_name))
record = g_matches.loc[idx,'tourney_id']
# if there are no matches, print some diagnostic information and don't assign a match
if len(record)<1:
print("Warning: no match found for `odds` match {}, year {}".format(o_row.ATP, o_row.fname))
print("min date delta: {}, max date delta: {}, g_matches: {}".format(np.min(min_date_delta), \
np.min(max_date_delta), \
g_matches.loc[g_matches['tourney_name'].isin(m_name),'tourney_name']))
return pd.Series({'ID':'None','size':'NA'})
# if there are too many matches, print a warning and don't assign a match.
elif (len(record)>1):
print("Warning: multiple matches found for `odds` match {}".format(o_row.ATP))
return pd.Series({'ID':'Multiple','size':'NA'})
# otherwise, assign a match, and check if the sizes of the matches are consistent (a good double-check)
else:
size_flag = (g_matches.loc[idx,'size']==o_row['size'])
return pd.Series({'ID':record.iloc[0],'size':size_flag.iloc[0]})
Explanation: Define a function that will take as input a set of features from one tournament in odds, and try to match it to one tournament in matches. The algorithm makes sure that dates are roughly the same, and that there is a correspondence between Location and tourney_name.
End of explanation
# add columns to g_odds to hold match ID and also info about size-correspondence
g_odds.insert(len(g_odds.columns),'ID','None')
g_odds.insert(len(g_odds.columns),'sizes_match','NA')
# perform the match
g_odds[['ID','sizes_match']] = g_odds.apply(get_tourney_ID,axis=1).values
Explanation: Perform the matches.
End of explanation
# add "size" columns to both dataframes
odds = pd.merge(g_odds[['ATP','fname','ID','size']],odds,how='inner',on=['ATP','fname'])
matches = pd.merge(g_matches[['tourney_id','size']],matches,how='inner',on=['tourney_id'])
# sum the sizes
if sum(g_odds['sizes_match']==True) != len(g_odds):
print("Warning: at least one tournament in `odds` is matched to a \
tournament in `matches` of a different size.")
else:
print("Sizes seem to match up.")
Explanation: Merge match numbers back into bigger odds table, checking that the sizes of each "matched" tournament are the same.
End of explanation
# for each tournament, label match numbers from 1 to n_tourneys
odds.insert(5,'match_num',0)
grouped = odds[['ID','match_num']].groupby('ID')
odds['match_num'] = grouped.transform(lambda x: 1+np.arange(len(x)))
# add keys to both odds and match data
odds.insert(len(odds.columns),'key',np.arange(len(odds)))
matches.insert(len(matches.columns),'key',np.arange(len(matches)))
Explanation: Step II: Pair matches within each tournament.
To help in this process, we first insert an integer index key into both odds and matches. Also assign a match number to each match within a tournament in odds.
End of explanation
# figure out how many discrete sizes there are
print("size in odds: ", odds['size'].unique())
print("size in matches: ", matches['size'].unique())
print("unique round designators in odds: ", odds.Round.unique())
print("unique round designators in matches: ", matches['round'].unique())
# create a lookup table to be able to match on rounds
m_rounds = ['R128','R64','R32','R16','QF','SF','F','RR']
o_rounds = ['1st Round','2nd Round','3rd Round','4th Round', \
'Quarterfinals','Semifinals','The Final','Round Robin']
round_lookup_small = pd.DataFrame({'m_rounds': m_rounds[2:-1],\
'o_rounds':o_rounds[0:2]+o_rounds[4:-1]})
round_lookup_medium = pd.DataFrame({'m_rounds': m_rounds[1:-1],\
'o_rounds':o_rounds[0:3]+o_rounds[4:-1]})
round_lookup_large = pd.DataFrame({'m_rounds': m_rounds[0:-1],\
'o_rounds':o_rounds[0:-1]})
round_lookup_RR = pd.DataFrame({'m_rounds':m_rounds[5:],\
'o_rounds':o_rounds[5:]})
Explanation: We'll use round information to help with the task of pairing individual matches. To use round information effectively, we need to establish a correspondence between round signifiers in the two datasets. The exact correspondence depends on how many tennis matches are in each tournament.
End of explanation
def map_rounds(x):
cur_name = x['Round']
t_size = x['size']
if t_size in [27,31]:
new_name = round_lookup_small.loc[round_lookup_small.o_rounds==cur_name,'m_rounds']
elif t_size in [47,55]:
new_name = round_lookup_medium.loc[round_lookup_medium.o_rounds==cur_name,'m_rounds']
elif t_size in [95, 127]:
new_name = round_lookup_large.loc[round_lookup_large.o_rounds==cur_name,'m_rounds']
else:
new_name = round_lookup_RR.loc[round_lookup_RR.o_rounds==cur_name,'m_rounds']
return new_name.iloc[0]
Explanation: With the round lookup tables defined, define a function that takes a row of odds, and figures out how to map its round information to the round information in matches.
End of explanation
# translate round identifiers appropriately
odds.insert(4,'round','TBD')
odds['round'] = odds.apply(map_rounds,axis=1).values
IP.display.display(odds[0:4])
IP.display.display(matches[0:4])
Explanation: We'll apply that mapping to each row of the odds dataframe.
End of explanation
t1=odds.ID.drop_duplicates().sort_values()
t2=matches.tourney_id.drop_duplicates().sort_values()
m_sizes=matches.loc[matches.tourney_id.isin(t1),['tourney_id','size']].drop_duplicates()
o_sizes=odds.loc[odds.ID.isin(t2),['ID','size']].drop_duplicates()
#comp = pd.merge(o_sizes,m_sizes,how='outer',left_on='ID',right_on='tourney_id')
print('sum of sizes of tournaments in odds: ', np.sum(o_sizes['size']))
print('sum of sizes of tournaments in matches: ', np.sum(m_sizes['size']))
Explanation: Before going on, a quick sanity check: is the set of matches in matches that are in odds the same size as the set of matches in odds that are in matches? Does the size column have the correct data?
End of explanation
matches = matches.loc[matches.tourney_id.isin(t1),:]
print("number of records in `odds`: ", len(odds))
print("number of records in `matches`: ", len(matches))
Explanation: Looks good. Now pare down the matches dataframe to contain only records in odds.
End of explanation
# extract dataframe with player names split into discrete 'words'
m_players = pd.merge(matches.winner_name.str.split(pat=' ',expand=True), \
matches.loser_name.str.split(pat=' ',expand=True), \
how='inner',left_index=True, right_index=True,suffixes=('_W','_L'))
# add on tournament, round, and match identifiers
m_players = pd.merge(matches[['tourney_id','match_num', 'round','key']], m_players,\
how='inner',left_index=True, right_index=True).sort_values(['tourney_id','round','1_W','1_L'])
# extract dataframe with player names split into discrete 'words'
o_players = pd.merge(odds.Winner.str.split(pat=' ',expand=True), \
odds.Loser.str.split(pat=' ',expand=True), \
how='inner',left_index=True, right_index=True,suffixes=('_W','_L'))
# add on tournament and round identifiers
o_players = pd.merge(odds[['ID','round','match_num','key']], o_players,\
how='inner',left_index=True, right_index=True).sort_values(['ID','round','0_W','0_L'])
print("m_players: ")
IP.display.display(m_players[0:5])
print("o_players")
IP.display.display(o_players[0:5])
Explanation: To do player name matching, we split each name, and attempt to match on substrings.
End of explanation
# try for an exact match on last names of both winner and loser
A = pd.merge(m_players[['tourney_id','round','key','1_W','1_L']],\
o_players[['ID','round','key','0_W','0_L']],how='inner',\
left_on=['tourney_id','round','1_W','1_L'],\
right_on=['ID','round','0_W','0_L'],suffixes=['_m','_o'])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
A.key_o.unique().size
Explanation: Match individual matches as follows: assuming there is a match on both tournament number and round, then
1. First try to match both winner and loser names
2. In the complement of the result, try to match just the winner name or just the loser name (a possible sketch of this step follows the list).
3. Merge 1. and 2., see what's left over.
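As a purely illustrative sketch of step 2 (not used below; the notebook instead proceeds by matching on all name substrings), the complement could be probed by pairing on the winner's last name only within the same tournament and round, and analogously on the loser's last name only; any duplicate candidate pairings would still need to be resolved:
# Hypothetical sketch only: pair unmatched records on the winner's last name,
# keeping tournament and round fixed (a second merge on '1_L'/'0_L' would handle losers).
B_w = pd.merge(m_extras[['tourney_id','round','key','1_W']],
               o_extras[['ID','round','key','0_W']], how='inner',
               left_on=['tourney_id','round','1_W'],
               right_on=['ID','round','0_W'], suffixes=['_m','_o'])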
End of explanation
IP.display.display(m_extras[0:10])
IP.display.display(o_extras[0:10])
Explanation: Take a quick peek at the remaining names to get a sense for what the issues are
End of explanation
def comp_str_lists(a,b):
    """Checks to see if any of the strings in list a are also in list b."""
for i in a:
if i in b:
return True
return False
Explanation: Now match across all substrings in a name. To do so, we need a function that will return True if there is a match between one or more strings in two string lists.
End of explanation
def comp_all_cols(o_row):
    """
    Input: a row of o_players
    Output: a Series with the matched keys ('key_o', 'key_m'), or 0 if no unique match is found
    """
m_chunk = m_extras.loc[(m_extras.tourney_id==o_row['ID']) & (m_extras['round']==o_row['round'])]
o_winner = list(o_row[['0_W','1_W','2_W','3_W','4_W']].dropna())
o_loser = list(o_row[['0_L','1_L','2_L','3_L','4_L']].dropna())
pairing = []
if len(m_chunk)==0:
print("warning: no match/round pairing found for o_row key {}".format(o_row['key']))
return 0
for i, m_row in m_chunk.iterrows():
m_winner = list(m_row[['0_W','1_W','2_W','3_W','4_W']].dropna())
m_loser = list(m_row[['0_L','1_L','2_L','3_L','4_L']].dropna())
        pairing.append(comp_str_lists(o_winner, m_winner) & comp_str_lists(o_loser, m_loser))
if sum(pairing) == 1:
m_row = m_chunk.iloc[np.array(pairing),:]
return pd.Series({'key_o':o_row['key'],'key_m':m_row['key'].iloc[0]})
elif sum(pairing)<1:
print("warning: no name matches for o_row key {}".format(o_row['key']))
return 0
else:
print("warning: multiple name matches for o_row key {}".format(o_row['key']))
return 0
new_matches = o_extras.apply(comp_all_cols,axis=1)
Explanation: We also need a function that will take each row of odds, and try to find a match for some appropriate subchunk of matches.
End of explanation
new_matches = new_matches.loc[(new_matches.key_m!=0)&(new_matches.key_o!=0),:]
A = pd.concat([A[['key_m','key_o']],new_matches])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
Explanation: Update match list, and check progress.
End of explanation
IP.display.display(m_extras.sort_values('0_W')[0:10])
IP.display.display(o_extras.sort_values('1_L')[0:10])
Explanation: Take a peek at the remaining names and see what the problems are.
End of explanation
B = pd.merge(m_extras[['tourney_id','key','1_W','1_L']],\
o_extras[['ID','round','key','0_W','0_L']],how='inner',\
left_on=['tourney_id','1_W','1_L'],\
right_on=['ID','0_W','0_L'],suffixes=['_m','_o'])
A = pd.concat([A,B[['key_m','key_o']]])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
IP.display.display(m_extras[0:4])
IP.display.display(o_extras[0:4])
Explanation: Some rounds are wrong. Try re-matching both winner and loser last names, without insisting on round information.
End of explanation
def comp_all_cols_no_rounds(o_row):
    """
    input: row of o_players
    output: pd.Series with the matched key pair ('key_o', 'key_m'); zero keys if no unique match is found
    """
m_chunk = m_extras.loc[(m_extras.tourney_id==o_row['ID'])]
o_winner = list(o_row[['0_W','1_W','2_W','3_W','4_W']].dropna())
o_loser = list(o_row[['0_L','1_L','2_L','3_L','4_L']].dropna())
pairing = []
if len(m_chunk)==0:
print("warning: no match/round pairing found for o_row key {}".format(o_row['key']))
return 0
for i, m_row in m_chunk.iterrows():
m_winner = list(m_row[['0_W','1_W','2_W','3_W','4_W']].dropna())
m_loser = list(m_row[['0_L','1_L','2_L','3_L','4_L']].dropna())
pairing.append(comp_str_lists(o_winner,m_winner) & (comp_str_lists(o_loser,m_loser)))
if sum(pairing) == 1:
m_row = m_chunk.iloc[np.array(pairing),:]
return pd.Series({'key_o':o_row['key'],'key_m':m_row['key'].iloc[0]})
elif sum(pairing)<1:
print("warning: no name matches for o_row key {}".format(o_row['key']))
return pd.Series({'key_o':0,'key_m':0})
else:
print("warning: multiple name matches for o_row key {}".format(o_row['key']))
print(m_chunk.iloc[np.array(pairing),:])
return pd.Series({'key_o':0,'key_m':0})
new_matches = o_extras.apply(comp_all_cols_no_rounds,axis=1)
new_matches = new_matches.loc[(new_matches.key_m!=0)&(new_matches.key_o!=0),:]
A = pd.concat([A[['key_m','key_o']],new_matches])
Explanation: That solved some. Now try matching unusual names, ignoring rounds. This involves slightly modifying the comparison function.
End of explanation
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
IP.display.display(m_extras)
IP.display.display(o_extras)
Explanation: Still some errors. Let's take a look.
End of explanation
# solve the delbonis problem
o_extras.loc[o_extras['1_W']=='bonis',('0_W','1_W')] = ['delbonis',None]
B = pd.merge(m_extras[['tourney_id','key','1_W','1_L']],\
o_extras[['ID','round','key','0_W','0_L']],how='inner',\
left_on=['tourney_id','1_W','1_L'],\
right_on=['ID','0_W','0_L'],suffixes=['_m','_o'])
A = pd.concat([A,B[['key_m','key_o']]])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
IP.display.display(m_extras)
IP.display.display(o_extras)
Explanation: Two big problems remain. One is 'delbonis' vs. 'del bonis'; the other, mismatched winners and losers, is handled below.
End of explanation
m_extras = m_extras.sort_values(['tourney_id','1_W'])
o_extras = o_extras.sort_values(['ID','0_L'])
dregs=pd.DataFrame(list(zip(m_extras['key'].values, \
o_extras['key'].values)),\
columns=['key_m','key_o'])
A = pd.concat([A,dregs[['key_m','key_o']]])
Explanation: What remains is exclusively a mismatch of winner and loser. First sort and match keys, then check who really won and correct the data.
End of explanation
def find_winner(o_row):
ID = o_row['ID']
# nominal winner in `odds`
name1 = o_row['0_W']
# nominal loser in `odds`
name2 = o_row['0_L']
# number of rounds played by nominal winner
rnds1 = len(odds.loc[(odds.ID==ID) & \
(odds.Winner.str.contains(name1) | \
odds.Loser.str.contains(name1)),:])
# number of rounds played by nominal loser
rnds2 = len(odds.loc[(odds.ID==ID) & \
(odds.Winner.str.contains(name2) | \
odds.Loser.str.contains(name2)),:])
# if nominal winner played more rounds, `odds` is right and `matches` is wrong
if rnds1>rnds2:
print('Winner: ', name1)
return 'm'
# otherwise, `odds` is wrong and `matches` is right.
elif rnds1<rnds2:
print('Winner: ', name2)
return 'o'
else:
        print("function find_winner: ambiguous outcome")
return np.nan
mistake_idx = o_extras.apply(find_winner,axis=1)
Explanation: To see who really won, calculate who played the most rounds.
End of explanation
# fix messed up `odds` records
o_errs = o_extras.loc[mistake_idx.values=='o',:]
if len(o_errs)!=0:
temp = odds.loc[odds.key.isin(o_errs['key']),'Winner']
odds.loc[odds.key.isin(o_errs['key']),'Winner']=\
odds.loc[odds.key.isin(o_errs['key']),'Loser'].values
odds.loc[odds.key.isin(o_errs['key']),'Loser']=temp.values
# fix messed up `matches` records
m_errs = m_extras.loc[mistake_idx.values=='m',:]
if len(m_errs)!=0:
    temp = matches.loc[matches.key.isin(m_errs['key']),'winner_name']
    matches.loc[matches.key.isin(m_errs['key']),'winner_name']=\
        matches.loc[matches.key.isin(m_errs['key']),'loser_name'].values
    matches.loc[matches.key.isin(m_errs['key']),'loser_name']=temp.values
#sanity check
print("odds has {} records".format(len(odds)))
print("our lookup table is of size {}".format(len(A)))
print("the table has {} unique keys for `matches`".format(len(A.key_m.unique())))
print("the table has {} unique keys for `odds`".format(len(A.key_o.unique())))
Explanation: Correct any mistakes in either table.
End of explanation
# take the key originally assigned to `matches` to be the main key
A.rename(columns = {'key_m':'key'},inplace=True)
# change name of `odds` key to match that in `A`
odds.rename(columns = {'key':'key_o'},inplace=True)
# add `matches` key to `odds`, and get rid of `odds` key
odds = pd.merge(odds,A,how='inner',on='key_o')
del odds['key_o']
# use the `odds` match numbers on `matches`
matches = matches.rename(columns={'match_num':'match_num_old'})
matches = pd.merge(matches,odds[['match_num','key']],how='inner',on='key')
Explanation: Now we can assign a single key for both odds and matches which corresponds on the match level. We can also standardize match numbers within tournaments.
End of explanation
matches.to_pickle(pickle_dir + 'matches.pkl')
odds.to_pickle(pickle_dir + 'odds.pkl')
Explanation: Finally, we save our cleansed and matched datasets.
End of explanation |
9,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A dataset of 25,000 IMDB movie reviews labeled by sentiment (positive/negative). The reviews are preprocessed, and each review is encoded as a sequence of word indices (integers). For convenience, words are indexed by their frequency of occurrence in the dataset, so that, for example, the integer "3" encodes the third most frequent word in the data. This makes it fast to run filtering operations such as "consider only the top 10,000 most frequent words, excluding the top 20". By convention, "0" does not stand for any specific word and is used to encode unknown words.
Step1: 25,000 documents
Each document consists of a list of word indices
Step2: Sequences longer than maxlen are truncated
Sequences shorter than maxlen are zero-padded at the front
Step3: Verifying the Conv1D implementation
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
Explanation: A dataset of 25,000 IMDB movie reviews labeled by sentiment (positive/negative). The reviews are preprocessed, and each review is encoded as a sequence of word indices (integers). For convenience, words are indexed by their frequency of occurrence in the dataset, so that, for example, the integer "3" encodes the third most frequent word in the data. This makes it fast to run filtering operations such as "consider only the top 10,000 most frequent words, excluding the top 20". By convention, "0" does not stand for any specific word and is used to encode unknown words.
End of explanation
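As an illustrative sketch (not part of the original notebook), the integer encoding can be inverted with the dataset's word index; with the default load_data arguments the indices are offset by 3, since 0-2 are reserved for the padding/start/unknown tokens:
word_index = imdb.get_word_index()
reverse_index = {idx: word for word, idx in word_index.items()}
print(' '.join(reverse_index.get(i - 3, '?') for i in x_train[0][:10]))  # first 10 tokens of review 0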
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
# the maximum review length is 2494 words
lens = []
for i in range(25000):
lens.append(len(x_train[i]))
print(max(lens))
print('Pad sequences (sample x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
Explanation: 25,000 documents
Each document consists of a list of word indices
End of explanation
x_train[0]
print('Build model...')
model = Sequential()
# layer that converts word IDs into dense vectors
# Embedding(input_dim, output_dim, input_length)
# input_dim : vocabulary size = 5000
# output_dim: dimensionality of the dense vectors = 50
# input_length: length of the input sequences = 400
model.add(Embedding(max_features, embedding_dims, input_length=maxlen))
model.add(Dropout(0.2))
model.add(Conv1D(filters, kernel_size, padding='valid', activation='relu', strides=1))
# output the maximum value for each filter
model.add(GlobalMaxPool1D())
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test))
Explanation: Sequences longer than maxlen are truncated
Sequences shorter than maxlen are zero-padded at the front
End of explanation
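A tiny sketch of this behaviour on toy input (with the default 'pre' padding and 'pre' truncation):
demo = sequence.pad_sequences([[1, 2, 3], [1, 2, 3, 4, 5, 6]], maxlen=4)
print(demo)
# [[0 1 2 3]
#  [3 4 5 6]]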
import numpy as np
from keras.layers import Input
from keras.models import Model
inputs = Input(shape=(10, 4))
inputs
c1 = Conv1D(2, 3, padding='valid', activation='linear', strides=1)
y = c1(inputs)
y
model = Model(inputs=inputs, outputs=y)
model.summary()
# NOTE: the original notebook does not show where `x` is defined; a random input with the
# matching shape (batch, steps, channels) = (1, 10, 4) is assumed here so the cells below run.
x = np.random.random((1, 10, 4))
model.predict(x)
model.layers[1].get_weights()[0].shape
w = model.layers[1].get_weights()[0]
x[0][:3]
# matches the first row of model.predict(x)
print(x[0][:3].shape)
print(w.shape)
print(np.sum(x[0][:3] * w[:, :, 0]))
print(np.sum(x[0][:3] * w[:, :, 1]))
# shift x by one step (strides=1)
# matches the second row of model.predict(x)
print(np.sum(x[0][1:4] * w[:, :, 0]))
print(np.sum(x[0][1:4] * w[:, :, 1]))
Explanation: Verifying the Conv1D implementation
End of explanation |
9,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Gaussian Process Regression in TensorFlow Probability
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step3: Example
Step5: We place priors over the kernel hyperparameters and use tfd.JointDistributionNamed to write the joint distribution of the hyperparameters and the observed data.
Step6: We can sanity-check the implementation by verifying that we can sample from the prior and compute the log density of a sample.
Step7: Now let's optimize to find the parameter values with the highest posterior probability. We define a variable for each parameter and constrain its value to be positive.
Step8: To condition the model on the observed data, we define a target_log_prob function that takes the kernel hyperparameters (which still need to be inferred).
Step9: Note
Step10: We sanity-check the sampler by inspecting the hyperparameter traces.
Step11: Now, instead of constructing a single GP with the optimized hyperparameters, we construct the posterior predictive distribution as a mixture of GPs, each defined by a sample from the posterior distribution over the hyperparameters. We integrate approximately over the posterior parameters via Monte Carlo sampling, computing the marginal predictive distribution at unobserved locations.
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfk = tfp.math.psd_kernels
tf.enable_v2_behavior()
from mpl_toolkits.mplot3d import Axes3D
%pylab inline
# Configure plot defaults
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['grid.color'] = '#666666'
%config InlineBackend.figure_format = 'png'
Explanation: Gaussian Process Regression in TensorFlow Probability
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Gaussian_Process_Regression_In_TFP"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Gaussian_Process_Regression_In_TFP.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Gaussian_Process_Regression_In_TFP.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/Gaussian_Process_Regression_In_TFP.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
In this colab we explore Gaussian process regression using TensorFlow and TensorFlow Probability. We generate some noisy observations from a known function, fit a GP model to those data, then sample from the GP posterior and plot the sampled function values over a grid of points in the domain.

Background

Let $\mathcal{X}$ be any set. A Gaussian process (GP) is a collection of random variables indexed by $\mathcal{X}$ such that if ${X_1, \ldots, X_n} \subset \mathcal{X}$ is any finite subset, the marginal density $p(X_1 = x_1, \ldots, X_n = x_n)$ is multivariate Gaussian. Any Gaussian distribution is completely specified by its first and second central moments (mean and covariance), and GPs are no exception. We can specify a GP completely in terms of its mean function $\mu : \mathcal{X} \to \mathbb{R}$ and covariance function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. Most of the expressive power of a GP is encapsulated in the choice of covariance function. For various reasons, the covariance function is also referred to as a kernel function; it is required only to be symmetric and positive-definite (see Ch. 4 of Rasmussen & Williams). Below we use the ExponentiatedQuadratic covariance kernel, whose form is

$$ k(x, x') := \sigma^2 \exp \left( -\frac{\|x - x'\|^2}{\lambda^2} \right) $$

where $\sigma^2$ is called the 'amplitude' and $\lambda$ the length scale. The kernel parameters can be selected via a maximum likelihood optimization procedure.

A full sample from a GP comprises a real-valued function over the whole space $\mathcal{X}$ and is impractical to realize explicitly; instead, one chooses a finite set of points at which to observe a sample and draws function values at those points. This is achieved by sampling from an appropriate (finite-dimensional) multivariate Gaussian.

Note that, according to the above definition, any finite-dimensional multivariate Gaussian distribution is also a Gaussian process. Usually, when one refers to a GP, it is implicit that the index set is some $\mathbb{R}^n$, and we will make this assumption here as well.

A common application of Gaussian processes in machine learning is Gaussian process regression. The idea is to estimate an unknown function given noisy observations ${y_1, \ldots, y_N}$ of the function at a finite number of points ${x_1, \ldots, x_N}$. We imagine the following generative process:

$$ \begin{align} f &\sim \textsf{GaussianProcess}\left( \text{mean\_fn}=\mu(x), \text{covariance\_fn}=k(x, x')\right) \\ y_i &\sim \textsf{Normal}\left( \text{loc}=f(x_i), \text{scale}=\sigma\right), \quad i = 1, \ldots, N \end{align} $$

As noted above, the sampled function is impossible to compute, since we would need its values at an infinite number of points. Instead, one considers a finite sample from a multivariate Gaussian:

$$ \begin{gather} \begin{bmatrix} f(x_1) \\ \vdots \\ f(x_N) \end{bmatrix} \sim \textsf{MultivariateNormal} \left( \text{loc}= \begin{bmatrix} \mu(x_1) \\ \vdots \\ \mu(x_N) \end{bmatrix}, \ \text{scale}= \begin{bmatrix} k(x_1, x_1) & \cdots & k(x_1, x_N) \\ \vdots & \ddots & \vdots \\ k(x_N, x_1) & \cdots & k(x_N, x_N) \end{bmatrix}^{1/2} \right) \end{gather} \\ y_i \sim \textsf{Normal} \left( \text{loc}=f(x_i), \text{scale}=\sigma \right) $$

Note the exponent $\frac{1}{2}$ on the covariance matrix: it denotes a Cholesky decomposition. Computing the Cholesky factor is necessary because the MVN is a location-scale family distribution. Unfortunately, the Cholesky decomposition is computationally expensive, taking $O(N^3)$ time and $O(N^2)$ space. Much of the GP literature is focused on dealing with this seemingly innocuous little exponent.

It is common to take the prior mean function to be constant, often zero. Some notational conventions are also useful. One often writes $\mathbf{f}$ for the finite vector of sampled function values, and a number of interesting notations are in use for the covariance matrix obtained by applying $k$ to pairs of inputs. Following (Quiñonero-Candela, 2005), we note that the components of the matrix are covariances of function values at particular input points, so we can denote the covariance matrix by $K_{AB}$, where $A$ and $B$ indicate the collections of function values along the given matrix dimensions.

For example, given observed data $(\mathbf{x}, \mathbf{y})$ with implied latent function values $\mathbf{f}$, we can write

$$ K_{\mathbf{f},\mathbf{f}} = \begin{bmatrix} k(x_1, x_1) & \cdots & k(x_1, x_N) \\ \vdots & \ddots & \vdots \\ k(x_N, x_1) & \cdots & k(x_N, x_N) \end{bmatrix} $$

Similarly, we can mix sets of inputs, as in

$$ K_{\mathbf{f},*} = \begin{bmatrix} k(x_1, x^*_1) & \cdots & k(x_1, x^*_T) \\ \vdots & \ddots & \vdots \\ k(x_N, x^*_1) & \cdots & k(x_N, x^*_T) \end{bmatrix} $$

where we suppose there are $N$ training inputs and $T$ test inputs. The generative process above can then be written compactly as

$$ \begin{align} \mathbf{f} &\sim \textsf{MultivariateNormal} \left( \text{loc}=\mathbf{0}, \text{scale}=K_{\mathbf{f},\mathbf{f}}^{1/2} \right) \\ y_i &\sim \textsf{Normal} \left( \text{loc}=f_i, \text{scale}=\sigma \right), \quad i = 1, \ldots, N \end{align} $$

The sampling operation in the first line yields a finite set of $N$ function values from a multivariate Gaussian -- not an entire function as in the GP draw notation above. The second line describes a collection of $N$ draws from univariate Gaussians centered at the various function values, with fixed observation noise $\sigma^2$.

With the above generative model in place, we can consider the posterior inference problem: it yields a posterior distribution over function values at a new set of test points, conditioned on the observed noisy data from the process above.

With this notation, we can compactly write the posterior predictive distribution over future (noisy) observations, conditional on the corresponding inputs and the training data, as follows (see §2.2 of Rasmussen & Williams for details):

$$ \mathbf{y}^* \mid \mathbf{x}^*, \mathbf{x}, \mathbf{y} \sim \textsf{Normal} \left( \text{loc}=\mathbf{\mu}^*, \text{scale}=(\Sigma^*)^{1/2} \right), $$

where

$$ \mathbf{\mu}^* = K_{*,\mathbf{f}}\left(K_{\mathbf{f},\mathbf{f}} + \sigma^2 I \right)^{-1} \mathbf{y} $$

and

$$ \Sigma^* = K_{*,*} - K_{*,\mathbf{f}} \left(K_{\mathbf{f},\mathbf{f}} + \sigma^2 I \right)^{-1} K_{\mathbf{f},*} $$

Imports
End of explanation
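As an illustrative aside (not part of the original notebook), the closed-form posterior equations above can be sketched directly in NumPy; `kernel_fn` below is an assumed helper that returns the pairwise kernel matrix for two sets of points:
def gp_posterior_sketch(kernel_fn, x_obs, y_obs, x_new, noise_variance):
  # K_ff + sigma^2 I, K_*f and K_** from the formulas above
  k_ff = kernel_fn(x_obs, x_obs) + noise_variance * np.eye(len(x_obs))
  k_sf = kernel_fn(x_new, x_obs)
  k_ss = kernel_fn(x_new, x_new)
  mu_star = k_sf @ np.linalg.solve(k_ff, y_obs)
  sigma_star = k_ss - k_sf @ np.linalg.solve(k_ff, k_sf.T)
  return mu_star, sigma_star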
def sinusoid(x):
return np.sin(3 * np.pi * x[..., 0])
def generate_1d_data(num_training_points, observation_noise_variance):
  """Generate noisy sinusoidal observations at a random set of points.

  Returns:
    observation_index_points, observations
  """
index_points_ = np.random.uniform(-1., 1., (num_training_points, 1))
index_points_ = index_points_.astype(np.float64)
# y = f(x) + noise
observations_ = (sinusoid(index_points_) +
np.random.normal(loc=0,
scale=np.sqrt(observation_noise_variance),
size=(num_training_points)))
return index_points_, observations_
# Generate training data with a known noise level (we'll later try to recover
# this value from the data).
NUM_TRAINING_POINTS = 100
observation_index_points_, observations_ = generate_1d_data(
num_training_points=NUM_TRAINING_POINTS,
observation_noise_variance=.1)
Explanation: Example: Exact GP Regression on Noisy Sinusoidal Data
Here we generate training data from a noisy sinusoid, then sample a bunch of curves from the posterior of the GP regression model. We use Adam to optimize the kernel hyperparameters (we minimize the negative log likelihood of the data under the prior). We plot the training curve, then the true function, the observations, and the posterior samples.
End of explanation
def build_gp(amplitude, length_scale, observation_noise_variance):
  """Defines the conditional dist. of GP outputs, given kernel parameters."""
# Create the covariance kernel, which will be shared between the prior (which we
# use for maximum likelihood training) and the posterior (which we use for
# posterior predictive sampling)
kernel = tfk.ExponentiatedQuadratic(amplitude, length_scale)
# Create the GP prior distribution, which we will use to train the model
# parameters.
return tfd.GaussianProcess(
kernel=kernel,
index_points=observation_index_points_,
observation_noise_variance=observation_noise_variance)
gp_joint_model = tfd.JointDistributionNamed({
'amplitude': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'length_scale': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'observation_noise_variance': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'observations': build_gp,
})
Explanation: We place priors over the kernel hyperparameters and use tfd.JointDistributionNamed to write the joint distribution of the hyperparameters and the observed data.
End of explanation
x = gp_joint_model.sample()
lp = gp_joint_model.log_prob(x)
print("sampled {}".format(x))
print("log_prob of sample: {}".format(lp))
Explanation: We can sanity-check our implementation by verifying that we can sample from the prior and compute the log density of a sample.
End of explanation
# Create the trainable model parameters, which we'll subsequently optimize.
# Note that we constrain them to be strictly positive.
constrain_positive = tfb.Shift(np.finfo(np.float64).tiny)(tfb.Exp())
amplitude_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='amplitude',
dtype=np.float64)
length_scale_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='length_scale',
dtype=np.float64)
observation_noise_variance_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='observation_noise_variance_var',
dtype=np.float64)
trainable_variables = [v.trainable_variables[0] for v in
[amplitude_var,
length_scale_var,
observation_noise_variance_var]]
Explanation: Now let's optimize to find the parameter values with the highest posterior probability. We define a variable for each parameter and constrain their values to be positive.
End of explanation
def target_log_prob(amplitude, length_scale, observation_noise_variance):
return gp_joint_model.log_prob({
'amplitude': amplitude,
'length_scale': length_scale,
'observation_noise_variance': observation_noise_variance,
'observations': observations_
})
# Now we optimize the model parameters.
num_iters = 1000
optimizer = tf.optimizers.Adam(learning_rate=.01)
# Use `tf.function` to trace the loss for more efficient evaluation.
@tf.function(autograph=False, jit_compile=False)
def train_model():
with tf.GradientTape() as tape:
loss = -target_log_prob(amplitude_var, length_scale_var,
observation_noise_variance_var)
grads = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
return loss
# Store the likelihood values during training, so we can plot the progress
lls_ = np.zeros(num_iters, np.float64)
for i in range(num_iters):
loss = train_model()
lls_[i] = loss
print('Trained parameters:')
print('amplitude: {}'.format(amplitude_var._value().numpy()))
print('length_scale: {}'.format(length_scale_var._value().numpy()))
print('observation_noise_variance: {}'.format(observation_noise_variance_var._value().numpy()))
# Plot the loss evolution
plt.figure(figsize=(12, 4))
plt.plot(lls_)
plt.xlabel("Training iteration")
plt.ylabel("Log marginal likelihood")
plt.show()
# Having trained the model, we'd like to sample from the posterior conditioned
# on observations. We'd like the samples to be at points other than the training
# inputs.
predictive_index_points_ = np.linspace(-1.2, 1.2, 200, dtype=np.float64)
# Reshape to [200, 1] -- 1 is the dimensionality of the feature space.
predictive_index_points_ = predictive_index_points_[..., np.newaxis]
optimized_kernel = tfk.ExponentiatedQuadratic(amplitude_var, length_scale_var)
gprm = tfd.GaussianProcessRegressionModel(
kernel=optimized_kernel,
index_points=predictive_index_points_,
observation_index_points=observation_index_points_,
observations=observations_,
observation_noise_variance=observation_noise_variance_var,
predictive_noise_variance=0.)
# Create op to draw 50 independent samples, each of which is a *joint* draw
# from the posterior at the predictive_index_points_. Since we have 200 input
# locations as defined above, this posterior distribution over corresponding
# function values is a 200-dimensional multivariate Gaussian distribution!
num_samples = 50
samples = gprm.sample(num_samples)
# Plot the true function, observations, and posterior samples.
plt.figure(figsize=(12, 4))
plt.plot(predictive_index_points_, sinusoid(predictive_index_points_),
label='True fn')
plt.scatter(observation_index_points_[:, 0], observations_,
label='Observations')
for i in range(num_samples):
plt.plot(predictive_index_points_, samples[i, :], c='r', alpha=.1,
label='Posterior Sample' if i == 0 else None)
leg = plt.legend(loc='upper right')
for lh in leg.legendHandles:
lh.set_alpha(1)
plt.xlabel(r"Index points ($\mathbb{R}^1$)")
plt.ylabel("Observation space")
plt.show()
Explanation: To condition the model on the observed data, we define a target_log_prob function that takes the kernel hyperparameters (which still need to be inferred).
End of explanation
num_results = 100
num_burnin_steps = 50
sampler = tfp.mcmc.TransformedTransitionKernel(
tfp.mcmc.NoUTurnSampler(
target_log_prob_fn=target_log_prob,
step_size=tf.cast(0.1, tf.float64)),
bijector=[constrain_positive, constrain_positive, constrain_positive])
adaptive_sampler = tfp.mcmc.DualAveragingStepSizeAdaptation(
inner_kernel=sampler,
num_adaptation_steps=int(0.8 * num_burnin_steps),
target_accept_prob=tf.cast(0.75, tf.float64))
initial_state = [tf.cast(x, tf.float64) for x in [1., 1., 1.]]
# Speed up sampling by tracing with `tf.function`.
@tf.function(autograph=False, jit_compile=False)
def do_sampling():
return tfp.mcmc.sample_chain(
kernel=adaptive_sampler,
current_state=initial_state,
num_results=num_results,
num_burnin_steps=num_burnin_steps,
trace_fn=lambda current_state, kernel_results: kernel_results)
t0 = time.time()
samples, kernel_results = do_sampling()
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
Explanation: Note: If you run the code above several times, sometimes it looks great and other times it looks terrible! Maximum-likelihood training of the parameters is quite sensitive and sometimes converges to poor models. The best approach is to marginalize over the model hyperparameters using MCMC.
Marginalizing hyperparameters with HMC
Instead of optimizing the hyperparameters, let's integrate over them with Hamiltonian Monte Carlo. We first define and run a sampler to approximately draw from the posterior distribution over the kernel hyperparameters, given the observations.
End of explanation
(amplitude_samples,
length_scale_samples,
observation_noise_variance_samples) = samples
f = plt.figure(figsize=[15, 3])
for i, s in enumerate(samples):
ax = f.add_subplot(1, len(samples) + 1, i + 1)
ax.plot(s)
Explanation: Let's sanity-check the sampler by inspecting the hyperparameter traces.
End of explanation
# The sampled hyperparams have a leading batch dimension, `[num_results, ...]`,
# so they construct a *batch* of kernels.
batch_of_posterior_kernels = tfk.ExponentiatedQuadratic(
amplitude_samples, length_scale_samples)
# The batch of kernels creates a batch of GP predictive models, one for each
# posterior sample.
batch_gprm = tfd.GaussianProcessRegressionModel(
kernel=batch_of_posterior_kernels,
index_points=predictive_index_points_,
observation_index_points=observation_index_points_,
observations=observations_,
observation_noise_variance=observation_noise_variance_samples,
predictive_noise_variance=0.)
# To construct the marginal predictive distribution, we average with uniform
# weight over the posterior samples.
predictive_gprm = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(logits=tf.zeros([num_results])),
components_distribution=batch_gprm)
num_samples = 50
samples = predictive_gprm.sample(num_samples)
# Plot the true function, observations, and posterior samples.
plt.figure(figsize=(12, 4))
plt.plot(predictive_index_points_, sinusoid(predictive_index_points_),
label='True fn')
plt.scatter(observation_index_points_[:, 0], observations_,
label='Observations')
for i in range(num_samples):
plt.plot(predictive_index_points_, samples[i, :], c='r', alpha=.1,
label='Posterior Sample' if i == 0 else None)
leg = plt.legend(loc='upper right')
for lh in leg.legendHandles:
lh.set_alpha(1)
plt.xlabel(r"Index points ($\mathbb{R}^1$)")
plt.ylabel("Observation space")
plt.show()
Explanation: Now, instead of constructing a single GP with the optimized hyperparameters, we construct the posterior predictive distribution as a mixture of GPs, each defined by a sample from the posterior distribution over the hyperparameters. We integrate approximately over the posterior parameters via Monte Carlo sampling, computing the marginal predictive distribution at unobserved locations.
End of explanation |
9,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output are vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resembles the vector for “Toronto Maple Leafs”.
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings)
Step1: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM
Step2: Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users
Step3: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)
Step4: Training
Word2Vec accepts several parameters that affect both training speed and quality.
min_count
min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them
Step5: size
size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto.
Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
Step6: workers
workers, the last of the major parameters (full list here) is for training parallelization, to speed up training
Step7: The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
Evaluating
Word2Vec training is an unsupervised task, there’s no good way to objectively evaluate the result. Evaluation depends on your end application.
Google has released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.
For example a syntactic analogy of comparative type is bad
Step8: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context. At the same time 'clothes' and 'closet' are less similar because they are related but not interchangeable.
Step9: Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods
Step10: which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats
Step11: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it’s not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box
Step12: You can get the probability distribution for the center word given the context words as input
Step13: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis
Step14: …or en-masse as a 2D NumPy matrix from model.wv.syn0.
Training Loss Computation
The parameter compute_loss can be used to toggle computation of loss while training the Word2Vec model. The computed loss is stored in the model attribute running_training_loss and can be retrieved using the function get_latest_training_loss as follows
Step15: Benchmarks to see effect of training loss compuation code on training time
We first download and setup the test data used for getting the benchmarks.
Step16: We now compare the training time taken for different combinations of input data and model training parameters like hs and sg. | Python Code:
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
Explanation: Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output are vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resembles the vector for “Toronto Maple Leafs”.
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):
End of explanation
# create some toy data to use with the following example
import smart_open, os
if not os.path.exists('./data/'):
os.makedirs('./data/')
filenames = ['./data/f1.txt', './data/f2.txt']
for i, fname in enumerate(filenames):
with smart_open.smart_open(fname, 'w') as fout:
for line in sentences[i]:
fout.write(line + '\n')
class MySentences(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
for line in open(os.path.join(self.dirname, fname)):
yield line.split()
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)
print(model)
print(model.wv.vocab)
Explanation: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence…
For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:
End of explanation
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training
new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator
new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter)
# can be a non-repeatable, 1-pass generator
print(new_model)
print(model.wv.vocab)
Explanation: Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator. In general it runs iter+1 passes. By the way, the default value is iter=5 to comply with Google's word2vec in C language.
1. The first pass collects words and their frequencies to build an internal dictionary tree structure.
2. The second pass trains the neural model.
These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way:
End of explanation
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'
class MyText(object):
def __iter__(self):
for line in open(lee_train_file):
# assume there's one document per line, tokens separated by whitespace
yield line.lower().split()
sentences = MyText()
print(sentences)
Explanation: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):
End of explanation
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)
Explanation: Training
Word2Vec accepts several parameters that affect both training speed and quality.
min_count
min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them:
End of explanation
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)
Explanation: size
size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto.
Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
End of explanation
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
Explanation: workers
workers, the last of the major parameters (full list here) is for training parallelization, to speed up training:
End of explanation
model.accuracy('./datasets/questions-words.txt')
Explanation: The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
Evaluating
Word2Vec training is an unsupervised task, there’s no good way to objectively evaluate the result. Evaluation depends on your end application.
Google has released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.
For example a syntactic analogy of comparative type is bad:worse;good:?. There are a total of 9 types of syntactic comparisons in the dataset, like plural nouns and nouns of opposite meaning.
The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?).
Gensim supports the same evaluation set, in exactly the same format:
End of explanation
model.evaluate_word_pairs(test_data_dir + 'wordsim353.tsv')
Explanation: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context. At the same time 'clothes' and 'closet' are less similar because they are related but not interchangeable.
End of explanation
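As a sketch of the restrict_vocab option mentioned above (left commented out, since the scoring call was already run):
# only consider analogy questions whose words are among the 30,000 most frequent terms
# model.accuracy('./datasets/questions-words.txt', restrict_vocab=30000)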
from tempfile import mkstemp
fs, temp_path = mkstemp("gensim_temp") # creates a temp file
model.save(temp_path) # save the model
new_model = gensim.models.Word2Vec.load(temp_path) # open the model
Explanation: Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods:
End of explanation
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter)
# cleaning up temp
os.close(fs)
os.remove(temp_path)
Explanation: which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# using gzipped/bz2 input works too, no need to unzip:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
Online training / Resuming training
Advanced users can load a model and continue training it with more sentences and new vocabulary words:
End of explanation
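Relating to the mmap option mentioned above, a small sketch (commented out because the temp file was removed in the cleanup step earlier; it assumes a model previously saved with model.save()):
# memory-map the large arrays read-only so several processes can share them
# shared_model = gensim.models.Word2Vec.load(temp_path, mmap='r')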
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)
model.doesnt_match("input is lunch he sentence cat".split())
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))
Explanation: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it’s not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box:
End of explanation
print(model.predict_output_word(['emergency', 'beacon', 'received']))
Explanation: You can get the probability distribution for the center word given the context words as input:
End of explanation
model['tree'] # raw NumPy vector of a word
Explanation: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis:
End of explanation
# instantiating and training the Word2Vec model
model_with_loss = gensim.models.Word2Vec(sentences, min_count=1, compute_loss=True, hs=0, sg=1, seed=42)
# getting the training loss value
training_loss = model_with_loss.get_latest_training_loss()
print(training_loss)
Explanation: …or en-masse as a 2D NumPy matrix from model.wv.syn0.
Training Loss Computation
The parameter compute_loss can be used to toggle computation of loss while training the Word2Vec model. The computed loss is stored in the model attribute running_training_loss and can be retrieved using the function get_latest_training_loss as follows :
End of explanation
input_data_files = []
def setup_input_data():
# check if test data already present
if os.path.isfile('./text8') is False:
# download and decompress 'text8' corpus
import zipfile
! wget 'http://mattmahoney.net/dc/text8.zip'
! unzip 'text8.zip'
# create 1 MB, 10 MB and 50 MB files
! head -c1000000 text8 > text8_1000000
! head -c10000000 text8 > text8_10000000
! head -c50000000 text8 > text8_50000000
# add 25 KB test file
input_data_files.append(os.path.join(os.getcwd(), '../../gensim/test/test_data/lee_background.cor'))
# add 1 MB test file
input_data_files.append(os.path.join(os.getcwd(), 'text8_1000000'))
# add 10 MB test file
input_data_files.append(os.path.join(os.getcwd(), 'text8_10000000'))
# add 50 MB test file
input_data_files.append(os.path.join(os.getcwd(), 'text8_50000000'))
# add 100 MB test file
input_data_files.append(os.path.join(os.getcwd(), 'text8'))
setup_input_data()
print(input_data_files)
Explanation: Benchmarks to see effect of training loss compuation code on training time
We first download and setup the test data used for getting the benchmarks.
End of explanation
# using 25 KB and 50 MB files only for generating o/p -> comment next line for using all 5 test files
input_data_files = [input_data_files[0], input_data_files[-2]]
print(input_data_files)
import time
import numpy as np
import pandas as pd
train_time_values = []
seed_val = 42
sg_values = [0, 1]
hs_values = [0, 1]
for data_file in input_data_files:
data = gensim.models.word2vec.LineSentence(data_file)
for sg_val in sg_values:
for hs_val in hs_values:
for loss_flag in [True, False]:
time_taken_list = []
for i in range(3):
start_time = time.time()
w2v_model = gensim.models.Word2Vec(data, compute_loss=loss_flag, sg=sg_val, hs=hs_val, seed=seed_val)
time_taken_list.append(time.time() - start_time)
time_taken_list = np.array(time_taken_list)
time_mean = np.mean(time_taken_list)
time_std = np.std(time_taken_list)
train_time_values.append({'train_data': data_file, 'compute_loss': loss_flag, 'sg': sg_val, 'hs': hs_val, 'mean': time_mean, 'std': time_std})
train_times_table = pd.DataFrame(train_time_values)
train_times_table = train_times_table.sort_values(by=['train_data', 'sg', 'hs', 'compute_loss'], ascending=[False, False, True, False])
print(train_times_table)
Explanation: We now compare the training time taken for different combinations of input data and model training parameters like hs and sg.
End of explanation |
9,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing The Paradise Papers With Python And Neo4j
Connect to Neo4j from Python
Create some Pandas Dataframes from Cypher queries
Matplotlib visualizations from Dataframe
Bokeh chord diagram from Dataframe
Step1: Connect to Neo4j from Python
Step2: Chord diagram with bokeh | Python Code:
# !pip install neo4j-driver
# !pip install pandas
# !pip install bokeh
from neo4j.v1 import GraphDatabase
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
plt.figure(dpi=300)
Explanation: Visualizing The Paradise Papers With Python And Neo4j
Connect to Neo4j from Python
Create some Pandas Dataframes from Cypher queries
Matplotlib visualizations from Dataframe
Bokeh chord diagram from Dataframe
End of explanation
# don't worry this is a read only user ;-)
driver = GraphDatabase.driver("bolt://165.227.223.190:7687", auth=("ppviz", "ppviz"))
with driver.session() as session:
results = session.run('''
MATCH (e:Entity)
WITH e.jurisdiction_description AS juris, COUNT(*) AS count
WHERE count > 20
RETURN *
ORDER BY count ASC
''')
with driver.session() as session:
results = session.run('''
match (n) return n.sourceID, labels(n), count(*) as c order by n.sourceID, c desc
''')
df = pd.DataFrame([dict(zip(r.keys(), r.values())) for r in results])
df
df.plot.bar(x="labels(n)")
df = pd.DataFrame([dict(zip(r.keys(), r.values())) for r in results])
df
ax = df.plot.bar(x="juris")
ax.set_xlabel("Entity Jurisdiction")
ax.set_ylabel("Number of entities")
ax.set_title("Legal Entity Count By Jurisdiction ")
plt.gcf().subplots_adjust(bottom=0.45)
plt.savefig("entity_count", dpi=300, bbox="tight")
Explanation: Connect to Neo4j from Python
End of explanation
from neo4j.v1 import GraphDatabase
import pandas as pd
with driver.session() as session:
results = session.run('''
MATCH (a:Address)<-[:REGISTERED_ADDRESS]-(o:Officer)--(e:Entity)
WITH a.countries AS officer_country, e.jurisdiction_description AS juris,
COUNT(*) AS num
WHERE officer_country <> juris AND num > 1000
RETURN * ORDER BY num DESC
''')
df = pd.DataFrame([dict(zip(r.keys(), r.values())) for r in results])
df[:5]
from bokeh.charts import Chord
from bokeh.io import show, output_file
#df = df[df["num"] > 1000]
juris_chord = Chord(df, source="officer_country", target="juris", value="num")
output_file('juris_chord.html')
show(juris_chord)
Explanation: Chord diagram with bokeh
End of explanation |
9,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MLP for CIFAR10
Multi-Layer Perceptron (MLP) is a simple neural network model that can be used for classification tasks.
In this demo, we will train a 3-layer MLP on the CIFAR10 dataset. We will illustrate 2 MLP implementations.
Let us first import the required modules.
Step1: MLP using PyTorch nn.Linear
The most straightforward way to implement an MLP is to use the nn.Linear module. In the following code, we implement a 3-layer MLP with GELU activation function. The GELU can be replaced by other activation functions such as RELU.
Please take note of the correct sizes. fc1's input size is n_features, which is the size of the flattened input x. fc1's output size is n_hidden, which then becomes the input size of fc2. In other words, all input/output sizes up to fc3 fit together perfectly.
Step2: MLP implementation using Tensors
In this case, we illustrate how to implement the formula of an MLP layer using weights and biases. Note that if we remove the initialization of the weights and biases, the model will not converge. In the previous example, Linear automatically performs the weights and biases initialization.
Step3: PyTorch Lightning Module for MLP
This is the PL module so we can easily change the implementation of the MLP and compare the results. More detailed results can be found on the wandb.ai page.
Using model parameter, we can easily switch between the two MLP implementations shown above. We also benchmark the result using a ResNet18 model. The rest of the code is similar to our PL module example for MNIST.
Step4: Arguments
Please change the --model argument to switch between the different models to be used as CIFAR10 classifier.
Step5: Weights and Biases Callback
The callback logs train and validation metrics to wandb. It also logs sample predictions. This is similar to our WandbCallback example for MNIST.
Step6: Training and Validation of Different Models
The validation accuracy of both MLP model implementations is almost the same, at ~53%. This shows that the 2 MLP implementations are essentially equivalent.
Meanwhile the ResNet18 model has accuracy of ~78%. The MLP model has still a long way to go. | Python Code:
import torch
import torchvision
import wandb
import math
from torch import nn
from einops import rearrange
from argparse import ArgumentParser
from pytorch_lightning import LightningModule, Trainer, Callback
from pytorch_lightning.loggers import WandbLogger
from torchmetrics.functional import accuracy
from torch.optim import SGD, Adam
from torch.optim.lr_scheduler import CosineAnnealingLR
Explanation: MLP for CIFAR10
Multi-Layer Perceptron (MLP) is a simple neural network model that can be used for classification tasks.
In this demo, we will train a 3-layer MLP on the CIFAR10 dataset. We will illustrate 2 MLP implementations.
Let us first import the required modules.
End of explanation
class SimpleMLP(nn.Module):
def __init__(self, n_features=3*32*32, n_hidden=512, num_classes=10):
super().__init__()
# the 3 Linear layers of the MLP
self.fc1 = nn.Linear(n_features, n_hidden)
self.fc2 = nn.Linear(n_hidden, n_hidden)
self.fc3 = nn.Linear(n_hidden, num_classes)
def forward(self, x):
# flatten x - (batch_size, 3, 32, 32) -> (batch_size, 3*32*32)
# ascii art for the case that x is 1 x 2 x 2 (channel, height, width)
# --------- -----------------
# | 1 | 2 | ---> | 1 | 2 | 3 | 4 |
# | 3 | 4 | -----------------
# ---------
# we can use any of the following methods to flatten the tensor
#y = torch.flatten(x, 1)
#y = x.view(x.size(0), -1)
# but this is the most intuitive since it shows the actual flattening
y = rearrange(x, 'b c h w -> b (c h w)')
y = nn.GELU()(self.fc1(y))
y = nn.GELU()(self.fc2(y))
y = self.fc3(y)
return y
# we dont need to compute softmax since it is already
# built into the CE loss function in PyTorch
#return F.log_softmax(y, dim=1)
Explanation: MLP using PyTorch nn.Linear
The most straightforward way to implement an MLP is to use the nn.Linear module. In the following code, we implement a 3-layer MLP with GELU activation function. The GELU can be replaced by other activation functions such as RELU.
Please take note of the correct sizes. fc1's input size is n_features, which is the size of the flattened input x. fc1's output size is n_hidden, which then becomes the input size of fc2. In other words, all input/output sizes up to fc3 fit together perfectly.
End of explanation
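A quick shape sanity check for the model above (illustrative sketch):
_mlp = SimpleMLP()
print(_mlp(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4, 10]) -- one logit vector per image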
class TensorMLP(nn.Module):
def __init__(self, n_features=3*32*32, n_hidden=512, num_classes=10):
super().__init__()
# weights and biases for layer 1
self.w1 = nn.Parameter(torch.empty((n_hidden, n_features)))
self.b1 = nn.Parameter(torch.empty((n_hidden,)))
# weights and biases for layer 2
self.w2 = nn.Parameter(torch.empty((n_hidden, n_hidden)))
self.b2 = nn.Parameter(torch.empty((n_hidden,)))
# weights and biases for layer 3
self.w3 = nn.Parameter(torch.empty((num_classes, n_hidden)))
self.b3 = nn.Parameter(torch.empty((num_classes,)))
# initialize parameters manually bec we implemented the linear layer manually
self.reset_parameters()
def reset_parameters(self):
# we use Kaiming initializer for weights
nn.init.kaiming_uniform_(self.w1, a=math.sqrt(5))
# zero for biases
nn.init.constant_(self.b1, 0)
nn.init.kaiming_uniform_(self.w2, a=math.sqrt(5))
nn.init.constant_(self.b2, 0)
nn.init.kaiming_uniform_(self.w3, a=math.sqrt(5))
nn.init.constant_(self.b3, 0)
def forward(self, x):
# flatten
y = rearrange(x, 'b c h w -> b (c h w)')
# we manually compute the output of each layer
y = y @ self.w1.T + self.b1
y = nn.GELU()(y)
y = y @ self.w2.T + self.b2
y = nn.GELU()(y)
y = y @ self.w3.T + self.b3
return y
Explanation: MLP implementation using Tensors
In this case, we illustrate how to implement the MLP layer formula directly with weight and bias tensors. Note that if we remove the initialization of the weights and biases, the model will not converge. In the previous example, nn.Linear performs the weight and bias initialization automatically.
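As a small illustrative check (a short sketch with arbitrary layer sizes, not part of the original notebook), the manual formula matches nn.Linear exactly:
# hypothetical equivalence check between nn.Linear and the manual y = x @ W.T + b
lin = nn.Linear(4, 3)
x_check = torch.randn(2, 4)
manual = x_check @ lin.weight.T + lin.bias
print(torch.allclose(manual, lin(x_check)))  # expected: True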
End of explanation
class LitCIFAR10Model(LightningModule):
def __init__(self, num_classes=10, lr=0.001, batch_size=64,
num_workers=4, max_epochs=30,
model=SimpleMLP):
super().__init__()
self.save_hyperparameters()
self.model = model(num_classes=num_classes)
self.loss = nn.CrossEntropyLoss()
def forward(self, x):
return self.model(x)
# this is called during fit()
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss(y_hat, y)
return {"loss": loss}
# calls to self.log() are recorded in wandb
def training_epoch_end(self, outputs):
avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
self.log("train_loss", avg_loss, on_epoch=True)
# this is called at the end of an epoch
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss(y_hat, y)
acc = accuracy(y_hat, y) * 100.
# we use y_hat to display predictions during callback
return {"y_hat": y_hat, "test_loss": loss, "test_acc": acc}
# this is called at the end of all epochs
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
avg_acc = torch.stack([x["test_acc"] for x in outputs]).mean()
self.log("test_loss", avg_loss, on_epoch=True, prog_bar=True)
self.log("test_acc", avg_acc, on_epoch=True, prog_bar=True)
# validation is the same as test
def validation_step(self, batch, batch_idx):
return self.test_step(batch, batch_idx)
def validation_epoch_end(self, outputs):
return self.test_epoch_end(outputs)
# we use Adam optimizer
def configure_optimizers(self):
optimizer = Adam(self.parameters(), lr=self.hparams.lr)
# this decays the learning rate to 0 after max_epochs using cosine annealing
scheduler = CosineAnnealingLR(optimizer, T_max=self.hparams.max_epochs)
return [optimizer], [scheduler]
    # this is called after model instantiation to initialize the datasets and dataloaders
def setup(self, stage=None):
self.train_dataloader()
self.test_dataloader()
    # build train and test dataloaders for the CIFAR10 dataset
# we use simple ToTensor transform
def train_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10(
"./data", train=True, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=True,
num_workers=self.hparams.num_workers,
pin_memory=True,
)
def test_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10(
"./data", train=False, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=False,
num_workers=self.hparams.num_workers,
pin_memory=True,
)
def val_dataloader(self):
return self.test_dataloader()
Explanation: PyTorch Lightning Module for MLP
This is the PyTorch Lightning (PL) module, which lets us easily swap the MLP implementation and compare the results. More detailed results can be found on the wandb.ai page.
Using the model parameter, we can switch between the two MLP implementations shown above. We also benchmark the results against a ResNet18 model. The rest of the code is similar to our PL module example for MNIST.
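For example (illustrative only, not part of the original notebook), the tensor-based implementation can be plugged in by passing the class to the constructor:
# hypothetical: build the LightningModule around TensorMLP instead of SimpleMLP
alt_model = LitCIFAR10Model(model=TensorMLP)
print(alt_model.model.__class__.__name__)   # TensorMLP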
End of explanation
def get_args():
    parser = ArgumentParser(description="PyTorch Lightning CIFAR10 Example")
parser.add_argument("--max-epochs", type=int, default=30, help="num epochs")
parser.add_argument("--batch-size", type=int, default=64, help="batch size")
parser.add_argument("--lr", type=float, default=0.001, help="learning rate")
parser.add_argument("--num-classes", type=int, default=10, help="num classes")
parser.add_argument("--devices", default=1)
parser.add_argument("--accelerator", default='gpu')
parser.add_argument("--num-workers", type=int, default=4, help="num workers")
#parser.add_argument("--model", default=torchvision.models.resnet18)
#parser.add_argument("--model", default=TensorMLP)
parser.add_argument("--model", default=SimpleMLP)
args = parser.parse_args("")
return args
Explanation: Arguments
Please change the --model argument to switch between the different models to be used as the CIFAR10 classifier.
End of explanation
class WandbCallback(Callback):
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
# process first 10 images of the first batch
if batch_idx == 0:
label_human = ["airplane", "automobile", "bird", "cat",
"deer", "dog", "frog", "horse", "ship", "truck"]
n = 10
x, y = batch
outputs = outputs["y_hat"]
outputs = torch.argmax(outputs, dim=1)
# log image, ground truth and prediction on wandb table
columns = ['image', 'ground truth', 'prediction']
data = [[wandb.Image(x_i), label_human[y_i], label_human[y_pred]] for x_i, y_i, y_pred in list(
zip(x[:n], y[:n], outputs[:n]))]
wandb_logger.log_table(
key=pl_module.model.__class__.__name__,
columns=columns,
data=data)
Explanation: Weights and Biases Callback
The callback logs train and validation metrics to wandb. It also logs sample predictions. This is similar to our WandbCallback example for MNIST.
End of explanation
if __name__ == "__main__":
args = get_args()
model = LitCIFAR10Model(num_classes=args.num_classes,
lr=args.lr, batch_size=args.batch_size,
num_workers=args.num_workers,
model=args.model,)
model.setup()
# printing the model is useful for debugging
print(model)
print(model.model.__class__.__name__)
# wandb is a great way to debug and visualize this model
wandb_logger = WandbLogger(project="mlp-cifar")
trainer = Trainer(accelerator=args.accelerator,
devices=args.devices,
max_epochs=args.max_epochs,
logger=wandb_logger,
callbacks=[WandbCallback()])
trainer.fit(model)
trainer.test(model)
wandb.finish()
Explanation: Training and Validation of Different Models
The validation accuracy of both MLP model implementations is almost the same at ~53%, which shows that the two implementations are effectively equivalent.
Meanwhile, the ResNet18 model reaches an accuracy of ~78%, so the MLP still has a long way to go.
End of explanation |
9,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Thermal equilibrium of interacting dimer
In this notebook we simulate the thermal equilibrium (Boltzmann distribution) of two interacting magnetic nanoparticles (a dimer), coupled by dipolar interactions. The equilibrium distribution is computed with three different approaches
- Analytic solution is computed by hand (easy for two aligned particles)
- Markov chain Monte-Carlo (MCMC) sampling of the energy functions using PyMC3
- Simulating the stochastic Landau-Lifshitz-Gilbert equation until equilibrium using magpy
Setup
Two particles are aligned along their anisotropy axes at a distance of $R$. They have identical volume $V$, anisotropy constant $K$, saturation magnetisation $M_s$. The temperature of the environment is $T$. The angle of the magnetic moment to the anisotropy axis is $\theta_1,\theta_2$ for particle 1 and 2 respectively.
Analytic solution
The solid angle of the magnetisation angles follow a Boltzmann distribution, such that the angle $\theta$ is distributed
Step1: Individual energy terms
Step2: The unnormalised probability of state $\theta_1\theta_2$
Step3: 2-dimensional Boltzmann distribution
Step4: Analytical results
We show resulting calculations for the following three cases
Step5: Energy landscape
The plots below show the energy associated with each point of the 2-dimensional phase space (i.e. one angle per particle). Low energy (purple) states are energetically favourable, whereas the high energy states (yellow) are not.
Step6: Probability state-space
The probability of each point in the phase space is directly related to the energy through the Boltzmann distribution (above). The three different cases are described
Step7: Markov-chain Monte-Carlo (MCMC)
Computing the analytic solutions required the partition function $Z$ to be computed, which was achieved with numerical integration. For more than 2 particles this becomes extremely difficult. A more efficient approach is to use MCMC. In MCMC we randomly choose a system state and add it to a histogram depending on how energetically favourable the state is. The more states we try, the closer our histogram becomes to the true distribution.
We use PyMC3 for MCMC sampling. The model is simple
Step8: Setting up the PyMC3 model is simple! Specify the priors and the energy function using pm.Potential
Step9: Choose the NUTS algorithm as our MCMC step and request a large number of random samples. These are returned in the trace variable.
Step10: Compare results
The trace contains a large number of samples of the system states. The distribution (histogram) over this large sample of states should be close to the true distribution. We can compare the 2D histogram of the system angles to the analytic solution we computed previously.
Step11: Marginalisation over one variable
The results look good! We can also check that the marginal distributions of the two angles are correct. The marginal distributions are also easier to check by eye.
$$p(\theta_1) = \int p(\theta_1, \theta_2) \mathrm{d}\theta_2$$
$$p(\theta_2) = \int p(\theta_1, \theta_2) \mathrm{d}\theta_1$$
Step12: Langevin dynamics simulations (sLLG)
Simulate the stochastic Landau-Lifshitz-Gilbert equation using MagPy.
By generating an ensemble of trajectories until thermal equilibration we can approximate the equilibrium distribution.
Non-interacting case
Step13: Negligible interactions case
Step14: Weakly interacting case
Step15: Strongly interacting case
Step16: High noise | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
%matplotlib inline
Explanation: Thermal equilibrium of interacting dimer
In this notebook we simulate the thermal equilibrium (Boltzmann distribution) of two interacting magnetic nanoparticles (a dimer), coupled by dipolar interactions. The equilibrium distribution is computed with three different approaches:
- Analytic solution is computed by hand (easy for two aligned particles)
- Markov chain Monte-Carlo (MCMC) sampling of the energy functions using PyMC3
- Simulating the stochastic Landau-Lifshitz-Gilbert equation until equilibrium using magpy
Setup
Two particles are aligned along their anisotropy axes at a distance of $R$. They have identical volume $V$, anisotropy constant $K$, saturation magnetisation $M_s$. The temperature of the environment is $T$. The angle of the magnetic moment to the anisotropy axis is $\theta_1,\theta_2$ for particle 1 and 2 respectively.
Analytic solution
The solid angle of the magnetisation angles follow a Boltzmann distribution, such that the angle $\theta$ is distributed:
$$p\left(\theta_1,\theta_2\right) = \frac{\sin(\theta_1)\sin(\theta_2)e^{-E\left(\theta_1,\theta_2\right)/\left(K_BT\right)}}{Z}$$
where
$$\frac{E\left(\theta_1,\theta_2\right)}{K_BT}=\sigma\left(\cos^2\theta_1+\cos^2\theta_2\right)
-\nu\left(3\cos\theta_1\cos\theta_2 - \cos\left(\theta_1-\theta_2\right)\right)$$
$$\sigma=\frac{KV}{K_BT}$$
$$\nu=\frac{\mu_0V^2M_s^2}{2\pi R^3K_BT}$$
$\sigma,\nu$ are the normalised anisotropy and interaction strength respectively.
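As a quick numerical illustration (a hedged sketch with example parameter values chosen here, not taken from the notebook), the reduced parameters can be evaluated directly from these definitions:
# hypothetical example: evaluate sigma and nu for one set of particle parameters
kb = 1.38064852e-23                 # Boltzmann constant (J/K)
K, r, Ms, R, T = 1e5, 8e-9, 400e3, 3e-8, 330.0
V = 4.0/3.0*np.pi*r**3
sigma_example = K*V/(kb*T)
nu_example = (4e-7*np.pi)*V**2*Ms**2/(2.0*np.pi*R**3*kb*T)   # mu0 = 4*pi*1e-7
print(sigma_example, nu_example)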
End of explanation
def dd(t1, t2, nu):
return -nu*(3*np.cos(t1)*np.cos(t2) - np.cos(t1-t2))
def anis(t1, t2, sigma):
return sigma*(np.sin(t1)**2 + np.sin(t2)**2)
def tot(t1, t2, nu, sigma):
return dd(t1, t2, nu) + anis(t1, t2, sigma)
Explanation: Individual energy terms
End of explanation
def p_unorm(t1,t2,nu,sigma):
return np.sin(t1)*np.sin(t2)*np.exp(-tot(t1,t2,nu,sigma))
Explanation: The unnormalised probability of state $\theta_1\theta_2$
End of explanation
from scipy.integrate import dblquad
def boltz_2d(ts, nu, sigma):
e = np.array([[p_unorm(t1,t2,nu,sigma) for t1 in ts] for t2 in ts])
Z = dblquad(lambda t1,t2: p_unorm(t1,t2,nu,sigma),
0, ts[-1], lambda x: 0, lambda x: ts[-1])[0]
return e/Z
Explanation: 2-dimensional Boltzmann distribution
End of explanation
nus = [0, 0.3, 0.3]
sigmas = [2.0, 2.0, 0.5]
Explanation: Analytical results
We show resulting calculations for the following three cases:
- No interactions
- Strong interactions
- Strong interactions and weak anisotropy
End of explanation
ts = np.linspace(0, np.pi, 100)
fg = plt.figure(figsize=(10,4))
axs = ImageGrid(
fg, 111, nrows_ncols=(1,3), axes_pad=0.15,
share_all=True,cbar_location="right",
cbar_mode="single",cbar_size="7%",
cbar_pad=0.15,
)
for nu, sigma, ax in zip(nus, sigmas, axs):
e = [[tot(t1, t2, nu, sigma) for t1 in ts] for t2 in ts]
cf=ax.contourf(ts, ts, e)
ax.set_xlabel('$\\theta_1$'); ax.set_ylabel('$\\theta_2$')
ax.set_aspect('equal')
axs[0].set_title('No interactions')
axs[1].set_title('Strong interactions')
axs[2].set_title('Weak anisotropy')
ax.cax.colorbar(cf) # fix color bar
Explanation: Energy landscape
The plots below show the energy associated with each point of the 2-dimensional phase space (i.e. one angle per particle). Low energy (purple) states are energetically favourable, whereas the high energy states (yellow) are not.
End of explanation
ts = np.linspace(0, np.pi, 100)
fg = plt.figure(figsize=(10,4))
axs = ImageGrid(
fg, 111, nrows_ncols=(1,3), axes_pad=0.15,
share_all=True,cbar_location="right",
cbar_mode="single",cbar_size="7%",
cbar_pad=0.15,
)
for nu, sigma, ax in zip(nus, sigmas, axs):
b = boltz_2d(ts, nu, sigma)
cf=ax.contourf(ts, ts, b)
ax.set_xlabel('$\\theta_1$'); ax.set_ylabel('$\\theta_2$')
ax.set_aspect('equal')
axs[0].set_title('No interactions')
axs[1].set_title('Strong interactions')
axs[2].set_title('Weak anisotropy')
ax.cax.colorbar(cf) # fix color bar
Explanation: Probability state-space
The probability of each point in the phase space is directly related to the energy through the Boltzmann distribution. The three different cases are described:
- With no interactions, the probability space is symetrical since both variables are identically distibuted and independent of one another. Both particle are much more likely to be found close to the anisotropy axis (either parallel or antiparallel).
- With interactions, the symmetry is broken and the particles prefer to be aligned with the anisotropy axis and one another. This increases the probability of the system being in aligned states around the axes rather.
- With weak anisotropy, the particles prefer to be aligned with one another but are less likely to be found near the anisotropy axis.
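As a quick consistency check (a small optional sketch using the helpers defined above), the normalised density should integrate to approximately one over the full domain:
# hypothetical check: integrate the normalised Boltzmann density over the grid
total = np.trapz(np.trapz(boltz_2d(ts, 0.3, 2.0), ts, axis=1), ts)
print(total)   # expected to be close to 1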
End of explanation
import pymc3 as pm
Explanation: Markov-chain Monte-Carlo (MCMC)
Computing the analytic solutions required the partition function $Z$ to be computed, which was achieved with numerical integration. For more than 2 particles this becomes extremely difficult. A more efficient approach is to use MCMC. In MCMC we randomly choose a system state and add it to a histogram depending on how energetically favourable the state is. The more states we try, the closer our histogram becomes to the true distribution.
We use PyMC3 for MCMC sampling. The model is simple:
We sample uniformly on the unit sphere (i.e. draw solid angles uniformly), converting into the angle $\theta_1$ and $\theta_2$
We compute the energy for both particles
We pass this energy to PyMC3
PyMC3 uses the NUTS algorithm to efficiently sample trial states with a high acceptance rate.
End of explanation
nu = 0.3
sigma = 1.5
with pm.Model() as model:
z1 = pm.Uniform('z1', -1, 1)
theta1 = pm.Deterministic('theta1', np.arccos(z1))
z2 = pm.Uniform('z2', -1, 1)
theta2 = pm.Deterministic('theta2', np.arccos(z2))
energy = tot(theta1, theta2, nu, sigma)
like = pm.Potential('energy', -energy)
Explanation: Setting up the PyMC3 model is simple! Specify the priors and the energy function using pm.Potential
End of explanation
with model:
step = pm.NUTS()
trace = pm.sample(100000, step=step)
Explanation: Choose the NUTS algorithm as our MCMC step and request a large number of random samples. These are returned in the trace variable.
End of explanation
b = boltz_2d(ts, nu, sigma)
plt.hist2d(trace['theta1'], trace['theta2'], bins=70, normed=True)
plt.contour(ts, ts, b, cmap='Greys')
plt.gca().set_aspect('equal')
plt.xlabel('$\\theta_1$'); plt.ylabel('$\\theta_2$');
Explanation: Compare results
The trace contains a large number of samples of the system states. The distribution (histogram) over this large sample of states should be close to the true distribution. We can compare the 2D histogram of the system angles to the analytic solution we computed previously.
End of explanation
from scipy.integrate import trapz
b_marginal = -b.sum(axis=0) / trapz(ts, b.sum(axis=0))
fg, axs = plt.subplots(ncols=2, figsize=(9,3))
for i in range(2):
axs[i].hist(trace['theta{}'.format(i+1)], bins=100, normed=True)
axs[i].plot(ts, b_marginal, lw=2)
axs[i].set_xlabel('$\\theta_{}$'.format(i+1))
axs[i].set_ylabel('$p(\\theta_{})$'.format(i+1))
plt.suptitle('Marginal distributions', fontsize=18)
Explanation: Marginalisation over one variable
The results look good! We can also check that the marginal distributions of the two angles are correct. The marginal distributions are also easier to check by eye.
$$p(\theta_1) = \int p(\theta_1, \theta_2) \mathrm{d}\theta_2$$
$$p(\theta_2) = \int p(\theta_1, \theta_2) \mathrm{d}\theta_1$$
End of explanation
import magpy as mp
K = 1e5
r = 8e-9
T = 330
Ms=400e3
R=1.5e-3
kdir = [0, 0, 1]
location1 = np.array([0, 0, 0], dtype=np.float)
location2 = np.array([0, 0, R], dtype=np.float)
direction = np.array([0, 0, 1], dtype=np.float)
alpha = 1.0
V = 4./3*np.pi*r**3
sigma = K*V/mp.get_KB()/T
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
model = mp.Model(
anisotropy=np.array([K, K], dtype=np.float),
anisotropy_axis=np.array([kdir, kdir], dtype=np.float),
damping=alpha,
location=np.array([location1, location2], dtype=np.float),
magnetisation=Ms,
magnetisation_direction=np.array([direction, direction], dtype=np.float),
radius=np.array([r, r], dtype=np.float),
temperature=T
)
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-12,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=False)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi/2, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = -b.sum(axis=0) / trapz(ts, b.sum(axis=0))
b_noint = b_marginal
plt.hist(theta, bins=50, normed=True)
plt.plot(ts[ts<theta.max()], b_marginal[ts<theta.max()])
Explanation: Langevin dynamics simulations (sLLG)
Simulate the stochastic Landau-Lifshitz-Gilbert equation using MagPy.
By generating an ensemble of trajectories until thermal equilibration we can approximate the equilibrium distribution.
Non-interacting case
End of explanation
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-12,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=True)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi/2, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = -b.sum(axis=0) / trapz(ts, b.sum(axis=0))
plt.hist(theta, bins=50, normed=True)
plt.plot(ts[ts<theta.max()], b_marginal[ts<theta.max()])
plt.plot(ts[ts<theta.max()], b_noint[ts<theta.max()])
Explanation: Negligible interactions case
End of explanation
R=1e-7
location1 = np.array([0, 0, 0], dtype=np.float)
location2 = np.array([0, 0, R], dtype=np.float)
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
model = mp.Model(
anisotropy=np.array([K, K], dtype=np.float),
anisotropy_axis=np.array([kdir, kdir], dtype=np.float),
damping=alpha,
location=np.array([location1, location2], dtype=np.float),
magnetisation=Ms,
magnetisation_direction=np.array([direction, direction], dtype=np.float),
radius=np.array([r, r], dtype=np.float),
temperature=T
)
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-12,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=True)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi/2, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = -b.sum(axis=0) / trapz(ts, b.sum(axis=0))
plt.hist(theta, bins=50, normed=True)
plt.plot(ts[ts<theta.max()], b_marginal[ts<theta.max()])
plt.plot(ts[ts<theta.max()], b_noint[ts<theta.max()])
Explanation: Weakly interacting case
End of explanation
R=0.6e-8
location1 = np.array([0, 0, 0], dtype=np.float)
location2 = np.array([0, 0, R], dtype=np.float)
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
model = mp.Model(
anisotropy=np.array([K, K], dtype=np.float),
anisotropy_axis=np.array([kdir, kdir], dtype=np.float),
damping=alpha,
location=np.array([location1, location2], dtype=np.float),
magnetisation=Ms,
magnetisation_direction=np.array([direction, direction], dtype=np.float),
radius=np.array([r, r], dtype=np.float),
temperature=T
)
res = model.simulate_ensemble(end_time=10e-9, time_step=1e-12,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=True)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi/2, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = -b.sum(axis=0) / trapz(ts, b.sum(axis=0))
plt.hist(theta, bins=50, normed=True, label='simulation')
plt.plot(ts[ts<theta.max()], b_marginal[ts<theta.max()], label='analytic')
plt.plot(ts[ts<theta.max()], b_noint[ts<theta.max()], label='analytic ($\\nu=0$)')
plt.legend()
Explanation: Strongly interacting case
End of explanation
r = 1e-9
V = 4./3*np.pi*r**3
sigma = K*V/mp.get_KB()/T
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
model = mp.Model(
anisotropy=np.array([K, K], dtype=np.float),
anisotropy_axis=np.array([kdir, kdir], dtype=np.float),
damping=alpha,
location=np.array([location1, location2], dtype=np.float),
magnetisation=Ms,
magnetisation_direction=np.array([direction, direction], dtype=np.float),
radius=np.array([r, r], dtype=np.float),
temperature=T
)
res = model.simulate(end_time=1e-9, time_step=1e-13, max_samples=1000, seed=1001,
implicit_solve=True, interactions=True)
res.plot();
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-13,
max_samples=500, seeds=range(5000),
n_jobs=8, implicit_solve=True,
interactions=True)
m_z = [state['z'][0] for state in res.final_state()]
theta = np.arccos(m_z)
ts = np.linspace(0, np.pi, 100)
b = boltz_2d(ts, nu, sigma)
b_marginal = -b.sum(axis=0) / trapz(ts, b.sum(axis=0))
plt.hist(theta, bins=50, normed=True)
plt.plot(ts, b_marginal)
Explanation: High noise
End of explanation |
9,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create Fake Index Data
Step1: Build and run ERC Strategy
You can read more about ERC here.
http | Python Code:
mean = np.array([0.05/252 + 0.02/252, 0.03/252 + 0.02/252])
volatility = np.array([0.2/np.sqrt(252), 0.05/np.sqrt(252)])
variance = np.power(volatility,2)
correlation = np.array(
[
[1, 0.25],
[0.25,1]
]
)
covariance = np.zeros((2,2))
for i in range(len(variance)):
for j in range(len(variance)):
covariance[i,j] = correlation[i,j]*volatility[i]*volatility[j]
covariance
names = ['foo','bar','rf']
dates = pd.date_range(start='2015-01-01',end='2018-12-31', freq=pd.tseries.offsets.BDay())
n = len(dates)
rdf = pd.DataFrame(
np.zeros((n, len(names))),
index = dates,
columns = names
)
np.random.seed(1)
rdf.loc[:,['foo','bar']] = np.random.multivariate_normal(mean,covariance,size=n)
rdf['rf'] = 0.02/252
pdf = 100*np.cumprod(1+rdf)
pdf.plot()
Explanation: Create Fake Index Data
End of explanation
runAfterDaysAlgo = bt.algos.RunAfterDays(
20*6 + 1
)
selectTheseAlgo = bt.algos.SelectThese(['foo','bar'])
# algo to set the weights so each asset contributes the same amount of risk
# with data over the last 6 months excluding yesterday
weighERCAlgo = bt.algos.WeighERC(
lookback=pd.DateOffset(days=20*6),
covar_method='standard',
risk_parity_method='slsqp',
maximum_iterations=1000,
tolerance=1e-9,
lag=pd.DateOffset(days=1)
)
rebalAlgo = bt.algos.Rebalance()
strat = bt.Strategy(
'ERC',
[
runAfterDaysAlgo,
selectTheseAlgo,
weighERCAlgo,
rebalAlgo
]
)
backtest = bt.Backtest(
strat,
pdf,
integer_positions=False
)
res_target = bt.run(backtest)
res_target.get_security_weights().plot()
res_target.prices.plot()
weights_target = res_target.get_security_weights().copy()
rolling_cov_target = pdf.loc[:,weights_target.columns].pct_change().rolling(window=252).cov()*252
trc_target = pd.DataFrame(
np.nan,
index = weights_target.index,
columns = weights_target.columns
)
for dt in pdf.index:
trc_target.loc[dt,:] = weights_target.loc[dt,:].values*(rolling_cov_target.loc[dt,:].values@weights_target.loc[dt,:].values)/np.sqrt(weights_target.loc[dt,:].values@rolling_cov_target.loc[dt,:].values@weights_target.loc[dt,:].values)
fig, ax = plt.subplots(nrows=1,ncols=1)
trc_target.plot(ax=ax)
ax.set_title('Total Risk Contribution')
ax.plot()
Explanation: Build and run ERC Strategy
You can read more about ERC here.
http://thierry-roncalli.com/download/erc.pdf
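In short, ERC solves for weights where each asset contributes the same share of total portfolio risk. A tiny numeric illustration (a hedged sketch with made-up numbers, independent of the bt API; numpy is assumed available as np, as in the cells above):
# hypothetical two-asset example: total risk contributions for given weights
cov = np.array([[0.04, 0.002], [0.002, 0.0025]])
w = np.array([0.2, 0.8])
port_vol = np.sqrt(w @ cov @ w)
trc = w * (cov @ w) / port_vol
print(trc, trc.sum())   # the contributions sum to the portfolio volatility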
End of explanation |
9,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: 유니코드 문자열
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: tf.string 데이터 타입
텐서플로의 기본 tf.string dtype은 바이트 문자열로 이루어진 텐서를 만듭니다. 유니코드 문자열은 기본적으로 utf-8로 인코딩 됩니다.
Step3: tf.string 텐서는 바이트 문자열을 최소 단위로 다루기 때문에 다양한 길이의 바이트 문자열을 다룰 수 있습니다. 문자열 길이는 텐서 차원(dimensions)에 포함되지 않습니다.
Step4: 노트
Step5: 표현 간의 변환
텐서플로는 다른 표현으로 변환하기 위한 연산을 제공합니다.
tf.strings.unicode_decode
Step6: 배치(batch) 차원
여러 개의 문자열을 디코딩 할 때 문자열마다 포함된 문자의 개수는 동일하지 않습니다. 반환되는 값은 tf.RaggedTensor로 가장 안쪽 차원의 크기가 문자열에 포함된 문자의 개수에 따라 결정됩니다.
Step7: tf.RaggedTensor를 바로 사용하거나, 패딩(padding)을 사용해 tf.Tensor로 변환하거나, tf.RaggedTensor.to_tensor 와 tf.RaggedTensor.to_sparse 메서드를 사용해 tf.SparseTensor로 변환할 수 있습니다.
Step8: 길이가 같은 여러 문자열을 인코딩할 때는 tf.Tensor를 입력으로 사용합니다.
Step9: 길이가 다른 여러 문자열을 인코딩할 때는 tf.RaggedTensor를 입력으로 사용해야 합니다.
Step10: 패딩된 텐서나 희소(sparse) 텐서는 unicode_encode를 호출하기 전에 tf.RaggedTensor로 바꿉니다.
Step11: 유니코드 연산
길이
tf.strings.length 연산은 계산해야 할 길이를 나타내는 unit 인자를 가집니다. unit의 기본 단위는 "BYTE"이지만 인코딩된 string에 포함된 유니코드 코드 포인트의 수를 파악하기 위해 "UTF8_CHAR"나 "UTF16_CHAR"같이 다른 값을 설정할 수 있습니다.
Step12: 부분 문자열
이와 유사하게 tf.strings.substr 연산은 "unit" 매개변수 값을 사용해 "pos"와 "len" 매개변수로 지정된 문자열의 종류를 결정합니다.
Step13: 유니코드 문자열 분리
tf.strings.unicode_split 연산은 유니코드 문자열의 개별 문자를 부분 문자열로 분리합니다.
Step14: 문자 바이트 오프셋
tf.strings.unicode_decode로 만든 문자 텐서를 원본 문자열과 위치를 맞추려면 각 문자의 시작 위치의 오프셋(offset)을 알아야 합니다. tf.strings.unicode_decode_with_offsets은 unicode_decode와 비슷하지만 각 문자의 시작 오프셋을 포함한 두 번째 텐서를 반환합니다.
Step15: 유니코드 스크립트
각 유니코드 코드 포인트는 스크립트(script)라 부르는 하나의 코드 포인트의 집합(collection)에 속합니다. 문자의 스크립트는 문자가 어떤 언어인지 결정하는 데 도움이 됩니다. 예를 들어, 'Б'가 키릴(Cyrillic) 스크립트라는 것을 알고 있으면 이 문자가 포함된 텍스트는 아마도 (러시아어나 우크라이나어 같은) 슬라브 언어라는 것을 알 수 있습니다.
텐서플로는 주어진 코드 포인트가 어떤 스크립트를 사용하는지 판별하기 위해 tf.strings.unicode_script 연산을 제공합니다. 스크립트 코드는 International Components for
Unicode (ICU) UScriptCode 값과 일치하는 int32 값입니다.
Step16: tf.strings.unicode_script 연산은 코드 포인트의 다차원 tf.Tensor나 tf.RaggedTensor에 적용할 수 있습니다
Step17: 예제
Step18: 먼저 문장을 문자 코드 포인트로 디코딩하고 각 문자에 대한 스크립트 식별자를 찾습니다.
Step19: 그다음 스크립트 식별자를 사용하여 단어 경계가 추가될 위치를 결정합니다. 각 문장의 시작과 이전 문자와 스크립트가 다른 문자에 단어 경계를 추가합니다.
Step20: 이 시작 오프셋을 사용하여 전체 배치에 있는 단어 리스트를 담은 RaggedTensor를 만듭니다.
Step21: 마지막으로 단어 코드 포인트 RaggedTensor를 문장으로 다시 나눕니다.
Step22: 최종 결과를 읽기 쉽게 utf-8 문자열로 다시 인코딩합니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
Explanation: 유니코드 문자열
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/unicode"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> TensorFlow.org에서 보기</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도
불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다.
이 번역에 개선할 부분이 있다면
tensorflow/docs-l10n 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.
문서 번역이나 리뷰에 참여하려면
[email protected]로
메일을 보내주시기 바랍니다.
소개
자연어 처리 모델은 종종 다른 문자 집합을 갖는 다양한 언어를 다루게 됩니다. 유니코드(unicode)는 거의 모든 언어의 문자를 표현할 수 있는 표준 인코딩 시스템입니다. 각 문자는 0부터 0x10FFFF 사이의 고유한 정수 코드 포인트(code point)를 사용해서 인코딩됩니다. 유니코드 문자열은 0개 또는 그 이상의 코드 포인트로 이루어진 시퀀스(sequence)입니다.
이 튜토리얼에서는 텐서플로(Tensorflow)에서 유니코드 문자열을 표현하고, 표준 문자열 연산의 유니코드 버전을 사용해서 유니코드 문자열을 조작하는 방법에 대해서 소개합니다. 또한 스크립트 감지(script detection)를 활용하여 유니코드 문자열을 토큰으로 분리해 보겠습니다.
End of explanation
tf.constant(u"Thanks 😊")
Explanation: tf.string 데이터 타입
텐서플로의 기본 tf.string dtype은 바이트 문자열로 이루어진 텐서를 만듭니다. 유니코드 문자열은 기본적으로 utf-8로 인코딩 됩니다.
End of explanation
tf.constant([u"You're", u"welcome!"]).shape
Explanation: tf.string 텐서는 바이트 문자열을 최소 단위로 다루기 때문에 다양한 길이의 바이트 문자열을 다룰 수 있습니다. 문자열 길이는 텐서 차원(dimensions)에 포함되지 않습니다.
End of explanation
# UTF-8로 인코딩된 string 스칼라로 표현한 유니코드 문자열입니다.
text_utf8 = tf.constant(u"语言处理")
text_utf8
# UTF-16-BE로 인코딩된 string 스칼라로 표현한 유니코드 문자열입니다.
text_utf16be = tf.constant(u"语言处理".encode("UTF-16-BE"))
text_utf16be
# 유니코드 코드 포인트의 벡터로 표현한 유니코드 문자열입니다.
text_chars = tf.constant([ord(char) for char in u"语言处理"])
text_chars
Explanation: 노트: 파이썬을 사용해 문자열을 만들 때 버전 2와 버전 3에서 유니코드를 다루는 방식이 다릅니다. 버전 2에서는 위와 같이 "u" 접두사를 사용하여 유니코드 문자열을 나타냅니다. 버전 3에서는 유니코드 인코딩된 문자열이 기본값입니다.
유니코드 표현
텐서플로에서 유니코드 문자열을 표현하기 위한 두 가지 방법이 있습니다:
string 스칼라 — 코드 포인트의 시퀀스가 알려진 문자 인코딩을 사용해 인코딩됩니다.
int32 벡터 — 위치마다 개별 코드 포인트를 포함합니다.
예를 들어, 아래의 세 가지 값이 모두 유니코드 문자열 "语言处理"(중국어로 "언어 처리"를 의미함)를 표현합니다.
End of explanation
tf.strings.unicode_decode(text_utf8,
input_encoding='UTF-8')
tf.strings.unicode_encode(text_chars,
output_encoding='UTF-8')
tf.strings.unicode_transcode(text_utf8,
input_encoding='UTF8',
output_encoding='UTF-16-BE')
Explanation: 표현 간의 변환
텐서플로는 다른 표현으로 변환하기 위한 연산을 제공합니다.
tf.strings.unicode_decode: 인코딩된 string 스칼라를 코드 포인트의 벡터로 변환합니다.
tf.strings.unicode_encode: 코드 포인트의 벡터를 인코드된 string 스칼라로 변환합니다.
tf.strings.unicode_transcode: 인코드된 string 스칼라를 다른 인코딩으로 변환합니다.
End of explanation
# UTF-8 인코딩된 문자열로 표현한 유니코드 문자열의 배치입니다.
batch_utf8 = [s.encode('UTF-8') for s in
[u'hÃllo', u'What is the weather tomorrow', u'Göödnight', u'😊']]
batch_chars_ragged = tf.strings.unicode_decode(batch_utf8,
input_encoding='UTF-8')
for sentence_chars in batch_chars_ragged.to_list():
print(sentence_chars)
Explanation: 배치(batch) 차원
여러 개의 문자열을 디코딩 할 때 문자열마다 포함된 문자의 개수는 동일하지 않습니다. 반환되는 값은 tf.RaggedTensor로 가장 안쪽 차원의 크기가 문자열에 포함된 문자의 개수에 따라 결정됩니다.
End of explanation
batch_chars_padded = batch_chars_ragged.to_tensor(default_value=-1)
print(batch_chars_padded.numpy())
batch_chars_sparse = batch_chars_ragged.to_sparse()
Explanation: tf.RaggedTensor를 바로 사용하거나, 패딩(padding)을 사용해 tf.Tensor로 변환하거나, tf.RaggedTensor.to_tensor 와 tf.RaggedTensor.to_sparse 메서드를 사용해 tf.SparseTensor로 변환할 수 있습니다.
End of explanation
tf.strings.unicode_encode([[99, 97, 116], [100, 111, 103], [ 99, 111, 119]],
output_encoding='UTF-8')
Explanation: 길이가 같은 여러 문자열을 인코딩할 때는 tf.Tensor를 입력으로 사용합니다.
End of explanation
tf.strings.unicode_encode(batch_chars_ragged, output_encoding='UTF-8')
Explanation: 길이가 다른 여러 문자열을 인코딩할 때는 tf.RaggedTensor를 입력으로 사용해야 합니다.
End of explanation
tf.strings.unicode_encode(
tf.RaggedTensor.from_sparse(batch_chars_sparse),
output_encoding='UTF-8')
tf.strings.unicode_encode(
tf.RaggedTensor.from_tensor(batch_chars_padded, padding=-1),
output_encoding='UTF-8')
Explanation: 패딩된 텐서나 희소(sparse) 텐서는 unicode_encode를 호출하기 전에 tf.RaggedTensor로 바꿉니다.
End of explanation
# UTF8에서 마지막 문자는 4바이트를 차지합니다.
thanks = u'Thanks 😊'.encode('UTF-8')
num_bytes = tf.strings.length(thanks).numpy()
num_chars = tf.strings.length(thanks, unit='UTF8_CHAR').numpy()
print('{} 바이트; {}개의 UTF-8 문자'.format(num_bytes, num_chars))
Explanation: 유니코드 연산
길이
tf.strings.length 연산은 계산해야 할 길이를 나타내는 unit 인자를 가집니다. unit의 기본 단위는 "BYTE"이지만 인코딩된 string에 포함된 유니코드 코드 포인트의 수를 파악하기 위해 "UTF8_CHAR"나 "UTF16_CHAR"같이 다른 값을 설정할 수 있습니다.
End of explanation
# 기본: unit='BYTE'. len=1이면 바이트 하나를 반환합니다.
tf.strings.substr(thanks, pos=7, len=1).numpy()
# unit='UTF8_CHAR'로 지정하면 4 바이트인 문자 하나를 반환합니다.
print(tf.strings.substr(thanks, pos=7, len=1, unit='UTF8_CHAR').numpy())
Explanation: 부분 문자열
이와 유사하게 tf.strings.substr 연산은 "unit" 매개변수 값을 사용해 "pos"와 "len" 매개변수로 지정된 문자열의 종류를 결정합니다.
End of explanation
tf.strings.unicode_split(thanks, 'UTF-8').numpy()
Explanation: 유니코드 문자열 분리
tf.strings.unicode_split 연산은 유니코드 문자열의 개별 문자를 부분 문자열로 분리합니다.
End of explanation
codepoints, offsets = tf.strings.unicode_decode_with_offsets(u"🎈🎉🎊", 'UTF-8')
for (codepoint, offset) in zip(codepoints.numpy(), offsets.numpy()):
print("바이트 오프셋 {}: 코드 포인트 {}".format(offset, codepoint))
Explanation: 문자 바이트 오프셋
tf.strings.unicode_decode로 만든 문자 텐서를 원본 문자열과 위치를 맞추려면 각 문자의 시작 위치의 오프셋(offset)을 알아야 합니다. tf.strings.unicode_decode_with_offsets은 unicode_decode와 비슷하지만 각 문자의 시작 오프셋을 포함한 두 번째 텐서를 반환합니다.
End of explanation
uscript = tf.strings.unicode_script([33464, 1041]) # ['芸', 'Б']
print(uscript.numpy()) # [17, 8] == [USCRIPT_HAN, USCRIPT_CYRILLIC]
Explanation: 유니코드 스크립트
각 유니코드 코드 포인트는 스크립트(script)라 부르는 하나의 코드 포인트의 집합(collection)에 속합니다. 문자의 스크립트는 문자가 어떤 언어인지 결정하는 데 도움이 됩니다. 예를 들어, 'Б'가 키릴(Cyrillic) 스크립트라는 것을 알고 있으면 이 문자가 포함된 텍스트는 아마도 (러시아어나 우크라이나어 같은) 슬라브 언어라는 것을 알 수 있습니다.
텐서플로는 주어진 코드 포인트가 어떤 스크립트를 사용하는지 판별하기 위해 tf.strings.unicode_script 연산을 제공합니다. 스크립트 코드는 International Components for
Unicode (ICU) UScriptCode 값과 일치하는 int32 값입니다.
End of explanation
print(tf.strings.unicode_script(batch_chars_ragged))
Explanation: tf.strings.unicode_script 연산은 코드 포인트의 다차원 tf.Tensor나 tf.RaggedTensor에 적용할 수 있습니다:
End of explanation
# dtype: string; shape: [num_sentences]
#
# 처리할 문장들 입니다. 이 라인을 수정해서 다른 입력값을 시도해 보세요!
sentence_texts = [u'Hello, world.', u'世界こんにちは']
Explanation: 예제: 간단한 분할
분할(segmentation)은 텍스트를 단어와 같은 단위로 나누는 작업입니다. 공백 문자가 단어를 나누는 구분자로 사용되는 경우는 쉽지만, (중국어나 일본어 같이) 공백을 사용하지 않는 언어나 (독일어 같이) 단어를 길게 조합하는 언어는 의미를 분석하기 위한 분할 과정이 꼭 필요합니다. 웹 텍스트에는 "NY株価"(New York Stock Exchange)와 같이 여러 가지 언어와 스크립트가 섞여 있는 경우가 많습니다.
스크립트의 변화를 단어 경계로 근사하여 (ML 모델 사용 없이) 대략적인 분할을 수행할 수 있습니다. 위에서 언급된 "NY株価"의 예와 같은 문자열에 적용됩니다. 다양한 스크립트의 공백 문자를 모두 USCRIPT_COMMON(실제 텍스트의 스크립트 코드와 다른 특별한 스크립트 코드)으로 분류하기 때문에 공백을 사용하는 대부분의 언어들에서도 역시 적용됩니다.
End of explanation
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_codepoint[i, j]는
# i번째 문장 안에 있는 j번째 문자에 대한 코드 포인트 입니다.
sentence_char_codepoint = tf.strings.unicode_decode(sentence_texts, 'UTF-8')
print(sentence_char_codepoint)
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_codepoint[i, j]는
# i번째 문장 안에 있는 j번째 문자의 유니코드 스크립트 입니다.
sentence_char_script = tf.strings.unicode_script(sentence_char_codepoint)
print(sentence_char_script)
Explanation: 먼저 문장을 문자 코드 포인트로 디코딩하고 각 문자에 대한 스크립트 식별자를 찾습니다.
End of explanation
# dtype: bool; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_starts_word[i, j]는
# i번째 문장 안에 있는 j번째 문자가 단어의 시작이면 True 입니다.
sentence_char_starts_word = tf.concat(
[tf.fill([sentence_char_script.nrows(), 1], True),
tf.not_equal(sentence_char_script[:, 1:], sentence_char_script[:, :-1])],
axis=1)
# dtype: int64; shape: [num_words]
#
# word_starts[i]은 (모든 문장의 문자를 일렬로 펼친 리스트에서)
# i번째 단어가 시작되는 문자의 인덱스 입니다.
word_starts = tf.squeeze(tf.where(sentence_char_starts_word.values), axis=1)
print(word_starts)
Explanation: 그다음 스크립트 식별자를 사용하여 단어 경계가 추가될 위치를 결정합니다. 각 문장의 시작과 이전 문자와 스크립트가 다른 문자에 단어 경계를 추가합니다.
End of explanation
# dtype: int32; shape: [num_words, (num_chars_per_word)]
#
# word_char_codepoint[i, j]은
# i번째 단어 안에 있는 j번째 문자에 대한 코드 포인트 입니다.
word_char_codepoint = tf.RaggedTensor.from_row_starts(
values=sentence_char_codepoint.values,
row_starts=word_starts)
print(word_char_codepoint)
Explanation: 이 시작 오프셋을 사용하여 전체 배치에 있는 단어 리스트를 담은 RaggedTensor를 만듭니다.
End of explanation
# dtype: int64; shape: [num_sentences]
#
# sentence_num_words[i]는 i번째 문장 안에 있는 단어의 수입니다.
sentence_num_words = tf.reduce_sum(
tf.cast(sentence_char_starts_word, tf.int64),
axis=1)
# dtype: int32; shape: [num_sentences, (num_words_per_sentence), (num_chars_per_word)]
#
# sentence_word_char_codepoint[i, j, k]는 i번째 문장 안에 있는
# j번째 단어 안의 k번째 문자에 대한 코드 포인트입니다.
sentence_word_char_codepoint = tf.RaggedTensor.from_row_lengths(
values=word_char_codepoint,
row_lengths=sentence_num_words)
print(sentence_word_char_codepoint)
Explanation: 마지막으로 단어 코드 포인트 RaggedTensor를 문장으로 다시 나눕니다.
End of explanation
tf.strings.unicode_encode(sentence_word_char_codepoint, 'UTF-8').to_list()
Explanation: 최종 결과를 읽기 쉽게 utf-8 문자열로 다시 인코딩합니다.
End of explanation |
9,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Paillier Homomorphic Encryption Example
DISCLAIMER
Step1: Basic Ops
Step2: Key SerDe
Step3: Value SerDe | Python Code:
from syft.he.paillier import KeyPair, PaillierTensor
from syft import TensorBase
import numpy as np
Explanation: Paillier Homomorphic Encryption Example
DISCLAIMER: This is a proof-of-concept implementation. It does not represent a remotely product ready implementation or follow proper conventions for security, convenience, or scalability. It is part of a broader proof-of-concept demonstrating the vision of the OpenMined project, its major moving parts, and how they might work together.
End of explanation
pubkey,prikey = KeyPair().generate()
x = PaillierTensor(pubkey, np.array([1, 2, 3, 4, 5.]))
x.decrypt(prikey)
(x+x[0]).decrypt(prikey)
(x*5).decrypt(prikey)
(x+x/5).decrypt(prikey)
Explanation: Basic Ops
End of explanation
pubkey,prikey = KeyPair().generate()
x = PaillierTensor(pubkey, np.array([1, 2, 3, 4, 5.]))
pubkey_str = pubkey.serialize()
prikey_str = prikey.serialize()
pubkey2,prikey2 = KeyPair().deserialize(pubkey_str,prikey_str)
prikey2.decrypt(x)
y = PaillierTensor(pubkey,(np.ones(5))/2)
prikey.decrypt(y)
Explanation: Key SerDe
End of explanation
import pickle
y_str = pickle.dumps(y)
y2 = pickle.loads(y_str)
prikey.decrypt(y2)
Explanation: Value SerDe
End of explanation |
9,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'mri-esm2-0', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MRI
Source ID: MRI-ESM2-0
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are speficied from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are speficied from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
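For list-valued properties (cardinality 1.N) such as 8.3, the sketch below assumes that repeated DOC.set_value calls append additional entries, which should be checked against the es-doc notebook conventions; the choices are picked arbitrarily from the list above:
# Hypothetical filled-in example for a 1.N ENUM property (assumes repeated set_value calls append)
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
DOC.set_value("Nitrogen (N)")
DOC.set_value("Phosphorous (P)")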
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
9,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the TTC Subway Real-time API
The API we're pulling data from is what supports the TTC's Next Train Arrivals page. With a bit of exploration through your browser's developer console, you can see that the page gets refreshed with data from a request to http
Step1: So now we've just received our first request from the API and the response is stored in the requests object r. From previous examination of the API we know that the response to an API request is in JSON format. So the below code will pretty print out the response so we can have a look at the variables.
Step3: Building a scraping script
By opening up the inspector tools in the browser, we can see the full list of station ids by hovering over the Select a subway station dropdown list. Stations increase in number from West to East.
For Line 1 they are numbered 1-32 (from Downsview to Finch, in order)
For Line 2 they are numbered 33-63 (from Kipling to Kennedy)
For Line 4 they are numbered 64-68 (from Sheppard to Don Mills)
Thus we can construct a dictionary that will represent every possible API call
Step4: Database schema
Looking at the response above, I've written up a basic schema of two tables to store the responses to the API; it's in create_tables.sql. Use this file to set up the PostgreSQL database either from the terminal (Linux/OSX) or the command line (Windows). Alternately, you can download PgAdmin v3 or v4 (depending on your platform), which will provide you with a GUI to set up and manage the database. In the latter case, the default database name is 'postgres'; use 'postgres' as the password as well when setting up the server.
Step5: Querying data from database
Now we will pull the data we've inserted in the Postgre database | Python Code:
import requests #to handle http requests to the API
from psycopg2 import connect
stationid = 3
#We'll find out the full range of possible stations further down.
lineid = 1
#[1,2,4]
# The url for the request
base_url = "http://www.ttc.ca/Subway/loadNtas.action"
# Our query parameters for this API request
payload = {#"subwayLine":lineid,
"stationId":stationid,
"searchCriteria":''} #The value in the search box
#it has to be included otherwise the query fails
#"_":request_epoch} #Great job naming variables...
# subwayLine and _ are redundant variables.
# We thought we could query historical data using the "_" parameter
# But it seems no
r = requests.get(base_url, params = payload)
Explanation: Exploring the TTC Subway Real-time API
The API we're pulling data from is what supports the TTC's Next Train Arrivals page. With a bit of exploration through your browser's developer console, you can see that the page gets refreshed with data from a request to http://www.ttc.ca/Subway/loadNtas.action
End of explanation
r.json()
data = r.json()
data['ntasData'][0]['createDate']
#Testing whether have to be explicit about line numbers for stations with multiple lines
payload = {#"subwayLine":lineid,
"stationId":10, #St. George, Line 1
"searchCriteria":''}
r = requests.get(base_url, params = payload)
r.json()
#Testing whether have to be explicit about line numbers for stations with multiple lines
payload = {#"subwayLine":lineid,
"stationId":48, #St. George, Line 2
"searchCriteria":''}
r = requests.get(base_url, params = payload)
r.json()
data = r.json()
data['ntasData'][0]['createDate'].replace('T',' ')
Explanation: So now we've just received our first request from the API and the response is stored in the requests object r. From previous examination of the API we know that the response to an API request is in JSON format. So the below code will pretty print out the response so we can have a look at the variables.
End of explanation
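If you want a genuinely pretty-printed view of the response (r.json() alone just returns the parsed dict), a small sketch using the standard library:
import json
print(json.dumps(r.json(), indent=2, sort_keys=True))  # indented dump of the full API response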
lines = {1: range(1, 33), #max value must be 1 greater
2: range(33, 64), # west to east (Kipling onwards)
         4: range(64, 69)} # Line 4, also west to east (Sheppard to Don Mills, stations 64-68)
def get_API_response(*args):
baseurl = "http://www.ttc.ca/Subway/loadNtas.action"
if len(args) > 1:
line_id = args[0]
station_id = args[2]
payload = {"subwayLine":line_id,
"stationId":station_id,
"searchCriteria":''}
else:
station_id = args[0]
payload = {"stationId":station_id,
"searchCriteria":''}
r = requests.get(baseurl, params = payload)
return r.json()
def insert_request_info(con, data, line_id, station_id):
request_row = {}
request_row['data_'] = data['data']
request_row['stationid'] = station_id
request_row['lineid'] = line_id
request_row['all_stations'] = data['allStations']
request_row['create_date'] = data['ntasData'][0]['createDate'].replace( 'T', ' ')
cursor = con.cursor()
cursor.execute("INSERT INTO public.requests(data_, stationid, lineid, all_stations, create_date)"
"VALUES(%(data_)s, %(stationid)s, %(lineid)s, %(all_stations)s, %(create_date)s)"
"RETURNING requestid", request_row)
request_id = cursor.fetchone()[0]
con.commit()
return request_id
def insert_ntas_data(con, ntas_data, request_id):
cursor = con.cursor()
    sql = """INSERT INTO public.ntas_data(
requestid, id, station_char, subwayline, system_message_type,
timint, traindirection, trainid, train_message)
VALUES (%(requestid)s, %(id)s, %(station_char)s, %(subwayline)s, %(system_message_type)s,
            %(timint)s, %(traindirection)s, %(trainid)s, %(train_message)s);"""
for record in ntas_data:
record_row ={}
record_row['requestid'] = request_id
record_row['id'] = record['id']
record_row['station_char'] = record['stationId']
record_row['subwayline'] = record['subwayLine']
record_row['system_message_type'] = record['systemMessageType']
record_row['timint'] = record['timeInt']
record_row['traindirection'] = record['trainDirection']
record_row['trainid'] = record['trainId']
record_row['train_message'] = record['trainMessage']
cursor.execute(sql, record_row)
con.commit()
cursor.close()
def query_stations(con, lines):
data = {}
for line_id, stations in lines.items():
for station_id in stations:
data = get_API_response(station_id)
request_id = insert_request_info(con, data, line_id, station_id)
insert_ntas_data(con, data['ntasData'], request_id)
return data, request_id
Explanation: Building a scraping script
By opening up the inspector tools in the browser, we can see the full list of station ids by hovering over the Select a subway station dropdown list. Stations increase in number from West to East.
For Line 1 they are numbered 1-32 (from Downsview to Finch, in order)
For Line 2 they are numbered 33-63 (from Kipling to Kennedy)
For Line 4 they are numbered 64-68 (from Sheppard to Don Mills)
Thus we can construct a dictionary that will represent every possible API call:
End of explanation
dbsettings = {'database':'ttc',
'user':'postgres'}
# 'host':'localhost'}
con = connect(database = dbsettings['database'],
user = dbsettings['user'])
#host = dbsettings['host'])
data = query_stations(con, lines) # be patient, this command can take a while to complete
data
Explanation: Database schema
Looking at the response above, I've written up a basic schema of two tables to store the responses to the API; it's in create_tables.sql. Use this file to set up the PostgreSQL database either from the terminal (Linux/OSX) or the command line (Windows). Alternately, you can download PgAdmin v3 or v4 (depending on your platform), which will provide you with a GUI to set up and manage the database. In the latter case, the default database name is 'postgres'; use 'postgres' as the password as well when setting up the server.
End of explanation
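create_tables.sql itself isn't shown in this notebook; a rough sketch of what it needs to contain, inferred from the INSERT statements above (the column types and constraints are assumptions), looks like this:
# Sketch of create_tables.sql, reconstructed from the INSERTs above; adjust types as needed.
schema_sql = '''
CREATE TABLE IF NOT EXISTS public.requests (
    requestid    SERIAL PRIMARY KEY,
    data_        TEXT,
    stationid    INTEGER,
    lineid       INTEGER,
    all_stations TEXT,
    create_date  TIMESTAMP
);
CREATE TABLE IF NOT EXISTS public.ntas_data (
    requestid           INTEGER REFERENCES public.requests (requestid),
    id                  BIGINT,
    station_char        TEXT,
    subwayline          TEXT,
    system_message_type TEXT,
    timint              DOUBLE PRECISION,
    traindirection      TEXT,
    trainid             INTEGER,
    train_message       TEXT
);
'''
# cursor = con.cursor(); cursor.execute(schema_sql); con.commit()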
lines = {1: [3]}# station_id = 3 (St. George), line_id = 1 (YUS)
data, request_id = query_stations(con, lines)
data
cursor = con.cursor()
cursor.execute('''SELECT timint FROM ntas_data WHERE requestid = ''' + str(request_id) + ''' limit 10''')
rows = cursor.fetchall()
print(rows)
import numpy
print(numpy.mean(rows)) # Average (expected) wait time at St. George. Note this is not the true wait time.
Explanation: Querying data from database
Now we will pull the data we've inserted into the Postgres database
End of explanation |
9,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Post-training dynamic range quantization
Step2: Train a TensorFlow model
Step3: For this example, since you trained the model for just a single epoch, it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the TFLiteConverter
Step4: Write it out to a tflite file
Step5: To quantize the model on export, set the optimizations flag to optimize for size
Step6: Note how the resulting file is approximately 1/4 the size.
Step7: Run the TFLite models
Run the TensorFlow Lite model using the Python TensorFlow Lite
Interpreter.
Load the model into an interpreter
Step8: Test the model on one image
Step9: Evaluate the models
Step10: Repeat the evaluation on the dynamic range quantized model to obtain
Step11: In this example, the compressed model has no difference in the accuracy.
Optimizing an existing model
Resnets with pre-activation layers (Resnet-v2) are widely used for vision applications.
Pre-trained frozen graph for resnet-v2-101 is available on
Tensorflow Hub.
You can convert the frozen graph to a TensorFLow Lite flatbuffer with quantization by | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
from tensorflow import keras
import numpy as np
import pathlib
Explanation: Post-training dynamic range quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
TensorFlow Lite now supports
converting weights to 8 bit precision as part of model conversion from
tensorflow graphdefs to TensorFlow Lite's flat buffer format. Dynamic range quantization achieves a 4x reduction in the model size. In addition, TFLite supports on the fly quantization and dequantization of activations to allow for:
Using quantized kernels for faster implementation when available.
Mixing of floating-point kernels with quantized kernels for different parts
of the graph.
The activations are always stored in floating point. For ops that
support quantized kernels, the activations are quantized to 8 bits of precision
dynamically prior to processing and are de-quantized to float precision after
processing. Depending on the model being converted, this can give a speedup over
pure floating point computation.
In contrast to
quantization aware training
, the weights are quantized post training and the activations are quantized dynamically
at inference in this method.
Therefore, the model weights are not retrained to compensate for quantization
induced errors. It is important to check the accuracy of the quantized model to
ensure that the degradation is acceptable.
This tutorial trains an MNIST model from scratch, checks its accuracy in
TensorFlow, and then converts the model into a Tensorflow Lite flatbuffer
with dynamic range quantization. Finally, it checks the
accuracy of the converted model and compares it to the original float model.
Build an MNIST model
Setup
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
Explanation: Train a TensorFlow model
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Explanation: For this example, since you trained the model for just a single epoch, it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the TFLiteConverter:
End of explanation
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
Explanation: Write it out to a tflite file:
End of explanation
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
Explanation: To quantize the model on export, set the optimizations flag to optimize for size:
End of explanation
!ls -lh {tflite_models_dir}
Explanation: Note how the resulting file is approximately 1/4 the size.
End of explanation
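To compare the sizes programmatically rather than eyeballing ls -lh, a small sketch using the Path objects defined above:
# Sketch: print each converted model's size in KB
for f in [tflite_model_file, tflite_model_quant_file]:
    print(f.name, round(f.stat().st_size / 1024, 1), "KB")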
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
Explanation: Run the TFLite models
Run the TensorFlow Lite model using the Python TensorFlow Lite
Interpreter.
Load the model into an interpreter
End of explanation
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
Explanation: Test the model on one image
End of explanation
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
Explanation: Evaluate the models
End of explanation
print(evaluate_model(interpreter_quant))
Explanation: Repeat the evaluation on the dynamic range quantized model to obtain:
End of explanation
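Accuracy is only half the story; the overview above mentions a possible speedup, and a rough wall-clock comparison of the two interpreters can be sketched like this (not a rigorous benchmark, and the numbers will vary a lot by machine):
import time

def time_invoke(interp, n_runs=50):
    # Average seconds per invocation on a single test image.
    input_index = interp.get_input_details()[0]["index"]
    interp.set_tensor(input_index, np.expand_dims(test_images[0], axis=0).astype(np.float32))
    start = time.perf_counter()
    for _ in range(n_runs):
        interp.invoke()
    return (time.perf_counter() - start) / n_runs

print("float model     :", time_invoke(interpreter))
print("quantized model :", time_invoke(interpreter_quant))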
import tensorflow_hub as hub
resnet_v2_101 = tf.keras.Sequential([
keras.layers.InputLayer(input_shape=(224, 224, 3)),
hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4")
])
converter = tf.lite.TFLiteConverter.from_keras_model(resnet_v2_101)
# Convert to TF Lite without quantization
resnet_tflite_file = tflite_models_dir/"resnet_v2_101.tflite"
resnet_tflite_file.write_bytes(converter.convert())
# Convert to TF Lite with quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
resnet_quantized_tflite_file = tflite_models_dir/"resnet_v2_101_quantized.tflite"
resnet_quantized_tflite_file.write_bytes(converter.convert())
!ls -lh {tflite_models_dir}/*.tflite
Explanation: In this example, the compressed model has no difference in the accuracy.
Optimizing an existing model
Resnets with pre-activation layers (Resnet-v2) are widely used for vision applications.
Pre-trained frozen graph for resnet-v2-101 is available on
Tensorflow Hub.
You can convert the frozen graph to a TensorFlow Lite flatbuffer with quantization by:
End of explanation |
9,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OIQ-Exam-Question-1 (Version 2)
Technical exam question from Ordre des ingénieurs du Québec. Obviously meant to be done using moment-distribution, but even easier using slope-deflection. This version uses a newer 'sdutil' that also computes end shears.
Step1: Solve equilibrium equations for rotations
Step2: Member end moments
Step3: Member end shears
Step4: Reactions
Step5: Check overal equilibrium | Python Code:
from sympy import *
init_printing(use_latex='mathjax')
from IPython import display
display.SVG('oiq-exam-1.svg')
from sdutil2 import SD, FEF
var('EI theta_a theta_b theta_c theta_d')
Mab,Mba,Vab,Vba = SD(6,EI,theta_a,theta_b) + FEF.p(6,180,4)
Mbc,Mcb,Vbc,Vcb = SD(8,2*EI,theta_b,theta_c) + FEF.udl(8,45)
Mcd,Mdc,Vcd,Vdc = SD(6,EI,theta_c,theta_d)
Mab
Explanation: OIQ-Exam-Question-1 (Version 2)
Technical exam question from Ordre des ingénieurs du Québec. Obviously meant to be done using moment-distribution, but even easier using slope-deflection. This version uses a newer 'sdutil' that also computes end shears.
End of explanation
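For reference, a helper like SD(L, EI, theta_near, theta_far) is presumably built on the classical slope-deflection relations (there is no support settlement here, so the chord rotation psi = 0):
$$M_{ab} = \frac{2EI}{L}\,(2\theta_a + \theta_b - 3\psi), \qquad M_{ba} = \frac{2EI}{L}\,(2\theta_b + \theta_a - 3\psi)$$
with the fixed-end moments contributed by FEF.p / FEF.udl added on top, exactly as the + FEF terms in the cell above do.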
soln = solve( [Mab,Mba+Mbc,Mcb+Mcd,Mdc],[theta_a,theta_b,theta_c,theta_d] )
soln
Explanation: Solve equilibrium equations for rotations:
End of explanation
[m.subs(soln) for m in [Mab,Mba,Mbc,Mcb,Mcd,Mdc]]
Explanation: Member end moments:
End of explanation
[v.subs(soln).n(4) for v in [Vab,Vba,Vbc,Vcb,Vcd,Vdc]]
Explanation: Member end shears:
End of explanation
Ra = Vab
Rb = Vbc - Vba
Rc = Vcd - Vcb
Rd = -Vdc
[r.subs(soln).n(4) for r in [Ra,Rb,Rc,Rd]]
Explanation: Reactions:
End of explanation
# sum forces in vertical dirn.
(Ra+Rb+Rc+Rd - 180 - 45*8).subs(soln)
# sum moments about left
(-Rb*6 - Rc*(6+8) -Rd*(6+8+6) + 180*4 + 45*8*(6 + 8/2.)).subs(soln)
Ra.expand()
Explanation: Check overall equilibrium
End of explanation |
9,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep learning using fastai library
(https
Step1: Below picture is not an iceberg
Step2: Get rgb of image using color composite function
Thanks to MadScientist for color composite.
Here is the kernel --
https
Step3: Exploring images before training CNN model
Step4: Observation from images --
• Ships have a trace of bright lights around them which will be taken as a feature in CNN
• Ships are more consistent with shapes
• Icebergs vary more in shape than ships do
saving images in directories (train, valid, test)
Step5: The reason for converting these images to .png is that the pretrained ConvLearner that I am going to call takes image files as input
Step6: let's check directory where files are saved. Ok we have train, test and valid directories
I can start from here from next time
let's train first resnet model using fastai
Step7: let's look at a few random ships now (from png) to make sure images are saved in the directory
Step8: Finding learning rate using lr finder
One of the most sensitive hyperparameters in deep learning is the learning rate. Finding a good learning rate is the most important step in building a good model (without over- or underfitting)
Step9: Now, this plot is important for deciding on a good learning rate. We will not pick the learning rate at the lowest loss, which might sound confusing. The catch is that we are going to do differential annealing to activate our layers, which take varying learning rates, and the rate we choose here is going to be the maximum rate. So we will choose a rate just before the curve bottoms out, where the loss is still falling.
The minimum is at 10^-2; I would choose 10^-3 as the LR
Stochastic descent with restart
Step10: epochs = 3
cycle length = 1
cycle multiple = 2
Step11: Loss is decreasing with increasing iterations
Fine tuning and differential annealing
Since the images we have do not have the very clear and sharp features that are captured by the top layers of our pre-trained neural network, let's unfreeze our layers and recalculate the activations
Step12: Do train it again (at least 2 times)
Step13: TTA (Test Time Augmentation) simply makes predictions not just on the images in your validation set, but also makes predictions on a number of randomly augmented versions of them
Predictions
Step14: scp submit1 to local machine to upload on kaggle
Post analysis
Looking at correctly, incorrectly classified images
Analyzing results
Step15: SENet34 | Python Code:
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# This file contains all the main external libs we'll use
import numpy as np
import pandas as pd
from fastai.imports import *
from sklearn.model_selection import train_test_split
from fastai.models.cifar10.senet import SENet
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
! ls data/processed/
path = "data/processed/"
train = pd.read_json(f'{path}train.json')
test = pd.read_json(f'{path}test.json')
train[:2]
len(train.iloc[4][1])
train.inc_angle = train.inc_angle.apply(lambda x: np.nan if x == 'na' else x)
test.inc_angle = test.inc_angle.apply(lambda x: np.nan if x == 'na' else x)
img1 = train.loc[0,['band_1','band_2']]
img1
img1 = np.stack([img1['band_1'], img1['band_2']], -1).reshape(75,75,2)
Explanation: Deep learning using fastai library
(https://github.com/fastai/courses)
This is beginner-level code (it can be learnt in two classes of the deep learning course taught by Jeremy Howard), and we can get decent results by only tuning the learning rate and training until the model overfits.
I have used pretrained resnet34 model based on Imagenet data for this
Steps to use fastai library --
git clone https://github.com/fastai/fastai
cd fastai
conda create -n fastai python=3.6 anaconda
conda env update
source activate fastai
This kernel is specifically for beginners who want to experiment with building a CNN using fastai (on top of PyTorch). By using this kernel, you can expect to get a good score and also learn fastai. Fastai has made building deep neural networks very easy.
End of explanation
plt.imshow(img1[:,:,1])
Explanation: Below picture is not an iceberg
End of explanation
def color_composite(data):
rgb_arrays = []
for i, row in data.iterrows():
band_1 = np.array(row['band_1']).reshape(75, 75)
band_2 = np.array(row['band_2']).reshape(75, 75)
band_3 = band_1 / band_2
r = (band_1 + abs(band_1.min())) / np.max((band_1 + abs(band_1.min())))
g = (band_2 + abs(band_2.min())) / np.max((band_2 + abs(band_2.min())))
b = (band_3 + abs(band_3.min())) / np.max((band_3 + abs(band_3.min())))
# r = ((band_1 - np.mean(band_1)) / (np.max(band_1) - np.min(band_1)))
# g = ((band_2 - np.mean(band_2)) / (np.max(band_2) - np.min(band_2)))
# b = ((band_3 - np.mean(band_3)) / (np.max(band_3) - np.min(band_3)))
rgb = np.dstack((r, g, b))
rgb_arrays.append(rgb)
return np.array(rgb_arrays)
# Trained with data about rgb
rgb_train = color_composite(train)
rgb_train.shape
# Test with data about rgb
rgb_test = color_composite(test)
rgb_test.shape
Explanation: Get rgb of image using color composite function
Thanks to MadScientist for color composite.
Here is the kernel --
https://www.kaggle.com/keremt/getting-color-composites
End of explanation
# look at random ships
print('Looking at random ships')
ships = np.random.choice(np.where(train.is_iceberg ==0)[0], 9)
fig = plt.figure(1,figsize=(12,12))
for i in range(9):
ax = fig.add_subplot(3,3,i+1)
arr = rgb_train[ships[i], :, :]
ax.imshow(arr)
plt.show()
# look at random icebergs
print('Looking at random icebergs')
ice = np.random.choice(np.where(train.is_iceberg ==1)[0], 9)
fig = plt.figure(200,figsize=(12,12))
for i in range(9):
ax = fig.add_subplot(3,3,i+1)
arr = rgb_train[ice[i], :, :]
ax.imshow(arr)
plt.show()
Explanation: Exploring images before training CNN model
End of explanation
# making directories for training resnet (as it needs files to be in the right dir)
os.makedirs(f'{path}composites', exist_ok= True)
os.makedirs(f'{path}composites/train', exist_ok=True)
os.makedirs(f'{path}composites/valid', exist_ok=True)
os.makedirs(f'{path}composites/test', exist_ok=True)
dir_list = [f'{path}composites/train', f'{path}composites/valid']
for i in dir_list:
os.makedirs(f'{i}/ship')
os.makedirs(f'{i}/iceberg')
Explanation: Observation from images --
• Ships have a trace of bright lights around them which will be taken as a feature in CNN
• Ships are more consistent with shapes
• Icebergs vary more in shape than ships do
saving images in directories (train, valid, test)
End of explanation
! ls {path}composites
# split
train_y, valid_y = train_test_split(train.is_iceberg, test_size=0.10)
train_iceberg_index, train_ship_index, valid_iceberg_index, valid_ship_index = train_y[train_y==1].index, train_y[train_y==0].index, valid_y[valid_y==1].index, valid_y[valid_y==0].index
#save train images
for idx in train_iceberg_index:
img = rgb_train[idx]
plt.imsave(f'{path}/composites/train/iceberg/' + str(idx) + '.png', img)
for idx in train_ship_index:
img = rgb_train[idx]
plt.imsave(f'{path}/composites/train/ship/' + str(idx) + '.png', img)
#save valid images
for idx in valid_iceberg_index:
img = rgb_train[idx]
plt.imsave(f'{path}/composites/valid/iceberg/' + str(idx) + '.png', img)
for idx in valid_ship_index:
img = rgb_train[idx]
plt.imsave(f'{path}/composites/valid/ship/' + str(idx) + '.png', img)
#save test images
for idx in range(len(test)):
img = rgb_test[idx]
plt.imsave(f'{path}/composites/test/' + str(idx) + '.png', img)
Explanation: The reason for converting these images to .png is that the pretrained ConvLearner that I am going to call takes image files as input
End of explanation
path2 = 'data/processed/composites/'
Explanation: let's check directory where files are saved. Ok we have train, test and valid directories
I can start from here from next time
let's train first resnet model using fastai
End of explanation
files = !ls {path2}valid/ship | head
img = plt.imread(f'{path2}valid/ship/{files[0]}')
plt.imshow(img)
! ls {path2}
Explanation: let's look at a few random ships now (from png) to make sure images are saved in the directory
End of explanation
def get_data(sz, bs):
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_top_down, max_zoom=1.00)
data = ImageClassifierData.from_paths(path2, test_name = 'test', bs = bs,
tfms = tfms)
return data
arch=resnet34
sz = 75 # because our image size is 75*75
bs = 16 # because default batch size of 64 was not giving good converging loss
data = get_data(sz, bs)
data = data.resize(int(sz*1.5), 'tmp')
learn = ConvLearner.pretrained(arch, data, precompute=False)
lrf = learn.lr_find()
learn.sched.plot_lr()
learn.sched.plot()
Explanation: Finding learning rate using lr finder
One of the most sensitive hyperparameters in deep learning is the learning rate. Finding a good learning rate is the most important step in building a good model (without over- or underfitting)
End of explanation
lr = 0.001
learn.unfreeze()
learn.bn_freeze(False)
Explanation: Now, this plot is important for deciding on a good learning rate. We will not pick the learning rate at the lowest loss, which might sound confusing. The catch is that we are going to do differential annealing to activate our layers, which take varying learning rates, and the rate we choose here is going to be the maximum rate. So we will choose a rate just before the curve bottoms out, where the loss is still falling.
The minimum is at 10^-2; I would choose 10^-3 as the LR
Stochastic descent with restart
End of explanation
learn.fit(lr, 3, cycle_len=1, cycle_mult=2) # precompute was false # first fit
# stochastic descent with restart
learn.sched.plot_lr()
learn.sched.plot_loss()
learn.fit(lr,4, cycle_len=1, cycle_mult=2) # precompute was false # first fit
# stochastic descent with restart
learn.sched.plot_lr()
learn.sched.plot_loss()
Explanation: epochs = 3
cycle length = 1
cycle multiple = 2
End of explanation
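A quick sanity check on how long that schedule actually runs, assuming fastai's usual SGDR bookkeeping in which each successive cycle is cycle_mult times longer than the last:
# With cycle_len=1 and cycle_mult=2, three cycles last 1 + 2 + 4 = 7 epochs in total.
cycle_len, cycle_mult, n_cycles = 1, 2, 3
total_epochs = sum(cycle_len * cycle_mult ** i for i in range(n_cycles))
print(total_epochs)  # 7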
lr = 0.01
lrs=np.array([lr/9,lr/3,lr]) # use lr/100 and lr/10 respectively if images would have been larger in sizes
%time learn.fit(lrs, 5, cycle_len=1, cycle_mult=2, cycle_save_name='resnet50')
learn.sched.plot_loss()
%time learn.fit(lrs, 4, cycle_len=1, cycle_mult=2)
learn.sched.plot_loss()
Explanation: Loss is decreasing with increasing iterations
Fine tuning and differential annealing
Since the images we have do not have the very clear and sharp features that are captured by the top layers of our pre-trained neural network, let's unfreeze our layers and recalculate the activations
End of explanation
# to check validation accuracy
log_preds,y = learn.TTA()
accuracy(log_preds,y)
%time learn.fit(lrs,5 , cycle_len=1, cycle_mult=2)
Explanation: Do train it again (at least 2 times)
End of explanation
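Between these long fits it can be worth checkpointing the learner; a sketch using fastai's save/load (the checkpoint name is arbitrary):
learn.save('resnet34_iceberg')   # weights go to a models/ folder under the data path (fastai convention)
# ... later, to resume training or score the test set without refitting:
learn.load('resnet34_iceberg')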
# from here we know that 'icebergs' is label 0 and 'ships' is label 1.
data.classes
# this gives prediction for validation set. Predictions are in log scale
log_preds = learn.TTA(is_test=True) # If need TTA
#log_preds = learn.predict(is_test=True) # if don't need TTA
log_preds
probs_submit1 = np.exp(log_preds[0][:,0])
probs_submit1[:]
# getting ids from test list
id_raw = data.test_dl.dataset.fnames
id_raw[1]
# using regex to take numbers from ids
id_pro = result_array = np.empty((0, len(id_raw)))
for i in range(len(id_raw)):
stuff = int(re.findall(r'\d+', id_raw[i])[0])
#print(type(str(stuff)))
id_pro = np.append(id_pro,int(stuff))
id_pro_list = []
for i in id_pro:
id_pro_list.append(int(i))
id_pro_list[:4]
# joining id and probability
d = {'index': id_pro_list, 'is_iceberg': probs_submit1}
submit1_df = pd.DataFrame(data=d)
id_ = test['id']
id_pd = pd.DataFrame({'id': id_} )
submit1_df_sorted = submit1_df.sort_values('index')
submit1_df_sorted2 = pd.concat([id_, submit1_df_sorted.set_index('index')], axis = 1)
submit1_df_sorted2.dtypes
submit1_df.dtypes
d['is_iceberg1']
l=[]
for i in range(len(d['is_iceberg1'])):
l.append(d['is_iceberg1'][i][0])
len(l)
d_new = {'index': id_pro_list, 'is_iceberg': l}
d_new['is_iceberg']
submit1_df = pd.DataFrame(data=d_new)
id_ = test['id']
id_pd = pd.DataFrame({'id': id_} )
submit1_df_sorted = submit1_df.sort_values('index')
submit1_df_sorted2 = pd.concat([id_, submit1_df_sorted.set_index('index')], axis = 1)
submit1_df_sorted2.dtypes
submit1_df.dtypes
#submit1_df.id = submit1_df.id.astype(str)
submit1_df_sorted2.to_csv('data/processed/resnet50.csv', index = False)
! head -5 data/processed/resnet50.csv
submit_check = pd.read_csv("data/processed/resnet50.csv")
submit_check.dtypes
submit_check[:10]
submit_check['is_iceberg']
Explanation: TTA (Test Time Augmentation) simply makes predictions not just on the images in your validation set, but also makes predictions on a number of randomly augmented versions of them
Predictions
End of explanation
log_pred1 = learn.TTA() # If need TTA
preds = np.argmax(log_pred1[0], axis=1) # from log probabilities to 0 or 1
probs = np.exp(log_pred1[0][:,1]) # pr(ship)
#probs = submit_check['is_iceberg']
#submit_check['C'] = np.where(submit_check['is_iceberg'] >= 0.5,1, 0)
#preds = np.array(submit_check['C'])
def rand_by_mask(mask): return np.random.choice(np.where(mask)[0], 4, replace=False)
def rand_by_correct(is_correct): return rand_by_mask((preds == data.val_y)==is_correct)
def plot_val_with_title(idxs, title):
imgs = np.stack([data.val_ds[x][0] for x in idxs])
title_probs = [probs[x] for x in idxs]
print(title)
return plots(data.val_ds.denorm(imgs), rows=1, titles=title_probs)
def plots(ims, figsize=(12,6), rows=1, titles=None):
f = plt.figure(figsize=figsize)
for i in range(len(ims)):
sp = f.add_subplot(rows, len(ims)//rows, i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i])
def load_img_id(ds, idx): return np.array(PIL.Image.open(path2+ds.fnames[idx]))
def plot_val_with_title(idxs, title):
imgs = [load_img_id(data.val_ds,x) for x in idxs]
title_probs = [probs[x] for x in idxs]
print(title)
return plots(imgs, rows=1, titles=title_probs, figsize=(16,8))
# 1. A few correct labels at random
plot_val_with_title(rand_by_correct(True), "Correctly classified")
# 2. A few incorrect labels at random
plot_val_with_title(rand_by_correct(False), "Incorrectly classified")
def most_by_mask(mask, mult):
idxs = np.where(mask)[0]
return idxs[np.argsort(mult * probs[idxs])[:4]]
def most_by_correct(y, is_correct):
mult = -1 if (y==1)==is_correct else 1
return most_by_mask((preds == data.val_y)==is_correct & (data.val_y == y), mult)
plot_val_with_title(most_by_correct(0, True), "Most correct icebergs")
plot_val_with_title(most_by_correct(1, True), "Most correct ships")
Explanation: scp submit1 to local machine to upload on kaggle
Post analysis
Looking at correctly, incorrectly classified images
Analyzing results: looking at pictures
As well as looking at the overall metrics, it's also a good idea to look at examples of each of:
* A few correct labels at random
* A few incorrect labels at random
* The most correct labels of each class (ie those with highest probability that are correct)
* The most incorrect labels of each class (ie those with highest probability that are incorrect)
* The most uncertain labels (ie those with probability closest to 0.5).
End of explanation
!pwd
!ls
from fastai.models.cifar10.senet import SENet34
bm = BasicModel(SENet34().cuda(),name='iceberg_34x34')
arch = resnet34
sz = 75 # because our image size is 75*75
bs = 16 # because default batch size of 64 was not giving good converging loss
data=get_data(32,bs)
learn = ConvLearner(data,bm)
Explanation: SENet34
End of explanation |
9,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with mne.Report
This tutorial covers making interactive HTML summaries with
Step1: Before getting started with
Step2: This report yields a textual summary of the
Step3: This time we'll pass a specific subject and subjects_dir (even though
there's only one subject in the sample dataset) and remove our
render_bem=False parameter so we can see the MRI slices, with BEM
contours overlaid on top if available. Since this is computationally
expensive, we'll also pass the mri_decim parameter for the benefit of our
documentation servers, and skip processing the
Step4: Now let's look at how
Step5: To render whitened
Step6: If you want to actually view the noise covariance in the report, make sure
it is captured by the pattern passed to
Step7: Adding custom plots to a report
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The python interface has greater flexibility compared to the command
line interface <gen_mne_report>. For example, custom plots can be added via
the
Step8: Managing report sections
^^^^^^^^^^^^^^^^^^^^^^^^
The MNE report command internally manages the sections so that plots
belonging to the same section are rendered consecutively. Within a section,
the plots are ordered in the same order that they were added using the
Step9: This allows the possibility of multiple scripts adding figures to the same
report. To make this even easier, | Python Code:
import os
import mne
Explanation: Getting started with mne.Report
This tutorial covers making interactive HTML summaries with
:class:mne.Report.
:depth: 2
As usual we'll start by importing the modules we need and loading some
example data <sample-dataset>:
End of explanation
path = mne.datasets.sample.data_path(verbose=False)
report = mne.Report(verbose=True)
report.parse_folder(path, pattern='*raw.fif', render_bem=False)
report.save('report_basic.html')
Explanation: Before getting started with :class:mne.Report, make sure the files you want
to render follow the filename conventions defined by MNE:
.. cssclass:: table-bordered
.. rst-class:: midvalign
============ ==============================================================
Data object Filename convention (ends with)
============ ==============================================================
raw -raw.fif(.gz), -raw_sss.fif(.gz), -raw_tsss.fif(.gz), _meg.fif
events -eve.fif(.gz)
epochs -epo.fif(.gz)
evoked -ave.fif(.gz)
covariance -cov.fif(.gz)
trans -trans.fif(.gz)
forward -fwd.fif(.gz)
inverse -inv.fif(.gz)
============ ==============================================================
Basic reports
^^^^^^^^^^^^^
The basic process for creating an HTML report is to instantiate the
:class:~mne.Report class, then use the :meth:~mne.Report.parse_folder
method to select particular files to include in the report. Which files are
included depends on both the pattern parameter passed to
:meth:~mne.Report.parse_folder and also the subject and
subjects_dir parameters provided to the :class:~mne.Report constructor.
.. sidebar: Viewing the report
On successful creation of the report, the :meth:~mne.Report.save method
will open the HTML in a new tab in the browser. To disable this, use the
open_browser=False parameter of :meth:~mne.Report.save.
For our first example, we'll generate a barebones report for all the
:file:.fif files containing raw data in the sample dataset, by passing the
pattern *raw.fif to :meth:~mne.Report.parse_folder. We'll omit the
subject and subjects_dir parameters from the :class:~mne.Report
constructor, but we'll also pass render_bem=False to the
:meth:~mne.Report.parse_folder method — otherwise we would get a warning
about not being able to render MRI and trans files without knowing the
subject.
End of explanation
pattern = 'sample_audvis_filt-0-40_raw.fif'
report = mne.Report(verbose=True, raw_psd=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_raw_psd.html')
Explanation: This report yields a textual summary of the :class:~mne.io.Raw files
selected by the pattern. For a slightly more useful report, we'll ask for the
power spectral density of the :class:~mne.io.Raw files, by passing
raw_psd=True to the :class:~mne.Report constructor. Let's also refine
our pattern to select only the filtered raw recording (omitting the
unfiltered data and the empty-room noise recordings):
End of explanation
subjects_dir = os.path.join(path, 'subjects')
report = mne.Report(subject='sample', subjects_dir=subjects_dir, verbose=True)
report.parse_folder(path, pattern='', mri_decim=25)
report.save('report_mri_bem.html')
Explanation: This time we'll pass a specific subject and subjects_dir (even though
there's only one subject in the sample dataset) and remove our
render_bem=False parameter so we can see the MRI slices, with BEM
contours overlaid on top if available. Since this is computationally
expensive, we'll also pass the mri_decim parameter for the benefit of our
documentation servers, and skip processing the :file:.fif files:
End of explanation
pattern = 'sample_audvis-no-filter-ave.fif'
report = mne.Report(verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_evoked.html')
Explanation: Now let's look at how :class:~mne.Report handles :class:~mne.Evoked data
(we'll skip the MRIs to save computation time):
End of explanation
cov_fname = os.path.join(path, 'MEG', 'sample', 'sample_audvis-cov.fif')
report = mne.Report(cov_fname=cov_fname, verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_evoked_whitened.html')
Explanation: To render whitened :class:~mne.Evoked files with baseline correction, add
the noise covariance file. This will display ERP/F plots for both the
original and whitened :class:~mne.Evoked objects, but scalp topomaps only
for the original.
End of explanation
pattern = 'sample_audvis-cov.fif'
info_fname = os.path.join(path, 'MEG', 'sample', 'sample_audvis-ave.fif')
report = mne.Report(info_fname=info_fname, verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_cov.html')
Explanation: If you want to actually view the noise covariance in the report, make sure
it is captured by the pattern passed to :meth:~mne.Report.parse_folder, and
also include a source for an :class:~mne.Info object (any of the
:class:~mne.io.Raw, :class:~mne.Epochs or :class:~mne.Evoked
:file:.fif files that contain subject data also contain the measurement
information and should work):
End of explanation
# generate a custom plot:
fname_evoked = os.path.join(path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname_evoked,
condition='Left Auditory',
baseline=(None, 0),
verbose=True)
fig = evoked.plot(show=False)
# add the custom plot to the report:
report.add_figs_to_section(fig, captions='Left Auditory', section='evoked')
report.save('report_custom.html')
Explanation: Adding custom plots to a report
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The python interface has greater flexibility compared to the command
line interface <gen_mne_report>. For example, custom plots can be added via
the :meth:~mne.Report.add_figs_to_section method:
End of explanation
report.save('report.h5', overwrite=True)
report_from_disk = mne.open_report('report.h5')
print(report_from_disk)
Explanation: Managing report sections
^^^^^^^^^^^^^^^^^^^^^^^^
The MNE report command internally manages the sections so that plots
belonging to the same section are rendered consecutively. Within a section,
the plots are ordered in the same order that they were added using the
:meth:~mne.Report.add_figs_to_section command. Each section is identified
by a toggle button in the top navigation bar of the report which can be used
to show or hide the contents of the section. To toggle the show/hide state of
all sections in the HTML report, press :kbd:t.
.. note:: Although we've been generating separate reports in each example, you could
   easily create a single report for all :file:`.fif` files (raw, evoked,
   covariance, etc) by passing ``pattern='*.fif'``.
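A rough sketch of that idea (untested here, and simply reusing the path, subjects_dir, info_fname and cov_fname variables defined earlier in this tutorial):
# Sketch only: a single report covering every .fif file in the sample dataset.
report_all = mne.Report(subject='sample', subjects_dir=subjects_dir,
                        info_fname=info_fname, cov_fname=cov_fname,
                        raw_psd=True, verbose=True)
report_all.parse_folder(path, pattern='*.fif', mri_decim=25)
report_all.save('report_all.html', open_browser=False)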
Editing a saved report
^^^^^^^^^^^^^^^^^^^^^^
Saving to HTML is a write-only operation, meaning that we cannot read an
.html file back as a :class:~mne.Report object. In order to be able
to edit a report once it's no longer in-memory in an active Python session,
save it as an HDF5 file instead of HTML:
End of explanation
with mne.open_report('report.h5') as report:
report.add_figs_to_section(fig,
captions='Left Auditory',
section='evoked',
replace=True)
report.save('report_final.html')
Explanation: This allows the possibility of multiple scripts adding figures to the same
report. To make this even easier, :class:mne.Report can be used as a
context manager:
End of explanation |
9,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Play Notebook
Import Data
The first dataset we will generate is the synthetic Hastie dataset from sklearn (make_hastie_10_2)
Step1: Neural Network
First we train the network on the dataset
Step2: If you already trained the network, there will be a pickle file with the trained network available. Now underneath we test it on the test set
Step3: SVM
Visualize with (k)PCA
Here I import some code to plot the (k)PCA. I use the PCA library from sklearn
Step4: Test using Python definitions from other files
Instead of import we use %run, which works because it will store all the functions in the memory of the kernel.
As you can see, the example works. | Python Code:
from sklearn import datasets
X, y = datasets.make_hastie_10_2(n_samples=12000, random_state=1)
#make random test and train set
from sklearn import cross_validation
from sklearn.cross_validation import train_test_split
train_x, test_x, train_y, test_y = train_test_split(X, y, test_size=0.3, random_state=0)
Explanation: Play Notebook
Import Data
The first dataset we will generate is the synthetic Hastie dataset from sklearn (make_hastie_10_2)
End of explanation
%run NeuralNetwork
import numpy as np
import cPickle as pickle
#the neural network is based on code by Riaan Zoetmulder
inputData = X
targetData = y
myNN = NN.NNetwork(len(inputData[1]) , 60, 1 , 0.1, 0.5)
myNN.backPropagation(np.asarray(inputData), np.asarray(targetData), 1000)
#saves the trained state of the network
with open('NeuralNetwork.p', 'wb') as output_file:
pickle.dump(myNN, output_file, -1)
Explanation: Neural Network
First we train the network on the dataset
End of explanation
import cPickle as pickle
#has definition accuracy, accuracy(y_target, y_predict)
%run modelSelection
Explanation: If you already trained the network, there will be a pickle file with the trained network available. Now underneath we test it on the test set
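The evaluation step itself is not shown in this cell; a minimal sketch, assuming the pickled NNetwork object exposes a forward-pass method (named predict here purely as a placeholder) and that accuracy follows the accuracy(y_target, y_predict) signature noted above:
# Sketch only: reload the trained network and score it on the held-out test set.
# 'predict' is a placeholder name for whatever forward-pass method NNetwork provides.
with open('NeuralNetwork.p', 'rb') as input_file:
    trainedNN = pickle.load(input_file)
predictions = [trainedNN.predict(sample) for sample in test_x]
print(accuracy(test_y, predictions))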
End of explanation
#important to have this magic line in place, otherwise the notebook will not plot
%matplotlib inline
#this runs the file from the folder, so all definitions from the file will be in the memory of the kernel
%run PCA_visualization
kPCA_visualization2d(X, y)
Explanation: SVM
Visualize with (k)PCA
Here I import some code to plot the (k)PCA. I use the PCA library from sklearn
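The SVM fit itself does not appear in this dump; a minimal sketch, assuming a plain scikit-learn SVC on the train/test split created at the top of the notebook:
# Sketch only: fit a support vector classifier and report its test accuracy.
from sklearn.svm import SVC
clf = SVC(kernel='rbf')
clf.fit(train_x, train_y)
print(clf.score(test_x, test_y))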
End of explanation
%run notebook_import_test
print_import()
Explanation: Test using Python definitions from other files
Instead of import we use %run, which works because it will store all the functions in the memory of the kernel.
As you can see, the example works.
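For comparison, and assuming notebook_import_test.py sits on the import path as a regular module, the same call via an ordinary import would be:
# Sketch only: the import-based alternative to %run.
import notebook_import_test
notebook_import_test.print_import()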
End of explanation |
9,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Epidemics
During this seminar we will numerically solve systems of differential equations of SI, SIS and SIR models. <br> This experience is going to help us as we switch to network models.
SI model
In this model a self-sustaining infection process is considered. The infected part of the population has no chance to be healed.<br>
In other words
Step1: The cool thing is that we can set $\beta$ and $\gamma$ to be dependent on $t$, which can be interpreted as some ''seasonal'' profile of the disease. <br>
Now, based on this code, implement SIS and SIR models
Step2: SIR model
In the SIR model, the healed population gains immunity to the infection:
\begin{equation}
\begin{cases}
\cfrac{ds(t)}{dt} = -\beta s(t)i(t)\\
\cfrac{di(t)}{dt} = \beta s(t)i(t) - \gamma i(t)\\
\cfrac{dr(t)}{dt} = \gamma i(t)
\end{cases}
\\
i(t) + s(t) + r(t) = 1
\end{equation} | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
%matplotlib inline
def si_model( z0, T, **kwargs) :
beta = kwargs['beta']
t = np.arange( T, step = 0.1 )
si = lambda z ,t, beta : np.array([
-beta * z[0] * z[1],
beta * z[0] * z[1]])
return t, odeint(si, z0, t, (beta,))
t, z = si_model( z0 = [0.9,0.1], T = 50, beta = 0.2 )
# Lets plot our solution and phase-plot
fig, ax = plt.subplots(1,2,figsize=(14,6))
ax[0].plot(t, z[:,0], color='red')
ax[0].plot(t, z[:,1], color='blue')
ax[0].set_xlabel('$t$')
ax[0].set_ylabel('proportion')
ax[0].legend(['$S$','$I$'])
ax[1].plot(z[:,1], z[:,0], color = 'blue')
ax[1].set_xlabel('$I$')
ax[1].set_ylabel('$S$')
Explanation: Epidemics
During this seminar we will numerically solve systems of differential equations of SI, SIS and SIR models. <br> This experience is going to help us as we switch to network models.
SI model
In this model a self-sustaining infection process is considered. The infected part of the population has no chance to be healed.<br>
In other words:
\begin{equation}
\begin{cases}
\cfrac{ds(t)}{dt} = -\beta s(t)i(t)\\
\cfrac{di(t)}{dt} = \beta s(t)i(t)
\end{cases}
\\
i(t) + s(t) = 1
\end{equation}
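Since $s(t) = 1 - i(t)$, this is just the logistic equation, so the numerical curves can be checked against the standard closed form:
\begin{equation}
\cfrac{di(t)}{dt} = \beta i(t)\big(1 - i(t)\big)
\quad\Rightarrow\quad
i(t) = \cfrac{i_0 e^{\beta t}}{1 - i_0 + i_0 e^{\beta t}}
\end{equation}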
End of explanation
def sis_model( z0, T, **kwargs) :
beta, gamma = kwargs['beta'], kwargs['gamma']
t = np.arange( T, step = 0.1 )
sis = lambda z ,t, beta, gamma : np.array([
-beta * z[0] * z[1] + gamma * z[1],
beta * z[0] * z[1] - gamma * z[1]])
return t, odeint(sis, z0, t, (beta,gamma,))
t, z = sis_model( z0 = [0.5,0.5], T = 50, beta = 0.5, gamma = 0.2 )
# Lets plot our solution and phase-plot
fig, ax = plt.subplots(1,2,figsize=(14,6))
ax[0].plot(t, z[:,0], color='red')
ax[0].plot(t, z[:,1], color='blue')
ax[0].set_xlabel('$t$')
ax[0].set_ylabel('proportion')
ax[0].legend(['$S$','$I$'])
ax[1].plot( z[:,1], z[:,0])
ax[1].set_xlabel('$I$')
ax[1].set_ylabel('$S$')
Explanation: The cool thing is that we can set $\beta$ and $\gamma$ to be dependent on $t$, which can be interpreted as some ''seasonal'' profile of the disease. <br>
Now, based on this code, implement SIS and SIR models:
SIS model
The SIS model allows infected agents to be cured, but without any further immunity.
\begin{equation}
\begin{cases}
\cfrac{ds(t)}{dt} = -\beta s(t)i(t) + \gamma i(t)\\
\cfrac{di(t)}{dt} = \beta s(t)i(t) - \gamma i(t)
\end{cases}
\\
i(t) + s(t) = 1
\end{equation}
Implement this model and check cases when $\gamma \lessgtr \beta$
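A handy check for those two cases (a standard result, stated here for reference): setting $di/dt = 0$ with $s = 1 - i$ gives the endemic equilibrium
\begin{equation}
i^{*} = 1 - \cfrac{\gamma}{\beta} \ \text{ if } \beta > \gamma, \qquad i^{*} = 0 \ \text{ otherwise.}
\end{equation}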
End of explanation
def sir_model( z0, T, **kwargs) :
beta, gamma = kwargs['beta'], kwargs['gamma']
t = np.arange( T, step = 0.1 )
sir = lambda z, t, beta, gamma : np.array([
-beta * z[0] * z[1],
beta * z[0] * z[1] - gamma * z[1],
gamma * z[1]])
return t, odeint(sir, z0, t, (beta,gamma,))
t, z = sir_model( z0 = [0.9,0.09,0.01], T = 50, beta = 0.5, gamma = 0.02 )
# Lets plot our solution and phase-plot
fig, ax = plt.subplots(1,2,figsize=(14,6))
ax[0].plot(t, z[:,0], color='blue')
ax[0].plot(t, z[:,1], color='red')
ax[0].plot(t, z[:,2], color='orange')
ax[0].set_xlabel('$t$')
ax[0].set_ylabel('proportion')
ax[0].legend(['$S$','$I$', '$R$'])
ax[1].plot( z[:,0], z[:,2])
ax[1].set_xlabel('$S$')
ax[1].set_ylabel('$R$')
Explanation: SIR model
In the SIR model, the healed population gains immunity to the infection:
\begin{equation}
\begin{cases}
\cfrac{ds(t)}{dt} = -\beta s(t)i(t)\\
\cfrac{di(t)}{dt} = \beta s(t)i(t) - \gamma i(t)\\
\cfrac{dr(t)}{dt} = \gamma i(t)
\end{cases}
\\
i(t) + s(t) + r(t) = 1
\end{equation}
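A standard observation worth comparing against the plots: at $t=0$ the infection can only grow if $\beta s(0) > \gamma$, i.e. an outbreak requires
\begin{equation}
R_0 = \cfrac{\beta}{\gamma} > \cfrac{1}{s(0)},
\end{equation}
which is the familiar $R_0 > 1$ threshold when $s(0) \approx 1$.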
End of explanation |
9,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
English localization
Static list of correct answers in English.
Additional columns
Language column
Preparation for temporalities
Renaming
List of answers
Scientific questions
Demographic questions
Language selection
Basic operations
Checkpoint / Question matching
Step1: Additional columns
<a id=addcol />
Language column
<a id=lang />
Step2: Preparation for temporalities
<a id=temp />
Step3: Displays all unique answers to every question
printIndex = -12
avoidIndexes = [-12, -9, 28, 29, 30, 31]
for question in gform.columns
Step4: List of answers
<a id=enform />
Scientific questions
<a id=ensciform />
Step5: Demographic questions
<a id=endemform />
Step6: Language selection
<a id=langsel />
Step7: Basic operations
<a id=basicops />
Access
For instance, access to the correct answer to the 19th question.
Step8: Checkpoint / Question matching
<a id=checkquestmatch />
cf Hero.Coli knowledge content document | Python Code:
%run "../Utilities/Preparation.ipynb"
processGFormEN = not ('gformEN' in globals())
if processGFormEN:
# tz='Europe/Berlin' time
dateparseGForm = lambda x: pd.Timestamp(x.split(' GMT')[0], tz='Europe/Berlin').tz_convert('utc')
if processGFormEN:
csvEncoding = 'utf-8'
gformPath = "../../data/Google forms/"
genericFormName = '-gform-'
csvSuffix = '.csv'
enLanguageID = 'en'
enSuffix = enLanguageID
# dataFilesNamesStem is created in Preparation.ipynb
try:
gformEN = pd.read_csv(\
gformPath + dataFilesNamesStem + genericFormName + enSuffix + csvSuffix,\
dtype=str,\
parse_dates=['Timestamp'],\
date_parser=dateparseGForm,\
)
print("gformEN read_csv success")
except FileNotFoundError:
print("gformEN read_csv failed")
frLanguageID = 'fr'
frSuffix = frLanguageID
frTranslationsPath = gformPath + 'translations-' + frSuffix + csvSuffix
frCorrectScientificAnswersPath = gformPath + 'CorrectScientific-' + frSuffix + csvSuffix
frDemographicAnswersPath = gformPath + 'Demographic-' + frSuffix + csvSuffix
Explanation: English localization
Static list of correct answers in English.
Additional columns
Language column
Preparation for temporalities
Renaming
List of answers
Scientific questions
Demographic questions
Language selection
Basic operations
Checkpoint / Question matching
End of explanation
if processGFormEN:
gformEN['Language'] = pd.Series(enLanguageID, index=gformEN.index)
Explanation: Additional columns
<a id=addcol />
Language column
<a id=lang />
End of explanation
if processGFormEN:
# when did the user answer the questionnaire? After playing, before playing, undefined?
answerTemporalities = ['before', 'after', 'undefined'];
gformEN['Temporality'] = pd.Series(answerTemporalities[2], index=gformEN.index)
Explanation: Preparation for temporalities
<a id=temp />
End of explanation
if processGFormEN:
renamedQuestions = pd.Index([
'Timestamp',
'Are you interested in video games?',
'Do you play video games?',
'How old are you?',
'What is your gender?',
'How long have you studied biology?',
'Are you interested in biology?',
'Before playing Hero.Coli, had you ever heard about synthetic biology?',
'Before playing Hero.Coli, had you ever heard about BioBricks?',
'Have you ever played an older version of Hero.Coli before?',
'Have you played the current version of Hero.Coli?',
'Have you played the arcade cabinet version of Hero.Coli?',
'Have you played the Android version of Hero.Coli?',
'In order to modify the abilities of the bacterium, you have to...',
'What are BioBricks and devices?',
'What is the name of this BioBrick? TER',
'What is the name of this BioBrick? PR',
'What is the name of this BioBrick? CDS',
'What is the name of this BioBrick? RBS',
'What does this BioBrick do? TER',
'What does this BioBrick do? PR',
'What does this BioBrick do? CDS',
'What does this BioBrick do? RBS',
'Pick the case where the BioBricks are well-ordered:',
'When does green fluorescence happen?',
'What happens when you unequip the movement device?',
'What is this? PLASMID',
'What does this device do? PCONS:RBS:GFP:TER',
'What does this device do? PCONS:RBS:FLHDC:TER',
'What does this device do? PCONS:RBS:AMPR:TER',
'What does this device do? PBAD:RBS:GFP:TER',
'What does this device do? PCONS:RBS:GFP:TER 2',
'What does this device do? PCONS:RBS:FLHDC:TER 2',
'What does this device do? PCONS:RBS:AMPR:TER 2',
'What does this device do? PBAD:RBS:GFP:TER 2',
'Guess: what would a device producing l-arabinose do, if it started with a l-arabinose-induced promoter?',
'Guess: the bacterium would glow yellow...',
'What is the species of the bacterium of the game?',
'What is the scientific name of the tails of the bacterium?',
'Find the antibiotic:',
'You can write down remarks here.',
'Do not edit - pre-filled anonymous ID',
'Language',
'Temporality'
])
if processGFormEN:
gformEN.columns = renamedQuestions
Explanation: Displays all unique answers to every question
printIndex = -12
avoidIndexes = [-12, -9, 28, 29, 30, 31]
for question in gform.columns:
if (printIndex not in avoidIndexes):
print("Q" + str(printIndex) + " " + question \
+ "\n" + str(gform[question].unique()) + "\n\n")
printIndex = printIndex + 1
Renaming
<a id=renaming />
Labels are made more explicit. Their naming was done automatically by Google forms.
End of explanation
if processGFormEN:
correctAnswersEN = pd.Series(
[
# Timestamp
[], #1
# Basic demographics questions
[], #2
[], #3
[], #4
[], #5
# Basic biology questions
[], #6
[], #7
[], #8
[], #9
# Your experience with Hero.Coli
[], #10
[], #11
[], #12
[], #13
# General mechanics of the game
["Edit the DNA of the bacterium"], #14
["DNA sequences"], #15
# BioBricks
["Terminator"], #16
["Promoter"], #17
["Coding Sequence"], #18
["RBS"], #19
# BioBrick functions
["It shows the end of the device"], #20
["It controls when the device is active"], #21
["It controls which protein is produced, and thus which ability is affected"], #22
["It controls the level of expression, and thus how much the ability will be affected"], #23
# Devices
["Option 1"], #24
["Under blue light, when the GFP device is equipped"], #25
["Flagella quickly disappear one by one"], #26
# Devices
["A plasmid - it makes it possible to equip an additional device"], #27
# Device symbols
["It generates green fluorescence"], #28
["It makes it possible to move faster"], #29
["It generates antibiotic resistance"], #30
["It generates green fluorescence in presence of l-arabinose"], #31
# Device symbols
["It generates green fluorescence"], #32
["It makes it possible to move faster"], #33
["It generates antibiotic resistance"], #34
["It generates green fluorescence in presence of l-arabinose"], #35
# Beyond the game
["After being induced, it would produce more and"], #36
["If it produces YFP under cyan light",
"If it produced YFP under cyan light"], #37
["E. Coli"], #38
["Flagella"], #39
["Ampicillin"], #40
# Remarks
[], #41
# ID
[], #42
# Language
[], #43
# Temporality
[], #44
], index = gformEN.columns
)
#correctAnswersEN
Explanation: List of answers
<a id=enform />
Scientific questions
<a id=ensciform />
End of explanation
if processGFormEN:
interestPositives = ["A lot", "Extremely", "Moderately"]
gameInterestPositives = interestPositives
frequencyPositives = interestPositives
agePositives = [18,19,20,21,22,23]
genderPositives = ["Female"]
biologyStudyPositives = ["Until bachelor's degree", "At least until master's degree"]
biologyInterestPositives = interestPositives
yesNoIdontknowPositives = ["Yes"]
previousPlayPositives = ["Multiple times","A few times","Once","Yes"]
languagePositives = [enLanguageID]
temporalityPositives = [answerTemporalities[1]]
demographicAnswersEN = pd.Series(
[
# Timestamp
[], #1
# Basic demographics questions
interestPositives, #2
frequencyPositives, #3
agePositives, #4
genderPositives, #5
# Basic biology questions
biologyStudyPositives, #6
biologyInterestPositives, #7
yesNoIdontknowPositives, #8
yesNoIdontknowPositives, #9
# Your experience with Hero.Coli
previousPlayPositives, #10
previousPlayPositives, #11
previousPlayPositives, #12
previousPlayPositives, #13
# General mechanics of the game
[], #14
[], #15
# BioBricks
[], #16
[], #17
[], #18
[], #19
# BioBrick functions
[], #20
[], #21
[], #22
[], #23
# Devices
[], #24
[], #25
[], #26
# Devices
[], #27
# Device symbols
[], #28
[], #29
[], #30
[], #31
# Device symbols
[], #32
[], #33
[], #34
[], #35
# Beyond the game
[], #36
[], #37
[], #38
[], #39
[], #40
# Remarks
[], #41
# ID
[], #42
# Language
languagePositives, #43
# Temporality
temporalityPositives, #44
], index = gformEN.columns
)
#demographicAnswersEN
Explanation: Demographic questions
<a id=endemform />
End of explanation
if processGFormEN:
correctAnswers = correctAnswersEN
if processGFormEN:
demographicAnswers = demographicAnswersEN
Explanation: Language selection
<a id=langsel />
End of explanation
#correctAnswers.loc[gformEN.columns[19]]
Explanation: Basic operations
<a id=basicops />
Access
For instance, access to the correct answer to the 19th question.
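A minimal sketch of how that lookup could be used to score one respondent (assuming the CSV was read successfully and contains at least one row):
# Sketch only: compare the first respondent's answer to the accepted answers for question 19.
if 'gformEN' in globals() and len(gformEN) > 0:
    question = gformEN.columns[19]
    answer = gformEN[question].iloc[0]
    print(question, ':', answer, '->', answer in correctAnswers.loc[question])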
End of explanation
if processGFormEN:
checkpointQuestionMatching = pd.DataFrame(
{
'checkpoint' : [
# "Timestamp", # 1
'',
# Basic demographics questions
# "Are you interested in video games?", # 2
'',
# "Do you play video games?", # 3
'',
# "How old are you?", # 4
'',
# "What is your gender?", # 5
'',
# Basic biology questions
# "How long have you studied biology?", # 6
'',
# "Are you interested in biology?", # 7
'',
# "Before playing Hero.Coli, had you ever heard about synthetic biology?", # 8
'',
# "Before playing Hero.Coli, had you ever heard about BioBricks?", # 9
'',
# Your experience with Hero.Coli
# "Have you ever played an older version of Hero.Coli before?", # 10
'',
# "Have you played the current version of Hero.Coli?", # 11
'',
# "Have you played the arcade cabinet version of Hero.Coli?", # 12
'',
# "Have you played the Android version of Hero.Coli?", # 13
'',
# General mechanics of the game
# "In order to modify the abilities of the bacterium, you have to...", # 14
'tutorial1.Checkpoint00',
# "What are BioBricks and devices?", # 15
'tutorial1.Checkpoint00',
# BioBricks
# "What is the name of this BioBrick?", # 16
'tutorial1.Checkpoint05',
# "What is the name of this BioBrick?", # 17
'tutorial1.Checkpoint05',
# "What is the name of this BioBrick?", # 18
'tutorial1.Checkpoint02',
# "What is the name of this BioBrick?", # 19
'tutorial1.Checkpoint01',
# BioBrick functions
# "What does this BioBrick do?", # 20
'tutorial1.Checkpoint05',
# "What does this BioBrick do?", # 21
'tutorial1.Checkpoint05',
# "What does this BioBrick do?", # 22
'tutorial1.Checkpoint02',
# "What does this BioBrick do?", # 23
'tutorial1.Checkpoint01',
# Devices
# "Pick the case where the BioBricks are well-ordered:", # 24
'tutorial1.Checkpoint01',
# "When does green fluorescence happen?", # 25
'tutorial1.Checkpoint02',
# "What happens when you unequip the movement device?", # 26
'tutorial1.Checkpoint00',
# Devices
# "What is this?", # 27
'tutorial1.Checkpoint05',
# Device symbols
# "What does this device do?", # 28
'tutorial1.Checkpoint02',
# "What does this device do?", # 29
'tutorial1.Checkpoint02',
# "What does this device do?", # 30
'tutorial1.Checkpoint13',
# "What does this device do?", # 31
'tutorial1.Checkpoint05',
# Device symbols
# "What does this device do?", # 32
'tutorial1.Checkpoint02',
# "What does this device do?", # 33
'tutorial1.Checkpoint02',
# "What does this device do?", # 34
'tutorial1.Checkpoint13',
# "What does this device do?", # 35
'tutorial1.Checkpoint05',
# Beyond the game
# "Guess: what would a device producing l-arabinose do, \
# if it started with a l-arabinose-induced promoter?", # 36
'tutorial1.Checkpoint05',
# "Guess: the bacterium would glow yellow...", # 37
'tutorial1.Checkpoint02',
# "What is the species of the bacterium of the game?", # 38
'tutorial1.Checkpoint00',
# "What is the scientific name of the tails of the bacterium?", # 39
'tutorial1.Checkpoint00',
# "Find the antibiotic:", # 40
'tutorial1.Checkpoint02',
# Remarks
# "You can write down remarks here.", # 41
'',
# Thanks to have filled this study!
# "Do not edit - pre-filled anonymous ID" # 42
'',
# Language
'',
# Temporality
'',
]
}, index = gformEN.columns
)
#checkpointQuestionMatching
#checkpointQuestionMatching['checkpoint'][20]
#checkpointQuestionMatching.loc[gformEN.columns[20], 'checkpoint']
if processGFormEN:
def getUniqueSortedCheckpoints( checkpoints ):
result = checkpoints.unique()
result = result[result!='']
result = pd.Series(result)
result = result.sort_values()
result.index = range(0, len(result))
return result
if processGFormEN:
validableCheckpoints = getUniqueSortedCheckpoints(checkpointQuestionMatching['checkpoint'])
Explanation: Checkpoint / Question matching
<a id=checkquestmatch />
cf Hero.Coli knowledge content document
End of explanation |
9,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
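The fill-in pattern is the same for every property in this notebook; as a purely illustrative sketch (the values below are placeholders, not NCAR's actual answers), a STRING property takes free text while BOOLEAN and INTEGER properties take the corresponding Python values:
# Placeholders only -- replace with the real SANDBOX-3 answers before publishing.
# DOC.set_value("One-paragraph overview of the atmospheric chemistry component.")  # STRING
# DOC.set_value(True)   # BOOLEAN
# DOC.set_value(100)    # INTEGER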
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
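For ENUM properties with cardinality 1.N the scaffold alone does not say whether repeated calls accumulate or a list is expected; a hedged sketch, assuming one DOC.set_value call per selected choice (verify against the pyesdoc documentation):
# Assumption: one call per selected choice for a 1.N ENUM property.
# DOC.set_value("troposhere")
# DOC.set_value("stratosphere")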
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
* Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
9,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recommender systems
Task description
A small online store asked you to add product ranking to its "Viewed earlier" block - instead of the products the user viewed most recently, it should now show those viewed products that the user is most likely to buy. The quality of your solution will be judged by the number of purchases compared with the previous solution in an A/B test, because statistical significance on sales revenue would take longer to reach due to the spread in prices. So, knowing nothing in advance about the correlation between offline and online quality metrics, at the start of the project you can only try to optimize recall@k and precision@k.
This assignment is devoted to building simple baselines for this task
Step1: 1. Reading sessions train and test datasets.
Step2: 2. Split datasets by looks and purchases.
Step3: 3. Create and sort arrays of unique ids counters for looks and purchases for train dataset.
Step4: 4. Calculating metrics for train dataset with suggestions based on looks.
Step5: 5. Calculating metrics for train dataset with suggestions based on purchases.
Step6: 6. Create and sort arrays of unique ids counters for looks and purchases for test dataset.
Step7: 7. Calculating metrics for test dataset with suggestions based on looks.
Step8: 8. Calculating metrics for test dataset with suggestions based on purchases. | Python Code:
from __future__ import division, print_function
import numpy as np
import pandas as pd
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Explanation: Recommender systems
Task description
A small online store asked you to add product ranking to its "Viewed earlier" block - instead of the products the user viewed most recently, it should now show those viewed products that the user is most likely to buy. The quality of your solution will be judged by the number of purchases compared with the previous solution in an A/B test, because statistical significance on sales revenue would take longer to reach due to the spread in prices. So, knowing nothing in advance about the correlation between offline and online quality metrics, at the start of the project you can only try to optimize recall@k and precision@k.
This assignment is devoted to building simple baselines for this task: ranking the viewed products by view frequency and by purchase frequency. On the one hand, these baselines can help you roughly estimate the possible effect of ranking the products in the block - for example, to put some numbers into a commercial proposal for the client - and on the other hand, they may turn out to be the best option if there is very little data (not enough to train even simple models).
Input data
You are given two samples of user sessions - the ids of viewed products and the ids of purchased products. One sample will be used for training (estimating product popularity), and the other for testing.
The files contain one session per line. Session format: the ids of viewed products separated by commas, then a ';', followed by the ids of purchased products (if any), separated by commas. For example, 1,2,3,4; or 1,2,3,4;5,6.
It is guaranteed that all ids of purchased products are distinct.
Important:
Sessions in which the user bought nothing are excluded from the quality evaluation.
If a product did not appear in the training sample, its popularity is 0.
We recommend distinct products, and their number must not exceed the number of distinct products viewed by the user.
There are never more recommendations than the minimum of two numbers: the number of products viewed by the user and the k in recall@k / precision@k.
Task
On the training set, build the frequencies of id occurrences among the viewed and among the purchased products (an id may appear several times among the viewed products; all occurrences must be counted)
Implement two recommendation algorithms:
sorting the viewed ids by popularity (frequency of appearance among views),
sorting the viewed ids by purchase rate (frequency of appearance among purchases).
For these algorithms, write out, separated by spaces, AverageRecall@1, AveragePrecision@1, AverageRecall@5, AveragePrecision@5 on the training and test sets, rounded to 2 decimal places. These will be your answers for this assignment. Look at how they relate to each other. Where did the quality turn out to be higher? Is the difference significant? Pay attention to the difference in quality between the training and test sets in the case of recommendations by purchase frequency.
If the frequencies are equal, sort by ascending time of viewing (the earlier an id appeared among the viewed products, the higher its priority)
End of explanation
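As a minimal, self-contained illustration of the two metrics described above (the session values here are made up, not taken from the data):
# Toy example of precision@k and recall@k for a single session (made-up ids)
viewed = [63, 68, 69, 70, 66, 61, 59]
bought = [66, 63]
k = 5
recs = viewed[:k]                           # recommend the first k viewed ids
hits = sum(1 for r in recs if r in bought)  # 63 and 66 were bought -> 2 hits
print(hits / k)                             # precision@5 = 0.4
print(hits / len(bought))                   # recall@5 = 1.0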
# Reading train and test data
with open('coursera_sessions_train.txt', 'r') as f:
sess_train = f.read().splitlines()
with open('coursera_sessions_test.txt', 'r') as f:
sess_test = f.read().splitlines()
Explanation: 1. Reading sessions train and test datasets.
End of explanation
# Create train array split by looks (look_items) and purchases (pur_items)
sess_train_lp = []
for sess in sess_train:
look_items, pur_items = sess.split(';')
look_items = map(int, look_items.split(','))
if len(pur_items) > 0:
pur_items = map(int, pur_items.split(','))
else:
pur_items = []
sess_train_lp.append([look_items, pur_items])
# Create test array split by looks (look_items) and purchases (pur_items)
sess_test_lp = []
for sess in sess_test:
look_items, pur_items = sess.split(';')
look_items = map(int, look_items.split(','))
if len(pur_items) > 0:
pur_items = map(int, pur_items.split(','))
else:
pur_items = []
sess_test_lp.append([look_items, pur_items])
Explanation: 2. Split datasets by looks and purchases.
End of explanation
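For reference, this is what the splitting logic above does to a single line in the session format described earlier (a toy line, not taken from the actual files):
# Toy session line in the "viewed;purchased" format
line = '1,2,3,4;5,6'
look_str, pur_str = line.split(';')
looks = list(map(int, look_str.split(',')))                                  # [1, 2, 3, 4]
purchases = list(map(int, pur_str.split(','))) if len(pur_str) > 0 else []   # [5, 6]
print(looks, purchases)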
# Array of looks
sess_train_l = [row[0] for row in sess_train_lp]
sess_train_l_np = np.array( [id_n for sess in sess_train_l for id_n in sess] )
# Array of unique ids and looks in train data
sess_train_l_cnt = np.transpose(np.unique(sess_train_l_np, return_counts=True))
sess_train_l_cnt
# Array of purchases
sess_train_p = [row[1] for row in sess_train_lp]
sess_train_p_np = np.array( [id_n for sess in sess_train_p for id_n in sess] )
# Array of unique ids and purchases in train dataset
sess_train_p_cnt = np.transpose(np.unique(sess_train_p_np, return_counts=True))
sess_train_p_cnt
# Sorting arrays of looks and purchases by counts
sess_train_l_cnt = sess_train_l_cnt[sess_train_l_cnt[:,1].argsort()][::-1]
sess_train_p_cnt = sess_train_p_cnt[sess_train_p_cnt[:,1].argsort()][::-1]
Explanation: 3. Create and sort arrays of unique ids counters for looks and purchases for train dataset.
End of explanation
def prec_rec_metrics(session, recommendations, k):
    # Count how many of the recommended ids were actually purchased in this session
    purchase = 0
    for ind in recommendations:
        if ind in session:
            purchase += 1
    precision = purchase / k
    recall = purchase / len(session)
    return(precision, recall)
# Calculate metrics for train dataset, suggestions based on looks
prec_at_1_tr_l, rec_at_1_tr_l = [], []
prec_at_5_tr_l, rec_at_5_tr_l = [], []
k1, k5 = 1, 5
for i, sess_p in enumerate(sess_train_p):
# skip sessions without purchases
if sess_p == []: continue
# looks ids
sess_l = sess_train_l[i]
# sorted looks ids indices in sess_train_l_cnt array
# sort in accordance with looks counts
l_ind_sess = []
for j in range(len(sess_l)):
l_ind_sess.append(np.where(sess_train_l_cnt[:,0] == sess_l[j])[0][0])
l_ind_sess_sorted = np.unique(l_ind_sess)
# k1 recommendations
num_of_recs_k1 = min(k1, len(sess_l))
if num_of_recs_k1 == 0: continue
recs_k1 = sess_train_l_cnt[l_ind_sess_sorted[:num_of_recs_k1],0]
# k1 metrics
prec_1, rec_1 = prec_rec_metrics(sess_p, recs_k1, k1)
prec_at_1_tr_l.append(prec_1)
rec_at_1_tr_l.append(rec_1)
# k5 recommendations
num_of_recs_k5 = min(k5, len(sess_l))
if num_of_recs_k5 == 0: continue
recs_k5 = sess_train_l_cnt[l_ind_sess_sorted[:num_of_recs_k5],0]
# k5 metrics
prec_5, rec_5 = prec_rec_metrics(sess_p, recs_k5, k5)
prec_at_5_tr_l.append(prec_5)
rec_at_5_tr_l.append(rec_5)
avg_prec_at_1_tr_l = np.mean(prec_at_1_tr_l)
avg_rec_at_1_tr_l = np.mean(rec_at_1_tr_l)
avg_prec_at_5_tr_l = np.mean(prec_at_5_tr_l)
avg_rec_at_5_tr_l = np.mean(rec_at_5_tr_l)
with open('ans1.txt', 'w') as f:
r1 = '%.2f' % round(avg_rec_at_1_tr_l, 2)
p1 = '%.2f' % round(avg_prec_at_1_tr_l, 2)
r5 = '%.2f' % round(avg_rec_at_5_tr_l, 2)
p5 = '%.2f' % round(avg_prec_at_5_tr_l, 2)
ans1 = ' '.join([r1, p1, r5, p5])
print('Answer 1:', ans1)
f.write(ans1)
Explanation: 4. Calculating metrics for train dataset with suggestions based on looks.
End of explanation
# Calculate metrics for train dataset, suggestions based on purchases
prec_at_1_tr_p, rec_at_1_tr_p = [], []
prec_at_5_tr_p, rec_at_5_tr_p = [], []
k1, k5 = 1, 5
for i, sess_p in enumerate(sess_train_p):
# skip sessions without purchases
if sess_p == []: continue
# looks ids
sess_l = sess_train_l[i]
# sorted looks ids indices in sess_train_p_cnt array
# sort in accordance with purchases counts
l_ind_sess = []
for j in range(len(sess_l)):
if sess_l[j] not in sess_train_p_cnt[:,0]: continue
l_ind_sess.append(np.where(sess_train_p_cnt[:,0] == sess_l[j])[0][0])
l_ind_sess_sorted = np.unique(l_ind_sess)
# k1 recommendations
num_of_recs_k1 = min(k1, len(sess_l), len(l_ind_sess_sorted))
if num_of_recs_k1 == 0: continue
recs_k1 = sess_train_p_cnt[l_ind_sess_sorted[:num_of_recs_k1],0]
# k1 metrics
prec_1, rec_1 = prec_rec_metrics(sess_p, recs_k1, k1)
prec_at_1_tr_p.append(prec_1)
rec_at_1_tr_p.append(rec_1)
# k5 recommendations
num_of_recs_k5 = min(k5, len(sess_l), len(l_ind_sess_sorted))
if num_of_recs_k5 == 0: continue
recs_k5 = sess_train_p_cnt[l_ind_sess_sorted[:num_of_recs_k5],0]
# k5 metrics
prec_5, rec_5 = prec_rec_metrics(sess_p, recs_k5, k5)
prec_at_5_tr_p.append(prec_5)
rec_at_5_tr_p.append(rec_5)
avg_prec_at_1_tr_p = np.mean(prec_at_1_tr_p)
avg_rec_at_1_tr_p = np.mean(rec_at_1_tr_p)
avg_prec_at_5_tr_p = np.mean(prec_at_5_tr_p)
avg_rec_at_5_tr_p = np.mean(rec_at_5_tr_p)
with open('ans2.txt', 'w') as f:
r1 = '%.2f' % round(avg_rec_at_1_tr_p, 2)
p1 = '%.2f' % round(avg_prec_at_1_tr_p, 2)
r5 = '%.2f' % round(avg_rec_at_5_tr_p, 2)
p5 = '%.2f' % round(avg_prec_at_5_tr_p, 2)
ans2 = ' '.join([r1, p1, r5, p5])
print('Answer 2:', ans2)
f.write(ans2)
Explanation: 5. Calculating metrics for train dataset with suggestions based on purchases.
End of explanation
# Array of looks
sess_test_l = [row[0] for row in sess_test_lp]
sess_test_l_np = np.array( [id_n for sess in sess_test_l for id_n in sess] )
# Array of unique ids and looks in train data
#sess_test_l_cnt = np.transpose(np.unique(sess_test_l_np, return_counts=True))
sess_test_l_np
#sess_test_l_cnt
# Array of purchases
sess_test_p = [row[1] for row in sess_test_lp]
sess_test_p_np = np.array( [id_n for sess in sess_test_p for id_n in sess] )
# Array of unique ids and purchases in train dataset
#sess_test_p_cnt = np.transpose(np.unique(sess_test_p_np, return_counts=True))
sess_test_p_np
#sess_test_p_cnt
# Sorting arrays of looks and purchases by counts
#sess_train_l_cnt = sess_train_l_cnt[sess_train_l_cnt[:,1].argsort()][::-1]
#sess_train_p_cnt = sess_train_p_cnt[sess_train_p_cnt[:,1].argsort()][::-1]
Explanation: 6. Create and sort arrays of unique ids counters for looks and purchases for test dataset.
End of explanation
# Calculate metrics for test dataset, suggestions based on looks
prec_at_1_tst_l, rec_at_1_tst_l = [], []
prec_at_5_tst_l, rec_at_5_tst_l = [], []
k1, k5 = 1, 5
for i, sess_p in enumerate(sess_test_p):
# skip sessions without purchases
if sess_p == []: continue
# looks ids
sess_l = sess_test_l[i]
# sorted looks ids indices in sess_train_l_cnt array
# sort in accordance with looks counts
l_ind_sess = []
new_ids = []
for j in range(len(sess_l)):
if sess_l[j] not in sess_train_l_cnt[:,0]:
new_ids.append(sess_l[j])
continue
l_ind_sess.append(np.where(sess_train_l_cnt[:,0] == sess_l[j])[0][0])
l_ind_sess_sorted = np.unique(l_ind_sess)
# k1 recommendations
num_of_recs_k1 = min(k1, len(sess_l))
if num_of_recs_k1 == 0: continue
if l_ind_sess != []:
recs_k1 = sess_train_l_cnt[l_ind_sess_sorted[:num_of_recs_k1],0]
else:
recs_k1 = []
recs_k1 = np.concatenate((np.array(recs_k1, dtype='int64'), np.unique(np.array(new_ids, dtype='int64'))))[:num_of_recs_k1]
#recs_k1
# k1 metrics
prec_1, rec_1 = prec_rec_metrics(sess_p, recs_k1, k1)
prec_at_1_tst_l.append(prec_1)
rec_at_1_tst_l.append(rec_1)
# k5 recommendations
num_of_recs_k5 = min(k5, len(sess_l))
if num_of_recs_k5 == 0: continue
if l_ind_sess != []:
recs_k5 = sess_train_l_cnt[l_ind_sess_sorted[:num_of_recs_k5],0]
else:
recs_k5 = []
recs_k5 = np.concatenate((np.array(recs_k5, dtype='int64'), np.unique(np.array(new_ids, dtype='int64'))))[:num_of_recs_k5]
#recs_k5
# k5 metrics
prec_5, rec_5 = prec_rec_metrics(sess_p, recs_k5, k5)
prec_at_5_tst_l.append(prec_5)
rec_at_5_tst_l.append(rec_5)
avg_prec_at_1_tst_l = np.mean(prec_at_1_tst_l)
avg_rec_at_1_tst_l = np.mean(rec_at_1_tst_l)
avg_prec_at_5_tst_l = np.mean(prec_at_5_tst_l)
avg_rec_at_5_tst_l = np.mean(rec_at_5_tst_l)
with open('ans3.txt', 'w') as f:
r1 = '%.2f' % round(avg_rec_at_1_tst_l, 2)
p1 = '%.2f' % round(avg_prec_at_1_tst_l, 2)
r5 = '%.2f' % round(avg_rec_at_5_tst_l, 2)
p5 = '%.2f' % round(avg_prec_at_5_tst_l, 2)
ans3 = ' '.join([r1, p1, r5, p5])
print('Answer 3:', ans3)
f.write(ans3)
Explanation: 7. Calculating metrics for test dataset with suggestions based on looks.
End of explanation
def uniquifier(seq):
seen = set()
return [x for x in seq if not (x in seen or seen.add(x))]
# Calculate metrics for test dataset, suggestions based on purchases
prec_at_1_tst_p, rec_at_1_tst_p = [], []
prec_at_5_tst_p, rec_at_5_tst_p = [], []
k1, k5 = 1, 5
for i, sess_p in enumerate(sess_test_p):
# skip sessions without purchases
if sess_p == []: continue
# looks ids
sess_l = sess_test_l[i]
# sorted looks ids indices in sess_train_p_cnt array
# sort in accordance with purchases counts
l_ind_sess = []
new_ids = []
for j in range(len(sess_l)):
if sess_l[j] not in sess_train_p_cnt[:,0]:
new_ids.append(sess_l[j])
continue
l_ind_sess.append(np.where(sess_train_p_cnt[:,0] == sess_l[j])[0][0])
l_ind_sess_sorted = np.unique(l_ind_sess)
# k1 recommendations
num_of_recs_k1 = min(k1, len(sess_l))
if num_of_recs_k1 == 0: continue
if l_ind_sess != []:
recs_k1 = sess_train_p_cnt[l_ind_sess_sorted[:num_of_recs_k1],0]
else:
recs_k1 = []
recs_k1 = np.concatenate((np.array(recs_k1, dtype='int64'), np.array(uniquifier(np.array(new_ids, dtype='int64')))))[:num_of_recs_k1]
# k1 metrics
prec_1, rec_1 = prec_rec_metrics(sess_p, recs_k1, k1)
prec_at_1_tst_p.append(prec_1)
rec_at_1_tst_p.append(rec_1)
# k5 recommendations
num_of_recs_k5 = min(k5, len(sess_l))
if num_of_recs_k5 == 0: continue
if l_ind_sess != []:
recs_k5 = sess_train_p_cnt[l_ind_sess_sorted[:num_of_recs_k5],0]
else:
recs_k5 = []
recs_k5 = np.concatenate((np.array(recs_k5, dtype='int64'), np.array(uniquifier(np.array(new_ids, dtype='int64')))))[:num_of_recs_k5]
# k5 metrics
prec_5, rec_5 = prec_rec_metrics(sess_p, recs_k5, k5)
prec_at_5_tst_p.append(prec_5)
rec_at_5_tst_p.append(rec_5)
avg_prec_at_1_tst_p = np.mean(prec_at_1_tst_p)
avg_rec_at_1_tst_p = np.mean(rec_at_1_tst_p)
avg_prec_at_5_tst_p = np.mean(prec_at_5_tst_p)
avg_rec_at_5_tst_p = np.mean(rec_at_5_tst_p)
with open('ans4.txt', 'w') as f:
r1 = '%.2f' % round(avg_rec_at_1_tst_p, 2)
p1 = '%.2f' % round(avg_prec_at_1_tst_p, 2)
r5 = '%.2f' % round(avg_rec_at_5_tst_p, 2)
p5 = '%.2f' % round(avg_prec_at_5_tst_p, 2)
ans4 = ' '.join([r1, p1, r5, p5])
print('Answer 4:', ans4)
f.write(ans4)
Explanation: 8. Calculating metrics for test dataset with suggestions based on purchases.
End of explanation |
9,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unsupervised Learning
Davis SML
Step2: TFIDF vectorization
document vectorization counts the proportion of words in document
$X_{i,j}$ is the "proportion" of word j in document i
tfidf indicates term-frequency (proportion of words in document i which is word j), and inverse document frequency (log of inverse frequency of documents containing word j)
Step3: Document clustering
rows are documents, columns are words
clustering with sklearn KMeans
selected 10 clusters arbitrarily
Step4: Word clustering
take the transpose of X
rows are words and columns are documents
clusters of words based on document co-occurrence
Step5: Principal Components for documents
PCA reduces dimensions
sklearn PCA for dense matrices
sklearn TruncatedSVD for sparse (does not center) | Python Code:
from lxml import html, etree
import numpy as np
from sklearn import cluster, feature_extraction, metrics, preprocessing, decomposition
import collections
import nltk
import pandas as pd
import plotnine as p9
# nltk.download()
# Download Corpora -> stopwords, Models -> punkt
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
Explanation: Unsupervised Learning
Davis SML: Lecture 10 Part 3
Prof. James Sharpnack
End of explanation
reu = html.parse("reuters/reut2-000.sgm") #You will have to do this for all sgm files here
def parse_reu(reu):
    """Parses the etree object and returns a list of dictionaries of reuters attributes.
    Output: {'topics': the topic of the article, 'places': where it is located,
    'split': training/test split, 'body': the text of the article as a set of words with stopwords removed}
    """
root= reu.getroot()
articles = root.body.getchildren()
stop_words = set(stopwords.words('english'))
reu_pl = []
for a in articles:
reu_parse = {}
if a.attrib['topics'] != 'YES':
next
topics = a.find('topics').findall('d')
if topics:
reu_parse['topics'] = [t.text for t in topics]
else:
reu_parse['topics'] = []
places = a.find('places').findall('d')
if places:
reu_parse['places'] = [t.text for t in places]
reu_parse['split'] = a.attrib['lewissplit']
rtxt = a.find('text')
word_tokens = word_tokenize(rtxt.text_content())
filtered_sentence = " ".join([w.lower() for w in word_tokens if not w in stop_words])
reu_parse['body'] = filtered_sentence
reu_pl.append(reu_parse)
return reu_pl
reu_pl = parse_reu(reu)
print(reu_pl[0]['topics'])
reu_pl[0]['body']
vec = feature_extraction.text.TfidfVectorizer()
X = vec.fit_transform(doc['body'] for doc in reu_pl)
X.shape
Explanation: TFIDF vectorization
document vectorization counts the proportion of words in document
$X_{i,j}$ is the "proportion" of word j in document i
tfidf indicates term-frequency (proportion of words in document i which is word j), and inverse document frequency (log of inverse frequency of documents containing word j)
End of explanation
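A small side check of the weighting just described, using a toy corpus rather than the Reuters data: words that appear in many documents receive a lower idf factor, so their tf-idf weights shrink relative to rarer words.
# Toy corpus illustrating tf-idf weighting (not the Reuters data)
toy_docs = ["oil prices rise", "oil exports fall", "wheat prices fall"]
toy_vec = feature_extraction.text.TfidfVectorizer()
toy_X = toy_vec.fit_transform(toy_docs)
print(sorted(toy_vec.vocabulary_, key=toy_vec.vocabulary_.get))  # words in column order
print(toy_X.toarray().round(2))  # shared words such as "prices" get lower weights than unique ones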
doc_clust = cluster.KMeans(n_clusters=10)
doc_clust.fit(X)
doc_clust.cluster_centers_.shape
vocab_lookup = {b:a for a,b in vec.vocabulary_.items()}
ccargsort = doc_clust.cluster_centers_.argsort(axis=1)
center_vocab = [[vocab_lookup[row[-i]] for i in range(1,21)] for row in ccargsort]
print("\n\n".join([" ".join(voc) for voc in center_vocab]))
clust_counts = collections.Counter(doc_clust.labels_)
clust_counts
proto_inds = metrics.pairwise_distances_argmin(doc_clust.cluster_centers_,X)
print("\n\n".join([reu_pl[i]['body'] for i in proto_inds]))
Explanation: Document clustering
rows are documents, columns are words
clustering with sklearn KMeans
selected 10 clusters arbitrarily
End of explanation
word_clust = cluster.KMeans(n_clusters=10)
#W = preprocessing.StandardScaler(with_mean=False).fit_transform(X.transpose())
word_clust.fit(X.transpose())
clust_counts = collections.Counter(word_clust.labels_)
clust_counts
[" ".join([vocab_lookup[i] for i in np.where(word_clust.labels_ == i)[0]]) for i in range(1,10)]
proto_inds = metrics.pairwise_distances_argmin(word_clust.cluster_centers_,X.transpose())
print("\n".join([vocab_lookup[i] for i in proto_inds]))
Explanation: Word clustering
take the transpose of X
rows are words and columns are documents
clusters of words based on document co-occurrence
End of explanation
doc_SVD = decomposition.TruncatedSVD(n_components=2)
X_pca = doc_SVD.fit_transform(X)
X_df = pd.DataFrame(X_pca,columns=['pca_1','pca_2'])
X_df['clust'] = doc_clust.labels_
p9.ggplot(X_df, p9.aes(x='pca_1',y='pca_2',color='clust')) + p9.geom_point()
Explanation: Principal Components for documents
PCA reduces dimensions
sklearn PCA for dense matrices
sklearn TruncatedSVD for sparse (does not center)
End of explanation |
9,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Processes in Computational Physics
The contents of these Jupyter Notebook lecture notes are
Step1: Random Processes in Physics
Examples of physical processes that are/can be modelled as random include
Step2: Some basic functions to point out (we'll get to others in a bit)
Step3: Notice you have to use 1-11 for the range. Why?
Step4: In Class Exercise - Rolling Dice
Write a programme that generates and prints out two random numbers between 1 and 6. This simulates the rolling of two dice.
Now modify the programme to simulate making 2 million rolls of two dice. What fraction of the time do you get double six?
Extension
Step5: You might want to do this for | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Random Processes in Computational Physics
The contents of these Jupyter Notebook lecture notes are:
Introduction to Random Numbers in Physics
Random Number Generation
Python Packages for Random Numbers
Coding for Probability (atomic decay example)
Non-uniform random numbers
As usual I recommend you follow along by typing the code snippets into your own file. Don't forget to call the packages etc. at the start of each code file.
End of explanation
#Review the documentation for NumPy's random module:
np.random?
Explanation: Random Processes in Physics
Examples of physical processes that are/can be modelled as random include:
Radioactive decay - we know the probability of decay per unit time from quantum physics, but the exact time of the decay is random.
Brownian motion - if we could track the motion of all atomic particles, this would not actually be random, but appears random as we cannot.
Youtube Video of Brownian Motion: https://www.youtube.com/watch?v=cDcprgWiQEY
Chaotic systems - again not truly random in the sense of radioactive decay, but can be modelled as random.
Human or animal behaviour can also be modelled as random in some circumstances.
Random Number Generation
There are many different ways to generate uniform random numbers over a specified range (such as 0-1). Physically, we can for example:
spin a roulette wheel
draw balls from a lottery
throw darts at a board
throw dice
However, when we wish to use the numbers in a computer, we need a way to generate the numbers algorithmically.
Numerically/arithmetically - use a sequential method where each new number is a deterministic function of the previous numbers.
But: this destroys their true randomness and makes them at best, "pseudo-random".
However, in most cases, it is sufficient if the numbers “look” uniformly distributed and have no correlation between them. i.e. they pass statistical tests and obey the central limit theorem.
For example consider the function:
$x' = (ax + c) \mod m$
where $a$, $c$ and $m$ are integer constants, and $x$ is an integer variable. Recall that "$n \mod m$" means you calculate the remainder when $n$ is divided by $m$.
Now we can use this to generate a sequence of numbers by putting the outcome of this equation ($x'$) back in as the new starting value ($x$). These will act like random numbers. Try it.....
Class Exercise
Starting from $x = 1$ write a short programme which generates 100 values in this sequence and plots them on a graph. Please use the following inputs:
a = 1664525
c = 1013904223
m = 4294967296
Tip 1: python syntax for "mod m" is:
%m
So your base code will look like:
xp = (a*x+c)%m
Extension problem: this won't work for all values of a, c and m. Can you find some which don't generate pseudo-random numbers?
This is an example of a simple pseudo-random number generator (PRNG). Technically it's a "linear congruential random number generator". Things to note:
It's not really random
It can only generate numbers between 0 and m-1.
The choices of a, c and m matter.
The choice of x also matters. Do you get the same values for x=2?
For many codes this is sufficient, but you can do better. Fortunately Python (NumPy) comes with a number of better versions as built-in packages, so we can benefit from the expertise of others in our computational physics codes.
Good Pseudo-Random Number Generators
All pseudo-random number generators (PRNG) should possess a few key properties. Namely, they should
be fast and not memory intensive
be able to reproduce a given stream of random numbers (for debugging/verification of computer programs or so we can use identical numbers to compare different systems)
be able to produce several different independent “streams” of random numbers
have a long periodicity, so that they do not wrap around and produce the same numbers again within a reasonably long window.
To obtain a sequence of pseudo-random numbers:
initialize the state of the generator with a truly random "seed" value
the generator uses that seed to create an initial "state", then produces a pseudo-random sequence of numbers from that state.
But note:
* The sequence will eventually repeat when the generator's state returns to that initial one.
* The length of the sequence of non-repeating numbers is the period of the PRNG.
It is relatively easy to build PRNGs with periods long enough for many practical applications, but one must be cautious in applying PRNGs to problems that require very large quantities of random numbers.
Almost all languages and simulation packages have good built-in generators. In Python, we can use the NumPy random library, which is based on the Mersenne-Twister algorithm developed in 1997.
Python Random Number Library
End of explanation
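Before moving on to the library functions, one possible sketch of the LCG class exercise described above (generate 100 values of the sequence and plot them); this is only one way to write it.
#One possible solution sketch for the LCG class exercise: x' = (a*x + c) mod m
a = 1664525
c = 1013904223
m = 4294967296
x = 1
values = []
for i in range(100):
    x = (a*x + c) % m
    values.append(x)
plt.plot(values, 'o')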
#print 5 uniformly distributed numbers between 0 and 1
print(np.random.random(5))
#print another 5 - should be different
print(np.random.random(5))
#print 5 uniformly distributed integers between 1 and 10
print(np.random.randint(1,11,5))
#print another 5 - should be different
print(np.random.randint(1,11,5))
Explanation: Some basic functions to point out (we'll get to others in a bit):
random() - Uniformly distributed floats over [0, 1). Will include zero, but not one. If you include a number, n, in the bracket you get n random floats.
randint(n,m) - A single random integer from n to m-1
End of explanation
#If you want to save a random number for future use:
z=np.random.random()
print("The number is ",z)
#Rerun random
print(np.random.random())
print("The number is still",z)
Explanation: Notice you have to use 1-11 for the range. Why?
End of explanation
np.random.seed(42)
for i in range(4):
print(np.random.random())
np.random.seed(42)
for i in range(4):
print(np.random.random())
np.random.seed(39)
for i in range(4):
print(np.random.random())
Explanation: In Class Exercise - Rolling Dice
Write a programme that generates and prints out two random numbers between 1 and 6. This simulates the rolling of two dice.
Now modify the programme to simulate making 2 million rolls of two dice. What fraction of the time do you get double six?
Extension: Plot a histogram of the frequency of the total of the two dice over the 2 million rolls.
Seeded Random Numbers
Sometimes in computational physics we want to generate the same series of pseudo-random numbers many times. This can be done with 'seeds'.
End of explanation
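A minimal sketch of the rolling-dice exercise described above (one vectorised NumPy approach; the expected fraction of double sixes is 1/36, about 0.028):
#A possible sketch of the dice exercise: 2 million rolls of two dice
rolls = 2000000
die1 = np.random.randint(1, 7, rolls)
die2 = np.random.randint(1, 7, rolls)
double_six = np.sum((die1 == 6) & (die2 == 6))
print("Fraction of double sixes:", double_six / rolls)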
for i in range(10):
if np.random.random()<0.2:
print("Heads")
else:
print("Tails")
Explanation: You might want to do this for:
Debugging
Code repeatability (i.e. when you hand in code for marking!).
Coding For Probability
In some circumstances you will want to write code which simulates various events, each of which happens with a probability, $p$.
This can be coded with random numbers. You generate a random number between zero and 1, and allow the event to occur if that number is less than $p$.
For example, consider a biased coin, which returns a head 20% of the time:
End of explanation |
9,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Boston Housing Dataset
Step2: Fit A Linear Regression
Step3: View Intercept Term
Step4: View Coefficients | Python Code:
# Load libraries
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
import warnings
# Suppress Warning
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
Explanation: Title: Linear Regression Using Scikit-Learn
Slug: linear_regression_using_scikit-learn
Summary: How to conduct linear regression in scikit-learn for machine learning in Python.
Date: 2017-09-18 12:00
Category: Machine Learning
Tags: Linear Regression
Authors: Chris Albon
Preliminaries
End of explanation
# Load data
boston = load_boston()
X = boston.data
y = boston.target
Explanation: Load Boston Housing Dataset
End of explanation
# Create linear regression
regr = LinearRegression()
# Fit the linear regression
model = regr.fit(X, y)
Explanation: Fit A Linear Regression
End of explanation
# View the intercept
model.intercept_
Explanation: View Intercept Term
End of explanation
# View the feature coefficients
model.coef_
Explanation: View Coefficients
End of explanation |
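Not part of the original recipe, but a natural follow-up: the fitted model can produce predictions, here for the first two observations of the same dataset.
# Predict target values for the first two observations (illustrative follow-up)
model.predict(X[:2])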
9,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate a random (stable) IIR filter
Step1: Self Tuning Regulator
NLMS IIR System Identification
Step2: Let's try fixed regulation with the estimated filter
Step3: works with some bias
Now we regulate while identifying the system
Step4: Let's try a proper minimal phase self tuning regulator
Equation
Step5: Let's try a model reference adaptive system
Equation | Python Code:
import numpy as np
import matplotlib.pyplot as plt

B_len = np.random.randint(5, 10)
A_len = np.random.randint(5, 10)
B = np.random.randn(B_len)
A = np.random.randn(A_len)
def stable_poly(length):
# all roots inside unit circle and real valued polynomial
roots = np.random.rand((length - 1) // 2) * np.exp(np.random.rand((length - 1) // 2) * 2j * np.pi)
roots = np.hstack([roots, np.conj(roots)])
if length % 2 == 0:
roots = np.hstack([roots, np.random.rand(1) * 2 - 1])
return np.poly(roots)
A = stable_poly(A_len)
B = stable_poly(B_len) * np.random.randn(1)
stable = (np.abs(np.roots(A)) < 1).all()
stable
minphase = (np.abs(np.roots(B)) < 1).all()
minphase
N = 100000
def iir_filter(x, B, A, N, y=None):
if y is None:
out = np.zeros(N)
start = 0
else:
start = len(y)
out = np.zeros(N + start)
out[:start] = y
for i in range(start, N + start):
o = 0
for j in range(max(0, i - len(x) + 1), min(i + 1, len(B))):
o += B[j] * x[i - j]
for j in range(1, min(i + 1, len(A))):
o -= A[j] * out[i - j]
out[i] = o / A[0]
return out
out = iir_filter([1], B, A, N)
plt.figure()
plt.plot(out)
#signal = np.random.randn(N)
signal = np.random.rand(N) > 0.5
out = iir_filter(signal, B, A, N)
plt.figure()
plt.plot(out)
Explanation: Generate a random (stable) IIR filter
End of explanation
n = max(A_len, B_len)
mu = 1
omega = np.zeros(A_len + B_len - 1)#np.random.randn(A_len + B_len - 1)
errors = []
for n in range(max(A_len, B_len), N):
z = np.hstack([out[n - 1:n - A_len:-1], signal[n:n - B_len:-1]])
A_est = np.hstack([1, -omega[:A_len - 1]])
B_est = omega[A_len - 1:]
y_est = iir_filter(signal[n - max(A_len, B_len):n + 1], B_est, A_est, 1, out[n - max(A_len, B_len):n])
e = out[n] - y_est[-1]
errors.append(e)
omega = omega + mu / np.dot(z, z) * e * z
plt.figure()
plt.plot(errors)
A_est, A, B_est, B
plt.figure()
plt.plot(signal)
Explanation: Self Tuning Regulator
NLMS IIR System Identification
End of explanation
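For reference, the parameter update in the identification loop above is the normalised LMS step (this simply restates what the line omega = omega + mu / np.dot(z, z) * e * z computes), where z stacks past outputs and inputs and e is the one-step prediction error:
$$\omega \leftarrow \omega + \frac{\mu}{z^{T} z}\, e\, z$$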
N = 1000
y_ref = np.ones(N) * 80
u = iir_filter(y_ref, A_est, B_est, N)
y = iir_filter(u, B, A, N)
plt.figure()
plt.plot(y)
plt.ylim(0, 100)
Explanation: Let's try fixed regulation with the estimated filter
End of explanation
mu = 0.01
N = 100000
y_ref = np.ones(N) * 50
#omega = np.zeros(A_len + B_len - 1) # start with zeros => division by zero error
#omega = np.random.randn(A_len + B_len - 1) # start with random noise => unstable controller
#omega = np.hstack([-A[1:], B]) # start with a perfect estimate -> nothing to do for the adaptation algorithm - works
omega = np.hstack([-stable_poly(A_len)[1:], stable_poly(B_len)]) # => start with stable and minimal phase controller - doesn't work :()
u = []
y = []
errors = []
y_ref = []
for n in range(N):
y_ref.append(np.random.randn(1))
if n < max(A_len, B_len):
#u.append(1)
u.append(np.random.randn(1))
y = iir_filter(u, B, A, 1, y)
errors.append(y[-1] - y_ref[-1])
else:
A_est = np.hstack([1, -omega[:A_len - 1]])
B_est = omega[A_len - 1:]
u = iir_filter(y_ref, A_est, B_est, 1, u)
#u[-1] = float(u[-1] > 0.5)
y = iir_filter(u, B, A, 1, y)
z = np.hstack([y[n - 1:n - A_len:-1], u[n:n - B_len:-1]])
y_est = iir_filter(u[n - max(A_len, B_len):n + 1], B_est, A_est, 1, y[n - max(A_len, B_len):n])
e = y[n] - y_est[-1]
errors.append(e)
omega = omega + mu / np.dot(z, z) * e * z
A_est, A, B_est, B
plt.figure()
plt.plot(u)
plt.plot(y)
plt.plot(errors)
plt.legend(['u', 'y', 'e'])
plt.ylim(-100, 100)
Explanation: works with some bias
Now we regulate while identifying the system
End of explanation
mu = 0.01
N = 100000
#omega = np.zeros(A_len + B_len - 1) # start with zeros => division by zero error
#omega = np.random.randn(A_len + B_len - 1) # start with random noise => unstable controller
omega = np.hstack([-A[1:], B]) # start with a perfect estimate -> nothing to do for the adaptation algorithm - works
#omega = np.hstack([-stable_poly(A_len)[1:], stable_poly(B_len)]) # => start with stable and minimal phase controller - doesn't work :()
#omega = np.hstack([stable_poly(A_len - 1), 0, stable_poly(B_len - 1)]) # => start with stable and minimal phase controller - doesn't work :()
A_est = np.hstack([1, -omega[:A_len - 1]])
B_est = omega[A_len - 1:]
D = B_est[1:] / B_est[1]
f = 1 / B_est[1]
C = -A_est[1:] / B_est[1]
(np.abs(np.roots(C)) < 1).all(), (np.abs(np.roots(D)) < 1).all()
u = []
y = []
errors = []
y_ref = []
last = 0
for n in range(N):
y_ref.append(np.random.randn(1))
#y_ref.append(50)
#last = last + 0.01 if last < 1 else -1
#y_ref.append(last)
if n < max(A_len, B_len):
#u.append(1)
u.append(np.random.randn(1))
#u[-1] = float(u[-1] > 0.5)
y = iir_filter(u, B, A, 1, y)
errors.append(y[-1] - y_ref[-1])
else:
A_est = np.hstack([1, -omega[:A_len - 1]])
B_est = omega[A_len - 1:]
D = B_est[1:] / B_est[1]
f = 1 / B_est[1]
C = -A_est[1:] / B_est[1]
u = iir_filter(y, C, D, 1, u)
u[-1] = u[-1] + f * y_ref[-1]
#u[-1] = float(u[-1] > 0.5)
y = iir_filter(u, B, A, 1, y)
z = np.hstack([y[n - 1:n - A_len:-1], u[n:n - B_len:-1]])
y_est = iir_filter(u[n - max(A_len, B_len):n + 1], B_est, A_est, 1, y[n - max(A_len, B_len):n])
e = y[n] - y_est[-1]
# output error
#if n > N // 2:
# e += B_est[0] * u[-1]
errors.append(e)
omega = omega + mu / np.dot(z, z) * e * z
A_est, A, B_est, B
plt.figure()
plt.plot(u)
plt.plot(y)
plt.plot(y_ref)
plt.plot(errors)
plt.legend(['u', 'y', 'y_ref', 'e'])
plt.ylim(-100, 100)
Explanation: Let's try a proper minimal phase self tuning regulator
Equation:
$$D(z)U(z) = -C(z)Y(z) + f U_c(z)$$
End of explanation
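In the self tuning regulator code above, these controller polynomials are read off from the current model estimate with $b_1 = \hat{B}[1]$: $C = -\hat{A}[1:]/b_1$, $D = \hat{B}[1:]/b_1$ (so the leading coefficient of $D$ is 1) and $f = 1/b_1$. Reading the code, this amounts to solving the estimated one-step-ahead model for the current input so that the predicted next output matches the reference, with the direct term $\hat{B}[0]u$ neglected.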
mu = 0.0001
N = 100000
D_len = B_len
C_len = A_len
#omega = np.zeros(A_len + B_len - 1) # start with zeros => division by zero error
#omega = np.random.randn(A_len + B_len - 1) # start with random noise => unstable controller
#omega = np.hstack([-A[1:], B]) # start with a perfect estimate -> nothing to do for the adaptation algorithm - works
omega = np.hstack([stable_poly(C_len), stable_poly(D_len)[1:], 1]) # => start with stable and minimal phase controller - doesn't work :()
C = omega[:C_len]
D = np.hstack([1, omega[C_len:-1]])
f = omega[-1]
u = []
y = []
errors = []
y_ref = []
last = 0
z = None
for n in range(N):
#y_ref.append(np.random.randn(1))
#y_ref.append(50)
last = last + 0.01 if last < 1 else -1
y_ref.append(last)
if n < max(A_len, B_len):
u.append(1)
#u.append(np.random.randn(1))
#u[-1] = float(u[-1] > 0.5)
y = iir_filter(u, B, A, 1, y)
errors.append(y[-1] - y_ref[-1])
else:
C = omega[:C_len]
D = np.hstack([1, omega[C_len:-1]])
f = omega[-1]
u = iir_filter(y, C, D, 1, u)
u[-1] = u[-1] + f * y_ref[-1]
#u[-1] = float(u[-1] > 0.5)
y = iir_filter(u, B, A, 1, y)
e = y[n] - y_ref[-2]
errors.append(e)
z_last = z
z = np.hstack([y[n:n - C_len:-1], u[n - 1:n - D_len:-1], y_ref[-1]])
if z_last is not None:
#print(y[n], y_ref[-2], z, z_last, omega)
omega = omega + mu / np.dot(z_last, z_last) * e * z
#print(e, z, z_last, omega)
#break
if u[-1] > 100 or y[-1] > 100:
break
C, D, A, B
plt.figure()
plt.plot(u)
plt.plot(y)
plt.plot(y_ref)
plt.plot(errors)
plt.legend(['u', 'y', 'y_ref', 'e'])
plt.ylim(-100, 100)
Explanation: Let's try a model reference adaptive system
Equation:
$$D(z)U(z) = -C(z)Y(z) + f U_c(z)$$
End of explanation |
9,141 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Is there any package in Python that does data transformation like Box-Cox transformation to eliminate skewness of data? | Problem:
import numpy as np
import pandas as pd
import sklearn
data = load_data()
assert type(data) == np.ndarray
from sklearn import preprocessing
pt = preprocessing.PowerTransformer(method="box-cox")
box_cox_data = pt.fit_transform(data) |
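One practical caveat worth adding: the Box-Cox transform is only defined for strictly positive data; scikit-learn's PowerTransformer also offers the Yeo-Johnson variant, which accepts zero and negative values.
# If the data contains zeros or negative values, the Yeo-Johnson variant can be used instead
pt_yj = preprocessing.PowerTransformer(method="yeo-johnson")
yeo_johnson_data = pt_yj.fit_transform(data)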
9,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: Select Name And Ages Only When The Name Is Known | Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
Explanation: Title: Ignoring Null or Missing Values
Slug: ignoring_null_values
Summary: Ignoring Null or Missing Values in SQL.
Date: 2017-01-16 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, NULL, 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, NULL, 23, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Petaluma', 0);
Explanation: Create Data
End of explanation
%%sql
-- Select name and age,
SELECT name, age
-- from the table 'criminals',
FROM criminals
-- if age is not a null value
WHERE name IS NOT NULL
Explanation: Select Name And Ages Only When The Name Is Known
End of explanation |
9,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Few things to keep aware of.
People accessing things at the same time: take the earlier user's request, then discard the other guy's request and then continue the chain.
Step2: create a function that accesses the db w/ the name of the character and gets its char_id
process
for every frame, update the text in the db nothing markov
if frames out of bound bail | Python Code:
#Initialization - creates db and then sends an empty response
s = {
"Details": {
"Username" : "Anonymous",
"Story" : "Rabbit Story"
},
"Characters": [
{
"Name": "Rabbit",
"Position": "Right",
"Social" : 0.3,
"Emotion": "Happy"
},
{
"Name": "Turtle",
"Position": "Left",
"Social" : 0.2,
"Emotion": "Sad"
}
]
}
import json
string = json.loads(s)
username = string['Details']['Username']
story = string['Details']['Story']
num_char = len(string['Characters'])
matrices, closeness_vectors = initialize(num_char)
#Initialize DB
character_id = 0
for elem in string['Characters']:
#DB[Character objs] puts the following
char_id = character_id
name = elem['Name']
position = elem['Position']
socialbility = elem['Social']
emotion = translate_to_num(elem['Emotion'])
personality = matrices[character_id]
impact = matrices[character_id + num_char]
c_vector = closeness_vectors[character_id]
print char_id
print name
print position
print socialbility
print emotion
print personality
print impact
print c_vector
print
character_id += 1
return generate_response(username,story,0,0)
#Update Text Request
#update text in db
#send response back
s = {
"Details": {
"Story": "Rabbit Story",
"Username": "Anonymous",
"Frame_start": 1,
"Frame_end": 1
},
"Frames": [
{
"Characters":[
{
"Name": "Rabbit",
"Text": "Hello Turtle1"
},
{
"Name": "Turtle",
"Text": "Hello Rabbit1"
}
]
},
{
"Characters":[
{
"Name": "Rabbit",
"Text": "Hello Turtle2"
},
{
"Name": "Turtle",
"Text": "Hello Rabbit2"
}
]
}
]
}
Explanation: Few things to keep aware of.
People accessing things at the same time: take the earlier user's request, then discard the other guy's request and then continue the chain.
End of explanation
import json
string = json.loads(s)
username = string['Details']['Username']
story = string['Details']['Story']
start = string['Details']['Frame_start']
end = string['Details']['Frame_end']
#get number of the last frame from db
if last_known_frame < int(end):
#return malformed JSON
else:
#return JSON wanted
for elem in string['Frames']:
for sub_elem in elem['Characters']:
char_name = sub_elem['Name']
text = sub_elem['Text']
#puts in db[story][start][char_name][TEXT ENTRY] = text
start += 1
if start > end:
break
return generate_response(username,story,start,end)
#continue past Story
#Check for continuing or querying
#continuing
#Samples the markov chain for each character in the frame and then updates the db
#then send the response back
#querying
#query db and send info w/ text back
s = {
"Details": {
"Story": "Rabbit Story",
"Username": "Anonymous",
"Frame_start": 1,
"Frame_end": 3
}
}
import json
string = json.loads(s)
username = string['Details']['Username']
story = string['Details']['Story']
start = string['Details']['Frame_start']
end = string['Details']['Frame_end']
#run a for
#check if frame is in db
#continue
#else:
#pull num of characters in story something like db.story.characterobjs
#for 2 characters chosen at random:
#sample markov chain for one and update in db
#return generate_response(username, story, start, end)
import numpy as np
import json
total_influence = np.matrix([[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0]])
s = json.dumps(total_influence.tolist())
print total_influence
print np.matrix(json.loads(s))
Explanation: create a function that accesses the db w/ the name of the character and gets its char_id
process
for every frame, update the text in the db nothing markov
if frames out of bound bail
End of explanation |
9,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PROV-O Diagram Rendering Example
This example takes a PROV-O activity graph and uses the PROV Python library, which is an implementation of the Provenance Data Model by the World Wide Web Consortium, to create graphical representations like PNG, SVG, and PDF.
Prerequisites
python libraries - prov[dot]
jupyter
graphviz
Step1: Read a simple provenance document
We will use the Example 1 available on https
Step2: Create some setup variables filename and basename which will be used for the encoding of the outputs
Step3: Use the prov library to deserialize the example document
Step4: Graphics export (PNG and PDF)
In addition to the PROV-N output (as above), the document can be exported into a graphical representation with the help of the GraphViz. It is provided as a software package in popular Linux distributions, or can be downloaded for Windows and Mac.
Once you have GraphViz installed and the dot command available in your operating system's paths, you can save the document we have so far into a PNG file as follows.
Step5: The above saves the PNG file as prov-ex1.png in your current folder. If you're running this tutorial in Jupyter Notebook, you can see it here as well.
Step6: Similarly, the above saves the document into a PDF file in your current working folder. Graphviz supports a wide ranges of raster and vector outputs, to which you can export your provenance documents created by the library. To find out what formats are available from your version, run dot -T? at the command line.
PROV-JSON export
PROV-JSON is a JSON representation for PROV that was designed for the ease of accessing various PROV elements in a PROV document and to work well with web applications. The format is natively supported by the library and is its default serialisation format.
Step7: You can also serialize the document directly to a file by providing a filename (below) or a Python File object. | Python Code:
#if you need to install dependencies, do so in this cell
!pip install pydot prov
!conda install -y python-graphviz
Explanation: PROV-O Diagram Rendering Example
This example takes a PROV-O activity graph and uses the PROV Python library, which is an implementation of the Provenance Data Model by the World Wide Web Consortium, to create graphical representations like PNG, SVG, PDF.
Prerequisites
python libraries - prov[dot]
jupyter
graphviz
End of explanation
from prov.model import ProvDocument
import prov.model as pm
Explanation: Read a simple provenance document
We will use the Example 1 available on https://www.w3.org/TR/prov-o/ e.g. https://www.w3.org/TR/prov-o/#narrative-example-simple-1
To create a provenance document (a package of provenance statements or assertions), import ProvDocument class from prov.model:
End of explanation
filename = "https://raw.githubusercontent.com/oznome/jupyter-examples/master/prov/rdf/prov-ex1.ttl"
basename = "prov-ex1"
import urllib.request
url = filename
data = urllib.request.urlopen(url).read()
Explanation: Create some setup variables filename and basename which will be used for the encoding of the outputs
End of explanation
# Create a new provenance document
d1 = pm.ProvDocument.deserialize(content=data, format="rdf")
Explanation: Use the prov library to deserialize the example document
End of explanation
basename
from prov.dot import prov_to_dot
d = prov_to_dot(d1)
d.write_png(basename+'.png')
Explanation: Graphics export (PNG and PDF)
In addition to the PROV-N output (as above), the document can be exported into a graphical representation with the help of the GraphViz. It is provided as a software package in popular Linux distributions, or can be downloaded for Windows and Mac.
Once you have GraphViz installed and the dot command available in your operating system's paths, you can save the document we have so far into a PNG file as follows.
End of explanation
from IPython.display import Image
Image(filename=basename+'.png')
# Or save to a PDF
d.write_pdf(basename + '.pdf')
Explanation: The above saves the PNG file (prov-ex1.png, from the basename we set earlier) in your current folder. If you're running this tutorial in Jupyter Notebook, you can see it here as well.
End of explanation
print(d1.serialize(indent=2))
Explanation: Similarly, the above saves the document into a PDF file in your current working folder. Graphviz supports a wide ranges of raster and vector outputs, to which you can export your provenance documents created by the library. To find out what formats are available from your version, run dot -T? at the command line.
PROV-JSON export
PROV-JSON is a JSON representation for PROV that was designed for the ease of accessing various PROV elements in a PROV document and to work well with web applications. The format is natively supported by the library and is its default serialisation format.
End of explanation
d1.serialize(basename + '.json')
Explanation: You can also serialize the document directly to a file by providing a filename (below) or a Python File object.
End of explanation |
9,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamics of multiple spin ensembles
Step1: 1) Collective processes only (QuTiP $\texttt{jmat}$)
System properties - QuTiP jmat()
QuTiP's jmat() functions span the symmetric (N+1)-dimensional Hilbert space. They can be used to efficiently investigate the collective dynamics only.
Step2: Time integration
Step3: Visualization
Step4: 2) Local-collective processes in the Dicke basis (PIQS + QuTiP)
System general and collective properties - QuTiP in the Dicke basis
Step5: System local properties - Building local Lindbladians with PIQS
Step6: Visualization with parameter dependence
Step7: We have studied the dissipative dynamics of two ensembles of TLSs, exploring the possibility for the systems to undergo local dephasing, collective emission of the single ensembles, collective emission of the two ensembles coupled to the same reservoir, and local de-excitations. We have found that in the general case, spin exchange between antisymmetrically prepared ensembles is transient [1].
References
[1] N. Shammah, S. Ahmed, N. Lambert, S. De Liberato, and F. Nori, https | Python Code:
from qutip import *
from qutip.piqs import *
import matplotlib.pyplot as plt
from scipy import constants
Explanation: Dynamics of multiple spin ensembles: two driven-dissipative ensembles
Notebook author: Nathan Shammah (nathan.shammah at gmail.com)
We use the Permutational Invariant Quantum Solver (PIQS) library, imported in QuTiP as $\texttt{qutip.piqs}$ to study the driven-dissipative open quantum dynamics of multiple two-level-system (TLS), or spin, ensembles.
We consider a system of two TLS ensembles with populations $N_1$ and $N_2$ with identical frequency $\omega_{0}$ with collective pumping and collective emission at identical rates, $\gamma_\text{CE}=(1+\bar{n})\gamma_0$ and $\gamma_\text{CP}=\bar{n}\gamma_0$, respectively, with $\bar{n}=\frac{1}{e^{\hbar\omega_0/k_\mathrm{B}T}-1}$ and
\begin{eqnarray}
\dot{\rho} &=&
-i\lbrack \omega_{0}\left(J_z^{(1)}+J_z^{(2)}\right),\rho \rbrack
+\frac{\gamma_\text{CE}}{2}\mathcal{L}_{J_{-}^{(1)}+ J_{-}^{(2)}}[\rho]
+\frac{\gamma_\text{CP}}{2}\mathcal{L}_{J_{+}^{(1)}+J_{+}^{(2)}}[\rho]
\end{eqnarray}
Ref. [2] has shown that for $N_1<N_2$, if the system is initialized in the state $|{\psi_0}\rangle=|{\downarrow\cdots\downarrow}\rangle_1\otimes|{\uparrow\cdots\uparrow}\rangle_2$, the system relaxes to a steady state for which the first subsystem is excited, i.e. $\langle J_z^{(1)}(\infty)\rangle>0$ and for some parameters $\frac{\langle J_z^{(1)}(\infty)\rangle}{(N_1/2)}\rightarrow 0.5$, also in the limit of zero temperature, $T\rightarrow 0$.
Notice that $\mathcal{L}_{J_{-}^{(1)}+ J_{-}^{(2)}}[\rho]\neq \mathcal{L}_{J_{-}^{(1)}}[\rho]+\mathcal{L}_{J_{-}^{(2)}}[\rho]$, which is a case treated in Ref. [3] to obtain synchronized ensembles of atoms.
Here we explore what happens when to the master equation of Eq. (1) one adds also collective and local terms relative to single ensembles,
\begin{eqnarray}
\dot{\rho} &=&
-i\lbrack \omega_{0}\left(J_z^{(1)}+J_z^{(2)}\right),\rho \rbrack
+\frac{\gamma_\text{CE}}{2}\mathcal{L}_{J_{-}^{(1)}+ J_{-}^{(2)}}[\rho]
+\frac{\gamma_\text{CP}}{2}\mathcal{L}_{J_{+}^{(1)}+J_{+}^{(2)}}[\rho]\\
&& +\frac{\gamma_\text{CEi}}{2}\mathcal{L}_{J_{-}^{(1)}}[\rho]
+\frac{\gamma_\text{CEi}}{2}\mathcal{L}_{J_{-}^{(2)}}[\rho]
+\sum_{n}^{N_1}\frac{\gamma_\text{E}}{2}\mathcal{L}_{J_{-,n}^{(1)}}[\rho]+\frac{\gamma_\text{D}}{2}\mathcal{L}_{J_{z,n}^{(1)}}[\rho]+\sum_{n}^{N_2}\frac{\gamma_\text{E}}{2}\mathcal{L}_{J_{-,n}^{(2)}}[\rho]+\frac{\gamma_\text{D}}{2}\mathcal{L}_{J_{z,n}^{(2)}}[\rho]
\end{eqnarray}
where $\gamma_\text {CEi}$ is the rate of superradiant decay for the individual ensembles of TLSs, $\gamma_\text{E}$ and $\gamma_\text{D}$ are the rates of local emission and dephasing.
Firstly, we will show how the collective dynamics of Eq. (1) can be investigated in a simple way using QuTiP's [4] $\texttt{jmat}$ function, which defines collective spins for maximally symmetric states in a Hilbert space of dimension $N_i+1$.
Secondly, we will exploit the permutational invariance of the local processes in Eq. (2) to investigate the exact dynamics using the Dicke basis, $\rho = \sum_{j,m,m'}p_{jmm'}|j,m\rangle\langle j,m'|$ [1], where $p_{jmm'}$ is a probability density. We will do so numerically using the PIQS library [1].
In the plots below we may use the following equivalent notation: $\gamma_\text{CE}=\gamma_\Downarrow$ (gCE),
$\gamma_\text {CP}=\gamma_\Uparrow$ (gCP), $\gamma_\text {E}=\gamma_\downarrow$ (gE), $\gamma_\text {P}=\gamma_\uparrow$ (gP), and
$\gamma_\text {D}=\gamma_\phi$ (gD).
End of explanation
# Number of TLSs in the two ensembles
N1 = 1
N2 = 4
N = N1 + N2
# TLSs bare frequency
w0 = 1
# Bose-Einstein distribution determines the occupation number
giga = 10**(6)
frequency_hertz = w0*10*giga
temperature_kelvin = 10**(2)
x = (frequency_hertz / temperature_kelvin) * (constants.hbar / constants.Boltzmann)
n0 = 1/(np.exp(x)-1)
print("n0 =",n0)
# set collective pumping and collective emission rates (coupled ensembles)
g0 = 1
gCE = g0 * (1 + n0)
gCP = g0 * n0
print("gCE =", gCE)
print("gCP =", gCP)
# define identity operators and norms in the tensor space
dim1_mat = N1 + 1
dim2_mat = N2 + 1
id1_mat = qeye(dim1_mat)
id2_mat = qeye(dim2_mat)
norm2 = id2_mat.tr()
norm1 = id1_mat.tr()
# build collective spin operators for N1 and N2
jx1_mat = jmat(N1/2,"x")
jx2_mat = jmat(N2/2,"x")
jy1_mat = jmat(N1/2,"y")
jy2_mat = jmat(N2/2,"y")
jz1_mat = jmat(N1/2,"z")
jz2_mat = jmat(N2/2,"z")
jm1_mat = jmat(N1/2,"-")
jm2_mat = jmat(N2/2,"-")
# place collective spin operators in tensor space (N1 + N2)
jz1_tot = tensor(jz1_mat, id2_mat)
jz2_tot = tensor(id1_mat, jz2_mat)
jx12_mat = tensor(jx1_mat, id2_mat) + tensor(id1_mat, jx2_mat)
jy12_mat = tensor(jy1_mat, id2_mat) + tensor(id1_mat, jy2_mat)
jz12_mat = tensor(jz1_mat, id2_mat) + tensor(id1_mat, jz2_mat)
jm12_mat = tensor(jm1_mat, id2_mat) + tensor(id1_mat, jm2_mat)
jp12_mat = jm12_mat.dag()
# define Hamiltonian
h1_mat = w0 * jz1_mat
h2_mat = w0 * jz2_mat
htot = tensor(h1_mat, id2_mat) + tensor(id1_mat, h2_mat)
# build Liouvillian using QuTiP
collapse_operators = [np.sqrt(gCE)*jm12_mat, np.sqrt(gCP)*jp12_mat]
L_collective = liouvillian(htot, collapse_operators)
#Check the algebra of the spin operators in the tensor space
print(jp12_mat*jm12_mat - jm12_mat*jp12_mat == 2*jz12_mat)
print(jx12_mat*jy12_mat - jy12_mat*jx12_mat == 1j*jz12_mat)
Explanation: 1) Collective processes only (QuTiP $\texttt{jmat}$)
System properties - QuTiP jmat()
QuTiP's jmat() functions span the symmetric (N+1)-dimensional Hilbert space. They can be used to efficiently investigate the collective dynamics only.
End of explanation
# set superradiant delay time for the excited ensemble (N2)
td0 = np.log(N2)/(N2*gCE)
tmax = 30 * td0
nt = 1001
t = np.linspace(0, tmax, nt)
#set initial tensor state for spins (Use QuTiP's jmat() basis)
excited1 = np.zeros(jz1_mat.shape)
excited2 = np.zeros(jz2_mat.shape)
ground1 = np.zeros(jz1_mat.shape)
ground2 = np.zeros(jz2_mat.shape)
excited1[0,0] = 1
excited2[0,0] = 1
ground1[-1,-1] = 1
ground2[-1,-1] = 1
excited1 = Qobj(excited1)
excited2 = Qobj(excited2)
ground1 = Qobj(ground1)
ground2 = Qobj(ground2)
sdp = tensor(excited1, excited2)
sdap = tensor(ground1, excited2)
ground12 = tensor(ground1, ground2)
rho0 = sdap
#solve using qutip (using QuTiP's jmat() basis)
result = mesolve(L_collective, rho0, t, [],
e_ops = [jz12_mat, jz1_tot, jz2_tot],
options = Options(store_states=True))
rhot = result.states
jzt = result.expect[0]
jz1t = result.expect[1]
jz2t = result.expect[2]
Explanation: Time integration
End of explanation
# plot jz1t, jz2t, jz12t
j2max = (0.5 * N + 1) * (0.5 * N)
jmax = 0.5 * N
j1max = 0.5 * N1
j2max = 0.5 * N2
label_size = 20
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize = label_size)
plt.rc('ytick', labelsize = label_size)
fig_size = (12, 6)
lw = 2
fig1 = plt.figure(figsize = fig_size)
plt.plot(t/td0, jzt/jmax, '-', label = r"$\langle J_{z,\mathrm{ tot}}\rangle$", linewidth = 2*lw)
plt.plot(t/td0, jz1t/j1max, '--', label = r"$\langle J_{z,1}\rangle$", linewidth = lw)
plt.plot(t/td0, jz2t/j2max, '-.', label = r"$\langle J_{z,2}\rangle$", linewidth = lw)
plt.xlabel(r'$t/t_\text{D}$', fontsize = label_size)
plt.ylabel(r'$\langle J_z(t)\rangle$', fontsize = label_size)
plt.xticks([0, (tmax/2)/td0, tmax/td0])
plt.legend(fontsize = label_size)
plt.show()
plt.close()
# check partial traces
print(jz12_mat.ptrace(0)/norm2 == jz1_mat)
print(jz12_mat.ptrace(1)/norm1 == jz2_mat)
rho1pt = rho0.ptrace(0)
rho2pt = rho0.ptrace(1)
Explanation: Visualization
End of explanation
# Number of TLSs in the two ensembles
N1 = 5
N2 = 15
N = N1 + N2
# local-collective simulations with this system size take approx 5 minutes on a MacBook Pro for time integration
# TLSs bare frequency
w0 = 1
# Bose-Einstein distribution determines the occupation number
# low temperature limit
frequency_hertz = 10**(13)
temperature_kelvin = 10**(1)
x = (frequency_hertz / temperature_kelvin) * (constants.hbar / constants.Boltzmann)
n0 = 1/(np.exp(x) -1)
print("occupation number, n0 = ",n0)
# set collective pumping and collective emission rates (coupled ensembles)
g0 = 1
gCE = g0 * (1 + n0)
gCP = g0 * n0
# Local rates
gE = 1
gD = 1
# Collective rates of the single ensembles
gCEi = 1
# Algebra in the Dicke basis
[jx1_dicke, jy1_dicke, jz1_dicke] = jspin(N1)
jp1_dicke = jspin(N1,"+")
jm1_dicke = jp1_dicke.dag()
[jx2_dicke, jy2_dicke, jz2_dicke] = jspin(N2)
jp2_dicke = jspin(N2,"+")
jm2_dicke = jp2_dicke.dag()
# Bulding the tensor space for N1 + N2
dim1_dicke = num_dicke_states(N1)
dim2_dicke = num_dicke_states(N2)
id1_dicke = qeye(dim1_dicke)
id2_dicke = qeye(dim2_dicke)
norm2_dicke = id2_dicke.tr()
norm1_dicke = id1_dicke.tr()
# Place operators of a single ensemble (N1 or N2) in the tensor space
jz1_dicke_tot = tensor(jz1_dicke, id2_dicke)
jz2_dicke_tot = tensor(id1_dicke, jz2_dicke)
# Place operators of two ensemble (N1 + N2) in the tensor space
jx12_dicke = tensor(jx1_dicke, id2_dicke) + tensor(id1_dicke, jx2_dicke)
jy12_dicke = tensor(jy1_dicke, id2_dicke) + tensor(id1_dicke, jy2_dicke)
jz12_dicke = tensor(jz1_dicke, id2_dicke) + tensor(id1_dicke, jz2_dicke)
jm12_dicke = tensor(jm1_dicke, id2_dicke) + tensor(id1_dicke, jm2_dicke)
jp12_dicke = jm12_dicke.dag()
h1_dicke = w0 * jz1_dicke
h2_dicke = w0 * jz2_dicke
htot = tensor(h1_dicke, id2_dicke) + tensor(id1_dicke, h2_dicke)
# Build the collective Liovillian (Hamiltonian + collective Lindbladian)
L_collective_dicke = liouvillian(htot,[np.sqrt(gCE)*jm12_dicke, np.sqrt(gCP)*jp12_dicke])
# check algebra relations in tensor space
print(jp12_dicke * jm12_dicke - jm12_dicke * jp12_dicke == 2*jz12_dicke)
print(jx12_dicke * jy12_dicke - jy12_dicke * jx12_dicke == 1j*jz12_dicke)
Explanation: 2) Local-collective processes in the Dicke basis (PIQS + QuTiP)
System general and collective properties - QuTiP in the Dicke basis
End of explanation
## Define Piqs objects
# case 1: only collective coupled processes (already defined above)
system1 = Dicke(N = N1)
system2 = Dicke(N = N2)
# case 2: collective coupled processes + dephasing
system1gD = Dicke(N = N1)
system2gD = Dicke(N = N2)
system1gD.dephasing = gD
system2gD.dephasing = gD
# case 3: collective coupled processes + local emission
system1gE = Dicke(N = N1)
system2gE = Dicke(N = N2)
system1gE.emission = gE
system2gE.emission = gE
# case 4: collective coupled processes + collective emission of single ensembles
system1gCEi = Dicke(N = N1)
system2gCEi = Dicke(N = N2)
system1gCEi.collective_emission = gCEi
system2gCEi.collective_emission = gCEi
# Define identity operators in tensor space
id_tls1 = to_super(qeye(dim1_dicke))
id_tls2 = to_super(qeye(dim2_dicke))
###Build the Lindbladians
## case 1
L1_local_dicke = system1.liouvillian()
L2_local_dicke = system2.liouvillian()
print("case 1")
# Build local Lindbladians in tensor space
L_local_dicke = super_tensor(L1_local_dicke, id_tls2) + super_tensor(id_tls1, L2_local_dicke)
# Total local-collective Liouvillian in tensor space
L_dicke_tot = L_collective_dicke + L_local_dicke
## case 2
L1gD_local_dicke = system1gD.liouvillian()
L2gD_local_dicke = system2gD.liouvillian()
print("case 2")
# Build local Lindbladians in tensor space
LgD_local_dicke = super_tensor(L1gD_local_dicke, id_tls2) + super_tensor(id_tls1, L2gD_local_dicke)
# Total local-collective Liouvillian in tensor space
LgD_dicke_tot = L_collective_dicke + LgD_local_dicke
## case 3
L1gE_local_dicke = system1gE.liouvillian()
L2gE_local_dicke = system2gE.liouvillian()
print("case 3")
# Build local Lindbladians in tensor space
LgE_local_dicke = super_tensor(L1gE_local_dicke, id_tls2) + super_tensor(id_tls1, L2gE_local_dicke)
# Total local-collective Liouvillian in tensor space
LgE_dicke_tot = L_collective_dicke + LgE_local_dicke
## case 4
L1gCEi_local_dicke = system1gCEi.liouvillian()
L2gCEi_local_dicke = system2gCEi.liouvillian()
# Build local Lindbladians in tensor space
LgCEi_local_dicke = super_tensor(L1gCEi_local_dicke, id_tls2) + super_tensor(id_tls1, L2gCEi_local_dicke)
# Total local-collective Liouvillian in tensor space
LgCEi_dicke_tot = L_collective_dicke + LgCEi_local_dicke
print("case 4")
## Initial conditions
# set superradiant delay time for the excited ensemble (N2)
td0 = np.log(N2)/(N2*gCE)
tmax = 30 * td0
nt = 1001
t = np.linspace(0, tmax, nt)
# set initial tensor state for spins (Use QuTiP's jmat() basis)
excited1_dicke = excited(N1)
excited2_dicke = excited(N2)
ground1_dicke = ground(N1)
ground2_dicke = ground(N2)
sdp_dicke = tensor(excited1_dicke, excited2_dicke)
sdap_dicke = tensor(ground1_dicke, excited2_dicke)
ground12_dicke = tensor(ground1_dicke, ground2_dicke)
rho0_dicke = sdap_dicke
## Solve using qutip (using the Dicke basis)
# case 1
result_0 = mesolve(L_dicke_tot, rho0_dicke, t, [],
e_ops = [jz12_dicke, jz1_dicke_tot, jz2_dicke_tot],
options = Options(store_states=True))
rhot_0 = result_0.states
jzt_0 = result_0.expect[0]
jz1t_0 = result_0.expect[1]
jz2t_0 = result_0.expect[2]
print("case 1")
# case 2
result_gD = mesolve(LgD_dicke_tot, rho0_dicke, t, [],
e_ops = [jz12_dicke, jz1_dicke_tot, jz2_dicke_tot],
options = Options(store_states=True))
rhot_gD = result_gD.states
jzt_gD = result_gD.expect[0]
jz1t_gD = result_gD.expect[1]
jz2t_gD = result_gD.expect[2]
print("case 2")
# case 3
result_gE = mesolve(LgE_dicke_tot, rho0_dicke, t, [],
e_ops = [jz12_dicke, jz1_dicke_tot, jz2_dicke_tot],
options = Options(store_states=True))
rhot_gE = result_gE.states
jzt_gE = result_gE.expect[0]
jz1t_gE = result_gE.expect[1]
jz2t_gE = result_gE.expect[2]
print("case 3")
# case 4
result_gCEi = mesolve(LgCEi_dicke_tot, rho0_dicke, t, [],
e_ops = [jz12_dicke, jz1_dicke_tot, jz2_dicke_tot],
options = Options(store_states=True))
rhot_gCEi = result_gCEi.states
jzt_gCEi = result_gCEi.expect[0]
jz1t_gCEi = result_gCEi.expect[1]
jz2t_gCEi = result_gCEi.expect[2]
print("case 4")
Explanation: System local properties - Building local Lindbladians with PIQS
End of explanation
## Plots jz1t, jz2t, jz12t in the Dicke basis for different parameter values
#spin normalization constants
j2_max = (0.5 * N + 1) * (0.5 * N)
jmax = 0.5 * N
j1max = 0.5 * N1
j2max = 0.5 * N2
#plot graphics properties
plt.rc('text', usetex = True)
label_size = 20
fig_size = (14, 7)
lw = 2
lw1 = 1*lw
lw2 = 1*lw
lw3 = 1*lw
fig1 = plt.figure(figsize=(7,4))
plt.rc('xtick', labelsize = label_size)
plt.rc('ytick', labelsize = label_size)
plt.plot(t/td0, jz1t_0/j1max, '-k', label = r"$\gamma_\Downarrow$ Only", linewidth = lw)
plt.plot(t/td0, jz2t_0/j2max, '-r', linewidth = lw)
plt.plot(t/td0, jz1t_gE/j1max, '--k', label = r"$\gamma_\downarrow=\gamma_\Downarrow$", linewidth = lw2)
plt.plot(t/td0, jz2t_gE/j2max, '--r', linewidth = lw2)
plt.rcParams['text.latex.preamble']=[r"\usepackage{xcolor}"]
plt.xlabel(r'$t/t_\text{D}$', fontsize = label_size)
#make double label y-axis - STARTS
left = -5.5
center = 0
yshift = -0.4
#label Jz1
plt.text(left, center+yshift,r'$\langle J_{z}^{(1)}(t)\rangle$,',
horizontalalignment = 'right',
verticalalignment='center',
color = "k", rotation='vertical',fontsize = label_size)
#label Jz2
plt.text(left, center-yshift, r' $\langle J_{z}^{(2)}(t)\rangle$',
horizontalalignment='right', verticalalignment='center',
color = "r", rotation='vertical',fontsize = label_size)
#make double label y-axis - ENDS
plt.xticks([0, (tmax/2)/td0, tmax/td0])
plt.yticks([-1, -0.5, 0, 0.5, 1])
plt.legend(fontsize = label_size)
plt.title(r'Two ensembles', fontsize = label_size)
plt.show()
plt.close()
## Second Figure
plt.rc('xtick', labelsize = label_size)
plt.rc('ytick', labelsize = label_size)
fig2 = plt.figure(figsize=(7,4))
plt.plot(t/td0, jz1t_gCEi/j1max, '-.k', label = r"$\gamma_{\Downarrow,i}=\gamma_\Downarrow$",
linewidth = lw3)
plt.plot(t/td0, jz2t_gCEi/j2max, '-.r', linewidth = lw3)
plt.plot(t/td0, jz1t_gD/j1max, ':k', label = r"$\gamma_\phi=\gamma_\Downarrow$", linewidth = lw1)
plt.plot(t/td0, jz2t_gD/j2max, ':r',linewidth = lw1)
plt.rcParams['text.latex.preamble']=[r"\usepackage{xcolor}"]
plt.xlabel(r'$t/t_\text{D}$', fontsize = label_size)
#make double label y-axis - STARTS
#label Jz1
plt.text(left, center+yshift,r'$\langle J_{z}^{(1)}(t)\rangle$,',
horizontalalignment = 'right',
verticalalignment='center',
color = "k", rotation='vertical',fontsize = label_size)
#label Jz2
plt.text(left, center-yshift, r' $\langle J_{z}^{(2)}(t)\rangle$',
horizontalalignment='right', verticalalignment='center',
color = "r", rotation='vertical',fontsize = label_size)
#make double label y-axis - ENDS
plt.xticks([0, (tmax/2)/td0, tmax/td0])
plt.yticks([-1, -0.5, 0, 0.5, 1])
plt.legend(fontsize = label_size)
plt.title(r'Two ensembles', fontsize = label_size)
plt.show()
plt.close()
Explanation: Visualization with parameter dependence
End of explanation
qutip.about()
Explanation: We have studied the dissipative dynamics of two ensembles of TLSs, exploring the possibility for the systems to undergo local dephasing, collective emission of the single ensembles, collective emission of the two ensembles coupled to the same reservoir, and local de-excitations. We have found that in the general case, spin exchange between antisymmetrically prepared ensembles is transient [1].
References
[1] N. Shammah, S. Ahmed, N. Lambert, S. De Liberato, and F. Nori, https://arxiv.org/abs/1805.05129
Open quantum systems with local and collective incoherent processes: Efficient numerical simulation using permutational invariance
[2] Y. Hama, W.J. Munro, and K. Nemoto, Phys. Rev. Lett. 120, 060403 (2018)
Relaxation to Negative Temperatures in Double Domain Systems
[3] M. Xu, D.A. Tieri, E.C. Fine, J.K. Thompson, and M.J. Holland, Phys. Rev. Lett. 113, 154101 (2014)
Synchronization of Two Ensembles of Atoms
[4] B.A. Chase and J.M Geremia, Phys. Rev. A 78, 052101 (2010)
Collective processes of an ensemble of spin-1/2 particles
[5] J.R. Johansson, P.D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012)
http://qutip.org
End of explanation |
9,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
week1 - Confidence intervals - quiz
1.
Let's make the three-sigma rule more precise. Claim
Step1: Task 5
Step2: Task 6, 7
Step3: Task 8 | Python Code:
import scipy.stats
scipy.stats.norm.ppf(0.9985)
Explanation: week1 - Confidence intervals - quiz
1.
Let's make the three-sigma rule more precise. Claim: 99.7% of the probability mass of a random variable X∼N(μ,σ²) lies in the interval μ±c⋅σ. What is the exact value of the constant c? Round the answer to four decimal places.
5.
In a five-year randomized study by Harvard Medical School, 11037 subjects took aspirin every other day, and another 11034 took a placebo. The study was blind, i.e. the subjects did not know what exactly they were taking. Over the 5 years, a heart attack occurred in 104 of the subjects taking aspirin and in 189 of those taking the placebo. Estimate by how much the probability of a heart attack is reduced when taking aspirin. Round the answer to four decimal places.
6.
Now construct a 95% confidence interval for the reduction in the probability of a heart attack when taking aspirin. What is its upper bound? Round the answer to four decimal places.
7.
Let's continue analyzing the data from the Harvard Medical School experiment.
For Bernoulli random variables X∼Ber(p) one often computes the quantity p/(1−p), which is called the odds. To estimate the odds from a sample, substitute the sample estimate p̂ for p. For example, the odds of a heart attack in the control group that took the placebo can be estimated as ≈0.0174.
Estimate by what factor the odds of a heart attack are reduced with regular aspirin intake. Round the answer to four decimal places.
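As a quick sanity check of that ≈0.0174 figure, using the numbers quoted above: p̂ = 189/11034 ≈ 0.01713, so the odds are 0.01713/(1 - 0.01713) ≈ 0.0174.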
8.
The quantity you estimated in the previous question is called the odds ratio. Construct a 95% confidence interval for the odds ratio using the bootstrap. What is its lower bound? Round the answer to 4 decimal places.
To get exactly the same confidence interval as ours:
build the outcome vectors for the control and test samples so that all the ones come first, followed by all the zeros;
set random seed=0;
draw 1000 pseudo-samples from each group of patients.
Task 1
End of explanation
import numpy as np
n1=11037
k1=104
p1=float(k1)/n1
n2=11034
k2=189
p2=float(k2)/n2
p2-p1+1.96*np.sqrt(p2*(1-p2)/n2+p1*(1-p1)/n1)
Explanation: Task 5
End of explanation
odds1=p1/(1-p1)
odds2=p2/(1-p2)
print(odds1, odds2)
print(odds2/odds1)
Explanation: Task 6, 7
End of explanation
def get_bootstrap_samples(data, n_samples):
indices = np.random.randint(0, len(data), (n_samples, len(data)))
samples = data[indices]
return samples
def stat_intervals(stat, alpha):
boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
return boundaries
def get_p(k,n):
return float(k)/n
def get_odds(p):
return p/(1-p)
def get_chances_relation(sample1, sample2):
return get_odds(get_p(sum(sample2), len(sample2)))/get_odds(get_p(sum(sample1), len(sample1)))
data1 = np.zeros(n1)
data1[:k1]=1
data2 = np.zeros(n2)
data2[:k2]=1
np.random.seed(0)
samples1 = get_bootstrap_samples(data1, 1000)
samples2 = get_bootstrap_samples(data2, 1000)
rel_scores = []
for i in range(1000):
rel_scores.append(get_chances_relation(samples1[i], samples2[i]))
print "95% confidence interval:", stat_intervals(rel_scores, 0.05)
Explanation: Task 8
End of explanation |
9,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Methods for testing and plotting the functions in Potapov.py
Step1: Tests
In each of these tests, we construct a Blaschke-Potapov product. We apply our procedure on the resulting function and reconstruct the product.
Step2: Testing the code to generate an ABCD model. | Python Code:
import Potapov as P
import numpy as np
import matplotlib.pyplot as plt
import numpy.linalg as la
%pylab inline
def plot(L, dx, func, ij, *args):
'''
This function plots func(F(z)) for z*1j from -L to L for each function
F in args.
'''
i, j = ij  # tuple parameters were removed in Python 3, so unpack the (i, j) index pair here
x = np.linspace(-L, L, int(2. * L / dx))
for arg in args:
plt.plot(x,[func(arg(x_el*1j)[i,j]) for x_el in x ])
return
def make_plots(T1,T2):
'''
This function plots the absolute value T1 and T2 from -20 to 20 along
the components [0,1].
The purpose of this function is to compare the T1 and T2 functions.
'''
plt.figure(1)
plt.subplot(211)
plot(20,0.1,abs,[0,1],T1 )
plt.subplot(212)
plot(20,0.1,abs,[0,1],T2 )
plt.show()
return
def test(vals,vecs):
'''
Generate a rational transfer function based on some eigenvalues and
eigenvectors by generating a Blaschke-Potapov product. Test the
zero-pole interpolation method by finding the residues along the poles,
computing the eigenvalues and eigenvectors, and reconstructing the
Blaschke-Potapov product.
'''
T = P.finite_transfer_function(np.eye(2),vecs,vals)
T_test = P.get_Potapov(T,vals)
make_plots(T,T_test)
return
Explanation: Methods for testing and plotting the functions in Potapov.py
End of explanation
vals = [1-1j,-1+1j, 2+2j]
vecs = [ P.normalize(np.matrix([-5.,4j])).T, P.normalize(np.matrix([1j,3.]).T),
P.normalize(np.matrix([2j,7.]).T)]
test(vals,vecs)
vals = [ -2+1j,1-1j,-1+1j,2+2j,]
vecs = [ P.normalize(np.matrix([-5.,4j])).T, P.normalize(np.matrix([1j,3.]).T),\
P.normalize(np.matrix([-2.,4j]).T),P.normalize(np.matrix([2j,7.]).T), ]
test(vals,vecs)
vals = [1-1j,-1+1j, 2+2j,-2+1j]
vecs = [ P.normalize(np.matrix([-5.,4j])).T, P.normalize(np.matrix([1j,3.]).T),\
P.normalize(np.matrix([2j,7.]).T), P.normalize(np.matrix([-2.,4j]).T)]
test(vals,vecs)
Explanation: Tests
In each of these tests, we construct a Blaschke-Potapov product. We apply our procedure on the resulting function and reconstruct the product.
End of explanation
vals = [1-1j,-1+1j, 2+2j]
vecs = [ P.normalize(np.matrix([-5.,4j])).T, P.normalize(np.matrix([1j,3.]).T),
P.normalize(np.matrix([2j,7.]).T)]
[A,B,C,D] = P.get_Potapov_ABCD(vals,vecs)
M = A.shape[0]
T = P.finite_transfer_function(np.eye(2),vecs,vals)
T_ABCD = lambda z: D+C*la.inv(z*np.eye(M) - A)*B
make_plots(T,T_ABCD)
Explanation: Testing the code to generate an ABCD model.
End of explanation |
9,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dénes Csala
MCC, 2022
Based on Elements of Data Science (Allen B. Downey, 2021) and Python Data Science Handbook (Jake VanderPlas, 2018)
License
Step1: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset
Step2: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution
Step3: To see what these numbers mean, let's view them as vectors plotted on top of the data
Step4: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
The explained variance quantifies this measure of "importance" in direction.
Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance
Step5: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression
Step6: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
This is the sense in which "dimensionality reduction" works
Step7: This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.
What do the Components Mean?
PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors.
The input data is represented as a vector
Step8: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation.
The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum
Step9: Here we see that with only six PCA components, we recover a reasonable approximation of the input!
Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression where the loss favors noise. In this way, PCA can be used as a filtering process as well.
Choosing the Number of Components
But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components
Step10: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
PCA as data compression
As we mentioned, PCA can be used for is a sort of data compression. Using a small n_components allows you to represent a high dimensional point as a sum of just a few principal vectors.
Here's what a single digit looks like as you change the number of components
Step11: Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
Explanation: Dénes Csala
MCC, 2022
Based on Elements of Data Science (Allen B. Downey, 2021) and Python Data Science Handbook (Jake VanderPlas, 2018)
License: MIT
Dimensionality Reduction: Principal Component Analysis in-depth
Here we'll explore Principal Component Analysis, which is an extremely useful linear dimensionality reduction technique.
We'll start with our standard set of initial imports:
End of explanation
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
Explanation: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset:
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
Explanation: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution:
End of explanation
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
Explanation: To see what these numbers mean, let's view them as vectors plotted on top of the data:
End of explanation
clf = PCA(0.95) # keep 95% of variance
X_trans = clf.fit_transform(X)
print(X.shape)
print(X_trans.shape)
Explanation: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
The explained variance quantifies this measure of "importance" in direction.
Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance:
End of explanation
X_new = clf.inverse_transform(X_trans)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)
plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)
plt.axis('equal');
Explanation: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression:
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
print(X[0][:8])
print(X[0][8:16])
print(X[0][16:24])
print(X[0][24:32])
print(X[0][32:40])
print(X[0][40:48])
pca = PCA(2) # project from 64 to 2 dimensions
Xproj = pca.fit_transform(X)
print(X.shape)
print(Xproj.shape)
(1797*2)/(1797*64)
plt.scatter(Xproj[:, 0], Xproj[:, 1], c=y, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar();
Explanation: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
This is the sense in which "dimensionality reduction" works: if you can approximate a data set in a lower dimension, you can often have an easier time visualizing it or fitting complicated models to the data.
Application of PCA to Digits
The dimensionality reduction might seem a bit abstract in two dimensions, but the projection and dimensionality reduction can be extremely useful when visualizing high-dimensional data. Let's take a quick look at the application of PCA to the digits data we looked at before:
End of explanation
from fig_code.figures import plot_image_components
with plt.style.context('seaborn-white'):
plot_image_components(digits.data[0])
Explanation: This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.
What do the Components Mean?
PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors.
The input data is represented as a vector: in the case of the digits, our data is
$$
x = [x_1, x_2, x_3 \cdots]
$$
but what this really means is
$$
image(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots
$$
If we reduce the dimensionality in the pixel space to (say) 6, we recover only a partial image:
End of explanation
from fig_code.figures import plot_pca_interactive
plot_pca_interactive(digits.data)
Explanation: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation.
The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum:
End of explanation
pca = PCA().fit(X)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
Explanation: Here we see that with only six PCA components, we recover a reasonable approximation of the input!
Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression where the loss favors noise. In this way, PCA can be used as a filtering process as well.
Choosing the Number of Components
But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components:
End of explanation
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
pca = PCA(i + 1).fit(X)
im = pca.inverse_transform(pca.transform(X[25:26]))
ax.imshow(im.reshape((8, 8)), cmap='binary')
ax.text(0.95, 0.05, 'n = {0}'.format(i + 1), ha='right',
transform=ax.transAxes, color='green')
ax.set_xticks([])
ax.set_yticks([])
Explanation: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
PCA as data compression
As we mentioned, PCA can be used for is a sort of data compression. Using a small n_components allows you to represent a high dimensional point as a sum of just a few principal vectors.
Here's what a single digit looks like as you change the number of components:
End of explanation
from ipywidgets import interact
def plot_digits(n_components):
fig = plt.figure(figsize=(8, 8))
plt.subplot(1, 1, 1, frameon=False, xticks=[], yticks=[])
nside = 10
pca = PCA(n_components).fit(X)
Xproj = pca.inverse_transform(pca.transform(X[:nside ** 2]))
Xproj = np.reshape(Xproj, (nside, nside, 8, 8))
total_var = pca.explained_variance_ratio_.sum()
im = np.vstack([np.hstack([Xproj[i, j] for j in range(nside)])
for i in range(nside)])
plt.imshow(im)
plt.grid(False)
plt.title("n = {0}, variance = {1:.2f}".format(n_components, total_var),
size=18)
plt.clim(0, 16)
interact(plot_digits, n_components=[1, 15, 20, 25, 32, 40, 64]);
Explanation: Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once:
End of explanation |
9,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pycellerator demo.ipynb
demonstrates some basic features of pycellerator
Step1: Read, Translate, and Solve a Cellerator Model
Step2: print first 10 values of t and s to demonstrate content of output
Step3: <h1>Parametric Scan</h1>
Step4: <h1>Parametric Plots of Variables against one another </h1>
Step5: <h2> Example of changing the default figure size
Step11: <h1>Edit a model as text in the notebook rather than in an external file
Step12: The function newModel creates a model from text strings rather than by reading a file
Step13: <H1>Export Model file to SBML
Step14: Generate a Cellerator Mathematica Notebook from a Model
The inverse translation is implemented in the Cellerator function TextArrow; from within Mathematica, to generate the arrow forms for a python model file use "TextArrow/@model"
Step15: The PrintMathematica function serves no operational purpose; it just gives a pretty-print look into the contents of the file. | Python Code:
from cellerator import cellerator as c
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
%matplotlib inline
Explanation: pycellerator demo.ipynb
demonstrates some basic features of pycellerator
End of explanation
model="Gold1.model"
c.PrintModel(model)
c.PrintODES(model)
t, v, s = c.Solve(model, step=.2)
someplot=c.PlotAll(t,v,s,loc="lower right",bg="white")
Explanation: Read, Translate, and Solve a Cellerator Model
End of explanation
t[:10], v, s[:10]
Explanation: print first 10 values of t and s to demonstrate content of output
End of explanation
variables, pscan=c.Solve(model, scan=["Kc",0.1,0.4,0.01])
print(variables)
pscan = np.array(pscan)
kvals=pscan[:,0]; MVals=pscan[:,2]
plt.plot(kvals, MVals)
plt.xlabel("Kc"); plt.ylabel("Final Value of M")
Explanation: <h1>Parametric Scan</h1>
End of explanation
c.PlotParametric(t,v,s,1,2, color="Green", bg="pink")
c.PlotParametric(t,v,s,"X","C",color="Purple", bg="white")
c.PlotSize(9, 2) # change the plot size
someplots=c.PlotColumns(t,v,s, ncols=3, colors=["red","blue","green"])
Explanation: <h1>Parametric Plots of Variables against one another </h1>
End of explanation
mpl.rcParams["figure.figsize"]=15, 3 # another way of changing the plot size
someotherplots=c.PlotColumns(t,v,s, colors=["red","blue","green"])
Explanation: <h2> Example of changing the default figure size
End of explanation
r = """
[C <-> Nil, rates[kd, vi]]
[C |--> Nil, mod[X], Hill[vd, 1, Kd, 0,1 ]]
[M |--> Nil, mod[Nil], Hill[v2, 1, K2, 0, 1]]
[X |--> Nil, mod[Nil], Hill[v4, 1, K4, 0, 1]]
[Nil -> X, P]
[C |-> M, Hill["vm1*g(M)", 1, Kc, 0, 1]]
"""
ic = "C = 0.1;M = 0.2; X = 0.3"
rates = """
vd = 0.1
vi = 0.023
v2 = 0.167; v4 = 0.1
vm1 = 0.5; vm3 = 0.2
kd = 0.00333
K1 = 0.1; K2 = 0.1; K3 = 0.1
K4 = 0.1
Kc = 0.3
Kd = 0.02
"""
ass = "P = M * (1-X)/(K3+1-X)"
func = "g(m) = (1-m)/(K1+1-m)"
Explanation: <h1>Edit a model as text in the notebook rather than in an external file
End of explanation
q = c.newModel(r, ic, rates, func, ass)
t,v,s=c.Solve(q,step=1, duration=200)
c.PlotSize(20,3)
newplot=c.PlotAll(t,v,s,bg="lightgreen")
Explanation: The function newModel creates a model from text strings rather than by reading a file
End of explanation
newsbmlfile = c.GenerateSBML("Gold1.model", output="foo.xml")
print(newsbmlfile)
c.PrintSBML(newsbmlfile)
newmodelfile=c.ConvertSBML("foo.xml")
print(newmodelfile)
T,V,S=c.Solve(newmodelfile)
mpl.rcParams["figure.figsize"]=15, 3;
c.PlotAll(T,V,S)
Explanation: <H1>Export Model file to SBML
End of explanation
nb = c.ToMathematica("Gold1.model")
print(nb)
Explanation: Generate a Cellerator Mathematica Notebook from a Model
The inverse translation is implemented in the Cellerator function TextArrow; from within Mathematica, to generate the arrow forms for a python model file use "TextArrow/@model"
End of explanation
c.PrintMathematica(nb)
Explanation: The PrintMathematica function serves no operational purpose; it just gives a pretty-print look into the contents of the file.
End of explanation |
9,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
By listing the first six prime numbers
Step1: <!-- TEASER_END -->
Step2: This implementation scales quite well, and has good space and time complexity. | Python Code:
from itertools import count, islice
from collections import defaultdict
def _sieve_of_eratosthenes():
factors = defaultdict(set)
for n in count(2):
if factors[n]:
for m in factors.pop(n):
factors[n+m].add(m)
else:
factors[n*n].add(n)
yield n
list(islice(_sieve_of_eratosthenes(), 20))
Explanation: By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.
What is the 10001st prime number?
Sieve of Eratosthenes
Previously, we implemented the Sieve of Eratosthenes. However, our implementation demands an integer $m$ and can only generate primes less than $m$. While some approximation algorithms for determining the $n$th prime are available, we would like to produce an exact solution. Hence, we must implement a prime sieve that does not require an upper bound.
End of explanation
get_prime = lambda n: next(islice(_sieve_of_eratosthenes(), n, n+1))
# The Project Euler problem
# uses the 1-based index.
get_prime(10001-1)
Explanation: <!-- TEASER_END -->
End of explanation
get_prime(10**6)
Explanation: This implementation scales quite well: primes are generated lazily, and the dictionary only keeps roughly one pending entry per prime found so far, so both time and space usage grow modestly.
End of explanation |
9,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Graph Neural Nets with JAX/jraph
Lisa Wang, DeepMind ([email protected]), Nikola Jovanović, ETH Zurich ([email protected])
Colab Runtime
Step2: Fundamental Graph Concepts
A graph consists of a set of nodes and a set of edges, where edges form connections between nodes.
More formally, a graph is defined as $ \mathcal{G} = (\mathcal{V}, \mathcal{E})$ where $\mathcal{V}$ is the set of vertices / nodes, and $\mathcal{E}$ is the set of edges.
In an undirected graph, each edge is an unordered pair of two nodes $ \in \mathcal{V}$. E.g. a friend network can be represented as an undirected graph, assuming that the relationship "A is friends with B" implies "B is friends with A".
In a directed graph, each edge is an ordered pair of nodes $ \in \mathcal{V}$. E.g. a citation network would be best represented with a directed graph, since the relationship "A cites B" does not imply "B cites A".
The degree of a node is defined as the number of edges incident on it, i.e. the sum of incoming and outgoing edges for that node.
The in-degree is the sum of incoming edges only, and the out-degree is the sum of outgoing edges only.
There are several ways to represent $\mathcal{E}$
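One common choice, and the one jraph itself uses, is to store the edges as two parallel integer arrays of sender and receiver node indices. A minimal sketch with a made-up 4-node graph (illustrative only, not the colab's toy graph):
import numpy as np
# Made-up directed graph: edges 0->1, 1->2, 2->3.
senders = np.array([0, 1, 2])
receivers = np.array([1, 2, 3])
num_nodes = 4
out_degree = np.bincount(senders, minlength=num_nodes)   # outgoing edges per node
in_degree = np.bincount(receivers, minlength=num_nodes)  # incoming edges per node
print(in_degree + out_degree)  # total degree of each node: [1 2 2 1]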
Step3: Inspecting the GraphsTuple
Step4: Visualizing the Graph
To visualize the graph structure of the graph we created above, we will use the networkx library because it already has functions for drawing graphs.
We first convert the jraph.GraphsTuple to a networkx.DiGraph.
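A minimal sketch of that conversion, assuming the standard GraphsTuple fields nodes/senders/receivers/n_node (the helper actually used in the colab may differ in details):
import networkx as nx
import numpy as np

def convert_jraph_to_networkx_graph(jraph_graph) -> nx.DiGraph:
    nx_graph = nx.DiGraph()
    # Add every node, attaching its feature vector if the graph has node features.
    for n in range(int(jraph_graph.n_node[0])):
        if jraph_graph.nodes is None:
            nx_graph.add_node(n)
        else:
            nx_graph.add_node(n, node_feature=np.asarray(jraph_graph.nodes[n]))
    # Add one directed edge per (sender, receiver) pair.
    for s, r in zip(np.asarray(jraph_graph.senders), np.asarray(jraph_graph.receivers)):
        nx_graph.add_edge(int(s), int(r))
    return nx_graph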
Step5: Graph Convolutional Network (GCN) Layer
Now let's implement our first graph network!
The graph convolutional network, introduced by by Kipf et al. (2017) in https
Step6: We can now run the graph convolution on our toy graph from before.
Step7: Here is the visualized graph.
Step8: Since we used the identity function for updating nodes and sum aggregation, we can verify the results pretty easily. As a reminder, in this toy graph, the node features are the same as the node index.
Node 0
Step9: Add Trainable Parameters to GCN layer
So far our graph convolution operation doesn't have any learnable parameters.
Let's add an MLP block to the update function to make it trainable.
Step10: Check outputs of update_node_fn with MLP Block
Step11: As output, we expect the updated node features. We should see one array of dim 4 for each of the 4 nodes, which is the result of applying a single MLP block to the features of each node individually.
Step13: Add Self-Edges (Edges connecting a node to itself)
For each node, add an edge of the node onto itself. This way, nodes will include themselves in the aggregation step.
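One way to do this, sketched with plain jax.numpy operations (not necessarily the exact helper used in the colab), is to append an (i, i) edge for every node i to the sender and receiver arrays:
import jax.numpy as jnp

def add_self_edges_fn(receivers, senders, total_num_nodes):
    # Append one self-edge (i, i) per node.
    all_nodes = jnp.arange(total_num_nodes)
    receivers = jnp.concatenate((receivers, all_nodes), axis=0)
    senders = jnp.concatenate((senders, all_nodes), axis=0)
    return receivers, senders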
Step16: Add Symmetric Normalization
Note that the nodes may have different numbers of neighbors / degrees.
This could lead to instabilities during neural network training, e.g. exploding or vanishing gradients. To address that, normalization is a commonly used method. In this case, we will normalize by node degrees.
As a first attempt, we could count the number of incoming edges (including self-edge) and divide by that value.
More formally, let $A$ be the adjacency matrix defining the edges of the graph.
Then we define the degree matrix $D$ as a diagonal matrix with $D_{ii} = \sum_jA_{ij}$ (the degree of node $i$)
Now we can normalize $AH$ by dividing it by the node degrees
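For reference, the symmetric variant used by Kipf et al. normalizes on both sides with $D^{-1/2}$ instead of dividing once by the degree. Writing $\hat{A} = A + I$ for the adjacency matrix with self-edges added and $\hat{D}$ for its degree matrix, one GCN layer computes
$$ H' = \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H W, $$
so the message sent from node $j$ to node $i$ is scaled by $1/\sqrt{d_i d_j}$.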
Step17: Test General GCN Layer
Step19: Build GCN Model with Multiple Layers
With a single GCN layer, a node's representation after the GCN layer is only
influenced by its direct neighbourhood. However, we may want to consider larger neighbourhoods, i.e. more than just 1 hop away. To achieve that, we can stack
multiple GCN layers, similar to how stacking CNN layers expands the input region.
We will define a network with three GCN layers
Step23: Node Classification with GCN on Karate Club Dataset
Time to try out our GCN on our first graph prediction task!
Zachary's Karate Club Dataset
Zachary's karate club is a small dataset commonly used as an example for a social graph. It's great for demo purposes, as it's easy to visualize and quick to train a model on it.
A node represents a student or instructor in the club. An edge means that those two people have interacted outside of the class. There are two instructors in the club.
Each student is assigned to one of two instructors.
Optimizing the GCN on the Karate Club Node Classification Task
The task is to predict the assignment of students to instructors, given the social graph and only knowing the assignment of two nodes (the two instructors) a priori.
In other words, out of the 34 nodes, only two nodes are labeled, and we are trying to optimize the assignment of the other 32 nodes, by maximizing the log-likelihood of the two known node assignments.
We will compute the accuracy of our node assignments by comparing to the ground-truth assignments. Note that the ground-truth for the 32 student nodes is not used in the loss function itself.
Let's load the dataset
Step24: Visualize the karate club graph with circular node layout
Step26: Define the GCN with the GraphConvolution layers we implemented
Step29: Training and evaluation code
Step30: Let's train the GCN! We expect this model reach an accuracy of about 0.91.
Step31: Try modifying the model parameters to see if you can improve the accuracy!
You can also modify the dataset itself, and see how that influences model training.
Node assignments predicted by the model at the end of training
Step32: Visualize ground truth and predicted node assignments
Step35: Graph Attention (GAT) Layer
While the GCN we covered in the previous section can learn meaningful representations, it also has some shortcomings. Can you think of any?
In the GCN layer, the messages from all its neighbours and the node itself are equally weighted. This may lead to loss of node-specific information. E.g., consider the case when a set of nodes shares the same set of neighbors, and start out with different node features. Then because of averaging, their resulting output features would be the same. Adding self-edges mitigates this issue by a small amount, but this problem is magnified with increasing number of GCN layers and number of edges connecting to a node.
The graph attention (GAT) mechanism, as proposed by Velickovic et al. ( 2017), allows the network to learn how to weigh / assign importance to the node features from the neighbourhood when computing the new node features. This is very similar to the idea of using attention in Transformers, which were introduced in Vaswani et al. (2017).
(One could even argue that Transformers are graph attention networks operating on the special case of fully-connected graphs.)
In the figure below, $\vec{h}$ are the node features and $\vec{\alpha}$ are the learned attention weights.
<image src="https
Step36: Test GAT Layer
Step38: Train GAT Model on Karate Club Dataset
We will now repeat the karate club experiment with a GAT network.
Step39: Let's train the model!
We expect the model to reach an accuracy of about 0.97.
Step40: The final node assignment predicted by the trained model
Step41: Graph Classification on MUTAG (Molecules)
In the previous section, we used our GCN and GAT networks on a node classification problem. Now, let's use the same model architectures on a graph classification task.
The main difference from our previous setup is that instead of observing individual node latents, we are now attempting to summarize them into one embedding vector, representative of the entire graph, which we then use to predict the class of this graph.
We will do this on one of the most common tasks of this type -- molecular property prediction, where molecules are represented as graphs. Nodes correspond to atoms, and edges represent the bonds between them.
We will use the MUTAG dataset for this example, a common dataset from the TUDatasets collection.
We have converted this dataset to be compatible with jraph and will download it in the cell below.
Citation for TUDatasets
Step42: The dataset is saved as a list of examples, each example is a dictionary containing an input_graph and its corresponding target.
Step43: We see that there are 188 graphs, to be classified in one of 2 classes, representing "their mutagenic effect on a specific gram negative bacterium". Node features represent the 1-hot encoding of the atom type (0=C, 1=N, 2=O, 3=F, 4=I, 5=Cl, 6=Br). Edge features (edge_attr) represent the bond type, which we will here ignore.
Let's split the dataset to use the first 150 graphs as the training set (and the rest as the test set).
Step46: Padding Graphs to Speed Up Training
Since jax recompiles the program for each graph size, training would take a long time due to recompilation for different graph sizes. To address that, we pad the number of nodes and edges in the graphs to nearest power of two. Since jax maintains a cache
of compiled programs, the compilation cost is amortized.
Step50: Graph Network Model Definition
We will use jraph.GraphNetwork() to build our graph model. The GraphNetwork architecture is defined in Battaglia et al. (2018).
We first define update functions for nodes, edges, and the full graph (global). We will use MLP blocks for all three.
Step52: Loss and Accuracy Function
Define the classification cross-entropy loss and accuracy function.
Step55: Training and Evaluation Functions
Step56: We converge at ~76% test accuracy. We could of course further tune the parameters to improve this result.
Link prediction on CORA (Citation Network)
The final problem type we will explore is link prediction, an instance of an edge-level task. Given a graph, our goal is to predict whether a certain edge $(u,v)$ should be present or not. This is often useful in the recommender system settings (e.g., propose new friends in a social network, propose a movie to a user).
As before, the first step is to obtain node latents $h_i$ using a GNN. In this context we will use the autoencoder language and call this GNN encoder. Then, we learn a binary classifier $f
Step58: Splitting Edges and Adding "Negative" Edges
For the link prediction task, we split the edges into train, val and test sets and also add "negative" examples (edges that do not correspond to a citation). We will ignore the topic classes.
For the validation and test splits, we add the same number of existing edges ("positive examples") and non-existing edges ("negative examples").
In contrast to the validation and test splits, the training split only contains positive examples (set $T_+$). The $|T_+|$ negative examples to be used during training will be sampled ad hoc in each epoch and uniformly at random from all edges that are not in $T_+$. This allows the model to see a wider range of negative examples.
Step59: Test the Edge Splitting Function
Step63: Note that during training we may sample an initially existing edge (e.g. one that is a positive example in the test set) as a negative example; we are not allowed to check for this, since test edges must remain unseen during training.
Step67: To evaluate our model, we first apply the sigmoid function to the obtained dot products to get a score $s_{i,j} \in [0,1]$ for each edge. Now, we can pick a threshold $\tau$ and say that we predict all pairs $(i,j)$ s.t. $s_{i,j} \geq \tau$ as edges (and all the rest as non-edges).
Loss and ROC-AUC-Metric Function
Define the binary classification cross-entropy loss.
To aggregate the results over all choices of $\tau$, we will use ROC-AUC (the area under the ROC curve) as our evaluation metric.
Step69: Helper function for sampling negative edges during training.
Step71: Let's write the training loop
Step72: Let's train the model! We expect the model to reach roughly test_roc_auc of 0.84.
(Note that ROC-AUC is a scalar between 0 and 1, with 1 being the ROC-AUC of a perfect classifier.) | Python Code:
!pip install git+https://github.com/deepmind/jraph.git
!pip install flax
!pip install dm-haiku
# Imports
%matplotlib inline
import functools
import matplotlib.pyplot as plt
import jax
import jax.numpy as jnp
import jax.tree_util as tree
import jraph
import flax
import haiku as hk
import optax
import pickle
import numpy as onp
import networkx as nx
from typing import Any, Callable, Dict, List, Optional, Tuple
Explanation: Introduction to Graph Neural Nets with JAX/jraph
Lisa Wang, DeepMind ([email protected]), Nikola Jovanović, ETH Zurich ([email protected])
Colab Runtime:
If possible, please use a GPU hardware accelerator to run this colab. You can choose that under Runtime > Change Runtime Type.
Prerequisites:
* Some familiarity with JAX, you can refer to this colab for an introduction to JAX.
* Neural network basics
* Graph theory basics (MIT Open Courseware slides by Amir Ajorlou)
We recommend watching the Theoretical Foundations of Graph Neural Networks Lecture by Petar Veličković before working through this colab. The talk provides a theoretical introduction to Graph Neural Networks (GNNs), historical context and motivating examples.
Outline:
* Fundamental Graph Concepts
* Graph Prediction Tasks
* Intro to the jraph Library
* Graph Convolutional Network (GCN) Layer
* Build GCN Model with Multiple Layers
* Node Classification with GCN on Karate Club Dataset
* Graph Attention (GAT) Layer
* Train GAT Model on Karate Club Dataset
* Graph Classification on MUTAG (Molecules)
* Link Prediction on CORA (Citation Network)
* Bonus: Intro to Graph Adversarial Attacks
Additional Resources:
Battaglia et al. (2018): Relational inductive biases, deep learning, and graph networks
Some sections in this colab build on the GraphNets Tutorial colab in pytorch by Nikola Jovanović.
We would like to thank Razvan Pascanu and Petar Veličković for their valuable input and feedback.
Copyright 2022 by the Authors.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Setup: Install and Import libraries
End of explanation
def build_toy_graph() -> jraph.GraphsTuple:
"""Define a four node graph, each node has a scalar as its feature."""
# Nodes are defined implicitly by their features.
# We will add four nodes, each with a feature, e.g.
# node 0 has feature [0.],
# node 1 has feature [2.] etc.
# len(node_features) is the number of nodes.
node_features = jnp.array([[0.], [2.], [4.], [6.]])
# We will now specify 5 directed edges connecting the nodes we defined above.
# We define this with `senders` (source node indices) and `receivers`
# (destination node indices).
# For example, to add an edge from node 0 to node 1, we append 0 to senders,
# and 1 to receivers.
# We can do the same for all 5 edges:
# 0 -> 1
# 1 -> 2
# 2 -> 0
# 3 -> 0
# 0 -> 3
senders = jnp.array([0, 1, 2, 3, 0])
receivers = jnp.array([1, 2, 0, 0, 3])
# You can optionally add edge attributes to the 5 edges.
edges = jnp.array([[5.], [6.], [7.], [8.], [8.]])
# We then save the number of nodes and the number of edges.
# This information is used to make running GNNs over multiple graphs
# in a GraphsTuple possible.
n_node = jnp.array([4])
n_edge = jnp.array([5])
# Optionally you can add `global` information, such as a graph label.
global_context = jnp.array([[1]]) # Same feature dims as nodes and edges.
graph = jraph.GraphsTuple(
nodes=node_features,
edges=edges,
senders=senders,
receivers=receivers,
n_node=n_node,
n_edge=n_edge,
globals=global_context
)
return graph
graph = build_toy_graph()
Explanation: Fundamental Graph Concepts
A graph consists of a set of nodes and a set of edges, where edges form connections between nodes.
More formally, a graph is defined as $ \mathcal{G} = (\mathcal{V}, \mathcal{E})$ where $\mathcal{V}$ is the set of vertices / nodes, and $\mathcal{E}$ is the set of edges.
In an undirected graph, each edge is an unordered pair of two nodes $ \in \mathcal{V}$. E.g. a friend network can be represented as an undirected graph, assuming that the relationship "A is friends with B" implies "B is friends with A".
In a directed graph, each edge is an ordered pair of nodes $ \in \mathcal{V}$. E.g. a citation network would be best represented with a directed graph, since the relationship "A cites B" does not imply "B cites A".
The degree of a node is defined as the number of edges incident on it, i.e. the sum of incoming and outgoing edges for that node.
The in-degree is the sum of incoming edges only, and the out-degree is the sum of outgoing edges only.
There are several ways to represent $\mathcal{E}$:
1. As a list of edges: a list of pairs $(u,v)$, where $(u,v)$ means that there is an edge going from node $u$ to node $v$.
2. As an adjacency matrix: a binary square matrix $A$ of size $|\mathcal{V}| \times |\mathcal{V}|$, where $A_{u,v}=1$ iff there is a connection between nodes $u$ and $v$.
3. As an adjacency list: An array of $|\mathcal{V}|$ unordered lists, where the $i$th list corresponds to the $i$th node, and contains all the nodes directly connected to node $i$.
Example: Below is a directed graph with four nodes and five edges.
<image src="https://storage.googleapis.com/dm-educational/assets/graph-nets/toy_graph.png" width="400px">
The arrows on the edges indicate the direction of each edge, e.g. there is an edge going from node 0 to node 1. Between node 0 and node 3, there are two edges: one going from node 0 to node 3 and one from node 3 to node 0.
Node 0 has out-degree of 2, since it has two outgoing edges, and an in-degree of 2, since it has two incoming edges.
The list of edges is:
$$[(0, 1), (0, 3), (1, 2), (2, 0), (3, 0)]$$
As adjacency matrix:
$$\begin{array}{l|llll}
\text{source} \setminus \text{dest} & n_0 & n_1 & n_2 & n_3 \\ \hline
n_0 & 0 & 1 & 0 & 1 \\
n_1 & 0 & 0 & 1 & 0 \\
n_2 & 1 & 0 & 0 & 0 \\
n_3 & 1 & 0 & 0 & 0
\end{array}$$
As adjacency list:
$$[\{1, 3\}, \{2\}, \{0\}, \{0\}]$$
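To make these representations concrete, here is a minimal sketch (not part of the original notebook; the variable names are illustrative) that builds the adjacency matrix and adjacency list of the example graph from its edge list:

```python
import jax.numpy as jnp

# Edge list of the directed example graph above.
edge_list = [(0, 1), (0, 3), (1, 2), (2, 0), (3, 0)]
num_nodes = 4

# Adjacency matrix: A[u, v] = 1 iff there is an edge u -> v.
senders = jnp.array([u for u, _ in edge_list])
receivers = jnp.array([v for _, v in edge_list])
adjacency = jnp.zeros((num_nodes, num_nodes), dtype=jnp.int32)
adjacency = adjacency.at[senders, receivers].set(1)
print(adjacency)
# [[0 1 0 1]
#  [0 0 1 0]
#  [1 0 0 0]
#  [1 0 0 0]]

# Adjacency list: for each node, the nodes it has an outgoing edge to.
adjacency_list = [[v for u, v in edge_list if u == n] for n in range(num_nodes)]
print(adjacency_list)  # [[1, 3], [2], [0], [0]]

# Degrees: out-degree = row sums, in-degree = column sums.
print(adjacency.sum(axis=1))  # out-degrees: [2 1 1 1]
print(adjacency.sum(axis=0))  # in-degrees:  [2 1 1 1]
```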
Graph Prediction Tasks
What are the kinds of problems we want to solve on graphs?
The tasks fall into roughly three categories:
Node Classification: E.g. what is the topic of a paper given a citation network of papers?
Link Prediction / Edge Classification: E.g. are two people in a social network friends?
Graph Classification: E.g. is this protein molecule (represented as a graph) likely going to be effective?
<image src="https://storage.googleapis.com/dm-educational/assets/graph-nets/graph_tasks.png" width="700px">
The three main graph learning tasks. Image source: Petar Veličković.
Which examples of graph prediction tasks come to your mind? Which task types do they correspond to?
We will create and train models on all three task types in this tutorial.
Intro to the jraph Library
In the following sections, we will learn how to represent graphs and build GNNs in Python. We will use
jraph, a lightweight library for working with GNNs in JAX.
Representing a graph in jraph
In jraph, a graph is represented with a GraphsTuple object. In addition to defining the graph structure of nodes and edges, you can also store node features, edge features and global graph features in a GraphsTuple.
In the GraphsTuple, edges are represented in two aligned arrays of node indices: senders (source nodes) and receivers (destination nodes).
Each index corresponds to one edge, e.g. edge i goes from senders[i] to receivers[i].
You can even store multiple graphs in one GraphsTuple object.
We will start with creating a simple directed graph with 4 nodes and 5 edges. We will also add toy features to the nodes, using 2*node_index as the feature.
We will later use this toy graph in the GCN demo.
End of explanation
# Number of nodes
# Note that `n_node` returns an array. The length of `n_node` corresponds to
# the number of graphs stored in one `GraphsTuple`.
# In this case, we only have one graph, so n_node has length 1.
graph.n_node
# Number of edges
graph.n_edge
# Node features
graph.nodes
# Edge features
graph.edges
# Edges
graph.senders
graph.receivers
# Graph-level features
graph.globals
Explanation: Inspecting the GraphsTuple
End of explanation
def convert_jraph_to_networkx_graph(jraph_graph: jraph.GraphsTuple) -> nx.Graph:
nodes, edges, receivers, senders, _, _, _ = jraph_graph
nx_graph = nx.DiGraph()
if nodes is None:
for n in range(jraph_graph.n_node[0]):
nx_graph.add_node(n)
else:
for n in range(jraph_graph.n_node[0]):
nx_graph.add_node(n, node_feature=nodes[n])
if edges is None:
for e in range(jraph_graph.n_edge[0]):
nx_graph.add_edge(int(senders[e]), int(receivers[e]))
else:
for e in range(jraph_graph.n_edge[0]):
nx_graph.add_edge(
int(senders[e]), int(receivers[e]), edge_feature=edges[e])
return nx_graph
def draw_jraph_graph_structure(jraph_graph: jraph.GraphsTuple) -> None:
nx_graph = convert_jraph_to_networkx_graph(jraph_graph)
pos = nx.spring_layout(nx_graph)
nx.draw(
nx_graph, pos=pos, with_labels=True, node_size=500, font_color='yellow')
draw_jraph_graph_structure(graph)
Explanation: Visualizing the Graph
To visualize the structure of the graph we created above, we will use the networkx library because it already has functions for drawing graphs.
We first convert the jraph.GraphsTuple to a networkx.DiGraph.
End of explanation
def apply_simplified_gcn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
# Unpack GraphsTuple
nodes, _, receivers, senders, _, _, _ = graph
# 1. Update node features
# For simplicity, we will first use an identity function here, and replace it
# with a trainable MLP block later.
update_node_fn = lambda nodes: nodes
nodes = update_node_fn(nodes)
# 2. Aggregate node features over nodes in neighborhood
# Equivalent to jnp.sum(n_node), but jittable
total_num_nodes = tree.tree_leaves(nodes)[0].shape[0]
aggregate_nodes_fn = jax.ops.segment_sum
# Compute new node features by aggregating messages from neighboring nodes
nodes = tree.tree_map(lambda x: aggregate_nodes_fn(x[senders], receivers,
total_num_nodes), nodes)
out_graph = graph._replace(nodes=nodes)
return out_graph
Explanation: Graph Convolutional Network (GCN) Layer
Now let's implement our first graph network!
The graph convolutional network, introduced by by Kipf et al. (2017) in https://arxiv.org/abs/1609.02907, is one of the basic graph network architectures. We will build its core building block, the graph convolutional layer.
In a convolutional neural network (CNN), a convolutional filter (e.g. 3x3) is applied repeatedly to different parts of a larger input (e.g. 64x64) by striding across the input.
In a GCN, a convolution filter is applied to the neighbourhoods around a node in a graph.
However, there are also some differences to point out:
In contrast to the CNN filter, the neighbourhoods in a GCN can be of different sizes, and there is no ordering of inputs. To see that, note that the CNN filter performs a weighted sum aggregation over the inputs with learnable weights, where each filter input has its own weight. In the GCN, the same weight is applied to all neighbours and the aggregation function is not learned. In other words, in a GCN, each neighbor contributes equally. This is why the CNN filter is not order-invariant, but the GCN filter is.
<image src="https://storage.googleapis.com/dm-educational/assets/graph-nets/cnn_vs_gnn.png" width="400px">
Comparison of CNN and GCN filters.
Image source: https://arxiv.org/pdf/1901.00596.pdf
More specifically, the GCN layer performs two steps:
Compute messages / update node features: Create a feature vector $\vec{h}_n$ for each node $n$ (e.g. with an MLP). This is going to be the message that this node will pass to neighboring nodes.
Message-passing / aggregate node features: For each node, calculate a new feature vector $\vec{h}'_n$ based on the messages (features) from the nodes in its neighborhood. In a directed graph, only nodes from incoming edges are counted as neighbors. The image below shows this aggregation step. There are multiple options for aggregation in a GCN, e.g. taking the mean, the sum, the min or max. (Later in this tutorial, we will also see how we can make the aggregation function dependent on the node features by adding an attention mechanism in the Graph Attention Network.)
<image src="https://storage.googleapis.com/dm-educational/assets/graph-nets/graph_conv.png" width="500px">
\"A generic overview of a graph convolution operation, highlighting the relevant information for deriving the next-level features for every node in the graph.\" Image source: Petar Veličković (https://github.com/PetarV-/TikZ)
Simple GCN Layer
End of explanation
graph = build_toy_graph()
Explanation: We can now run the graph convolution on our toy graph from before.
End of explanation
draw_jraph_graph_structure(graph)
out_graph = apply_simplified_gcn(graph)
Explanation: Here is the visualized graph.
End of explanation
out_graph.nodes
Explanation: Since we used the identity function for updating nodes and sum aggregation, we can verify the results pretty easily. As a reminder, in this toy graph, each node's feature is two times its node index.
Node 0: sum of features from node 2 and node 3 $\rightarrow$ 10.
Node 1: sum of features from node 0 $\rightarrow$ 0.
Node 2: sum of features from node 1 $\rightarrow$ 2.
Node 3: sum of features from node 0 $\rightarrow$ 0.
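These numbers can also be reproduced directly with jax.ops.segment_sum, the aggregation primitive used in apply_simplified_gcn. A minimal sketch (illustrative only):

```python
import jax
import jax.numpy as jnp

# Toy graph: edges 0->1, 1->2, 2->0, 3->0, 0->3; node n has feature [2 * n].
senders = jnp.array([0, 1, 2, 3, 0])
receivers = jnp.array([1, 2, 0, 0, 3])
node_features = jnp.array([[0.], [2.], [4.], [6.]])

# Each edge carries the sender's features as its message.
messages = node_features[senders]  # shape [num_edges, 1]

# segment_sum adds up all messages that share the same receiver index,
# i.e. it sums over each node's incoming neighbourhood.
aggregated = jax.ops.segment_sum(messages, receivers, num_segments=4)
print(aggregated)  # [[10.], [0.], [2.], [0.]]
```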
End of explanation
class MLP(hk.Module):
def __init__(self, features: jnp.ndarray):
super().__init__()
self.features = features
def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
layers = []
for feat in self.features[:-1]:
layers.append(hk.Linear(feat))
layers.append(jax.nn.relu)
layers.append(hk.Linear(self.features[-1]))
mlp = hk.Sequential(layers)
return mlp(x)
# Use MLP block to define the update node function
update_node_fn = lambda x: MLP(features=[8, 4])(x)
Explanation: Add Trainable Parameters to GCN layer
So far our graph convolution operation doesn't have any learnable parameters.
Let's add an MLP block to the update function to make it trainable.
End of explanation
graph = build_toy_graph()
update_node_module = hk.without_apply_rng(hk.transform(update_node_fn))
params = update_node_module.init(jax.random.PRNGKey(42), graph.nodes)
out = update_node_module.apply(params, graph.nodes)
Explanation: Check outputs of update_node_fn with MLP Block
End of explanation
out
Explanation: As output, we expect the updated node features. We should see one array of dim 4 for each of the 4 nodes, which is the result of applying a single MLP block to the features of each node individually.
End of explanation
def add_self_edges_fn(receivers: jnp.ndarray, senders: jnp.ndarray,
total_num_nodes: int) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Adds self edges. Assumes self edges are not in the graph yet."""
receivers = jnp.concatenate((receivers, jnp.arange(total_num_nodes)), axis=0)
senders = jnp.concatenate((senders, jnp.arange(total_num_nodes)), axis=0)
return receivers, senders
Explanation: Add Self-Edges (Edges connecting a node to itself)
For each node, add an edge of the node onto itself. This way, nodes will include themselves in the aggregation step.
End of explanation
# Adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L506
def GraphConvolution(update_node_fn: Callable,
aggregate_nodes_fn: Callable = jax.ops.segment_sum,
add_self_edges: bool = False,
symmetric_normalization: bool = True) -> Callable:
"""Returns a method that applies a Graph Convolution layer.
Graph Convolutional layer as in https://arxiv.org/abs/1609.02907,
NOTE: This implementation does not add an activation after aggregation.
If you are stacking layers, you may want to add an activation between
each layer.
Args:
update_node_fn: function used to update the nodes. In the paper a single
layer MLP is used.
aggregate_nodes_fn: function used to aggregate the sender nodes.
add_self_edges: whether to add self edges to nodes in the graph as in the
paper definition of GCN. Defaults to False.
symmetric_normalization: whether to use symmetric normalization. Defaults to
True.
Returns:
A method that applies a Graph Convolution layer.
"""
def _ApplyGCN(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
"""Applies a Graph Convolution layer."""
nodes, _, receivers, senders, _, _, _ = graph
# First pass nodes through the node updater.
nodes = update_node_fn(nodes)
# Equivalent to jnp.sum(n_node), but jittable
total_num_nodes = tree.tree_leaves(nodes)[0].shape[0]
if add_self_edges:
# We add self edges to the senders and receivers so that each node
# includes itself in aggregation.
# In principle, a `GraphsTuple` should partition by n_edge, but in
# this case it is not required since a GCN is agnostic to whether
# the `GraphsTuple` is a batch of graphs or a single large graph.
conv_receivers, conv_senders = add_self_edges_fn(receivers, senders,
total_num_nodes)
else:
conv_senders = senders
conv_receivers = receivers
# pylint: disable=g-long-lambda
if symmetric_normalization:
# Calculate the normalization values.
count_edges = lambda x: jax.ops.segment_sum(
jnp.ones_like(conv_senders), x, total_num_nodes)
sender_degree = count_edges(conv_senders)
receiver_degree = count_edges(conv_receivers)
# Pre normalize by sqrt sender degree.
# Avoid dividing by 0 by taking maximum of (degree, 1).
nodes = tree.tree_map(
lambda x: x * jax.lax.rsqrt(jnp.maximum(sender_degree, 1.0))[:, None],
nodes,
)
# Aggregate the pre-normalized nodes.
nodes = tree.tree_map(
lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers,
total_num_nodes), nodes)
# Post normalize by sqrt receiver degree.
# Avoid dividing by 0 by taking maximum of (degree, 1).
nodes = tree.tree_map(
lambda x:
(x * jax.lax.rsqrt(jnp.maximum(receiver_degree, 1.0))[:, None]),
nodes,
)
else:
nodes = tree.tree_map(
lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers,
total_num_nodes), nodes)
# pylint: enable=g-long-lambda
return graph._replace(nodes=nodes)
return _ApplyGCN
Explanation: Add Symmetric Normalization
Note that the nodes may have different numbers of neighbors / degrees.
This could lead to instabilities during neural network training, e.g. exploding or vanishing gradients. To address that, normalization is a commonly used method. In this case, we will normalize by node degrees.
As a first attempt, we could count the number of incoming edges (including self-edge) and divide by that value.
More formally, let $A$ be the adjacency matrix defining the edges of the graph.
Then we define the degree matrix $D$ as a diagonal matrix with $D_{ii} = \sum_jA_{ij}$ (the degree of node $i$)
Now we can normalize $AH$ by dividing it by the node degrees:
$${D}^{-1}AH$$
To take both the in and out degrees into account, we can use symmetric normalization, which is also what Kipf and Welling proposed in their paper:
$$D^{-\frac{1}{2}}AD^{-\frac{1}{2}}H$$
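As a sanity check on this formula, here is a minimal dense-matrix sketch (illustrative only; the GraphConvolution layer works on the senders/receivers arrays and normalizes by sender and receiver degrees separately, rather than building a dense adjacency matrix):

```python
import jax.numpy as jnp

# Toy graph adjacency with self-edges added (A + I).
A = jnp.array([[0., 1., 0., 1.],
               [0., 0., 1., 0.],
               [1., 0., 0., 0.],
               [1., 0., 0., 0.]]) + jnp.eye(4)

# Node features H (one scalar per node, as in the toy graph).
H = jnp.array([[0.], [2.], [4.], [6.]])

# Degree of A + I and its inverse square root. For simplicity this sketch
# uses row sums, i.e. it treats the graph as if it were undirected.
deg = A.sum(axis=1)
D_inv_sqrt = jnp.diag(1.0 / jnp.sqrt(deg))

# Symmetrically normalized propagation D^{-1/2} A D^{-1/2} H
# (the learnable node update of H is omitted here).
H_new = D_inv_sqrt @ A @ D_inv_sqrt @ H
print(H_new)
```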
General GCN Layer
Now we can write a more general and configurable version of the Graph Convolution layer, allowing the caller to specify:
update_node_fn: Function to use to update node features (e.g. the MLP block version we just implemented)
aggregate_nodes_fn: Aggregation function to use to aggregate messages from neighbourhood.
add_self_edges: Whether to add self edges for aggregation step.
symmetric_normalization: Whether to add symmetric normalization.
End of explanation
gcn_layer = GraphConvolution(
update_node_fn=lambda n: MLP(features=[8, 4])(n),
aggregate_nodes_fn=jax.ops.segment_sum,
add_self_edges=True,
symmetric_normalization=True
)
graph = build_toy_graph()
network = hk.without_apply_rng(hk.transform(gcn_layer))
params = network.init(jax.random.PRNGKey(42), graph)
out_graph = network.apply(params, graph)
out_graph.nodes
Explanation: Test General GCN Layer
End of explanation
def gcn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
"""Defines a graph neural network with 3 GCN layers.
Args:
graph: GraphsTuple the network processes.
Returns:
output graph with updated node values.
"""
gn = GraphConvolution(
update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)),
add_self_edges=True)
graph = gn(graph)
gn = GraphConvolution(
update_node_fn=lambda n: jax.nn.relu(hk.Linear(4)(n)),
add_self_edges=True)
graph = gn(graph)
gn = GraphConvolution(
update_node_fn=hk.Linear(2))
graph = gn(graph)
return graph
graph = build_toy_graph()
network = hk.without_apply_rng(hk.transform(gcn))
params = network.init(jax.random.PRNGKey(42), graph)
out_graph = network.apply(params, graph)
out_graph.nodes
Explanation: Build GCN Model with Multiple Layers
With a single GCN layer, a node's representation after the GCN layer is only
influenced by its direct neighbourhood. However, we may want to consider larger neighbourhoods, i.e. more than just 1 hop away. To achieve that, we can stack
multiple GCN layers, similar to how stacking CNN layers expands the input region.
We will define a network with three GCN layers:
End of explanation
"""Zachary's karate club example.
From https://github.com/deepmind/jraph/blob/master/jraph/examples/zacharys_karate_club.py.
Here we train a graph neural network to process Zachary's karate club.
https://en.wikipedia.org/wiki/Zachary%27s_karate_club
Zachary's karate club is used in the literature as an example of a social graph.
Here we use a graphnet to optimize the assignments of the students in the
karate club to two distinct karate instructors (Mr. Hi and John A).
"""
def get_zacharys_karate_club() -> jraph.GraphsTuple:
"""Returns GraphsTuple representing Zachary's karate club."""
social_graph = [
(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2),
(4, 0), (5, 0), (6, 0), (6, 4), (6, 5), (7, 0), (7, 1),
(7, 2), (7, 3), (8, 0), (8, 2), (9, 2), (10, 0), (10, 4),
(10, 5), (11, 0), (12, 0), (12, 3), (13, 0), (13, 1), (13, 2),
(13, 3), (16, 5), (16, 6), (17, 0), (17, 1), (19, 0), (19, 1),
(21, 0), (21, 1), (25, 23), (25, 24), (27, 2), (27, 23),
(27, 24), (28, 2), (29, 23), (29, 26), (30, 1), (30, 8),
(31, 0), (31, 24), (31, 25), (31, 28), (32, 2), (32, 8),
(32, 14), (32, 15), (32, 18), (32, 20), (32, 22), (32, 23),
(32, 29), (32, 30), (32, 31), (33, 8), (33, 9), (33, 13),
(33, 14), (33, 15), (33, 18), (33, 19), (33, 20), (33, 22),
(33, 23), (33, 26), (33, 27), (33, 28), (33, 29), (33, 30),
(33, 31), (33, 32)]
# Add reverse edges.
social_graph += [(edge[1], edge[0]) for edge in social_graph]
n_club_members = 34
return jraph.GraphsTuple(
n_node=jnp.asarray([n_club_members]),
n_edge=jnp.asarray([len(social_graph)]),
# One-hot encoding for nodes, i.e. argmax(nodes) = node index.
nodes=jnp.eye(n_club_members),
# No edge features.
edges=None,
globals=None,
senders=jnp.asarray([edge[0] for edge in social_graph]),
receivers=jnp.asarray([edge[1] for edge in social_graph]))
def get_ground_truth_assignments_for_zacharys_karate_club() -> jnp.ndarray:
"""Returns ground truth assignments for Zachary's karate club."""
return jnp.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1,
0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
graph = get_zacharys_karate_club()
print(f'Number of nodes: {graph.n_node[0]}')
print(f'Number of edges: {graph.n_edge[0]}')
Explanation: Node Classification with GCN on Karate Club Dataset
Time to try out our GCN on our first graph prediction task!
Zachary's Karate Club Dataset
Zachary's karate club is a small dataset commonly used as an example for a social graph. It's great for demo purposes, as it's easy to visualize and quick to train a model on it.
A node represents a student or instructor in the club. An edge means that those two people have interacted outside of the class. There are two instructors in the club.
Each student is assigned to one of two instructors.
Optimizing the GCN on the Karate Club Node Classification Task
The task is to predict the assignment of students to instructors, given the social graph and only knowing the assignment of two nodes (the two instructors) a priori.
In other words, out of the 34 nodes, only two nodes are labeled, and we are trying to optimize the assignment of the other 32 nodes, by maximizing the log-likelihood of the two known node assignments.
We will compute the accuracy of our node assignments by comparing to the ground-truth assignments. Note that the ground-truth for the 32 student nodes is not used in the loss function itself.
Let's load the dataset:
End of explanation
nx_graph = convert_jraph_to_networkx_graph(graph)
pos = nx.circular_layout(nx_graph)
plt.figure(figsize=(6, 6))
nx.draw(nx_graph, pos=pos, with_labels = True, node_size=500, font_color='yellow')
Explanation: Visualize the karate club graph with circular node layout:
End of explanation
def gcn_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
"""Defines a GCN for the karate club task.
Args:
graph: GraphsTuple the network processes.
Returns:
output graph with updated node values.
"""
gn = GraphConvolution(
update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)),
add_self_edges=True)
graph = gn(graph)
gn = GraphConvolution(
update_node_fn=hk.Linear(2)) # output dim is 2 because we have 2 output classes.
graph = gn(graph)
return graph
Explanation: Define the GCN with the GraphConvolution layers we implemented:
End of explanation
def optimize_club(network: hk.Transformed, num_steps: int) -> jnp.ndarray:
"""Solves the karate club problem by optimizing the assignments of students."""
zacharys_karate_club = get_zacharys_karate_club()
labels = get_ground_truth_assignments_for_zacharys_karate_club()
params = network.init(jax.random.PRNGKey(42), zacharys_karate_club)
@jax.jit
def predict(params: hk.Params) -> jnp.ndarray:
decoded_graph = network.apply(params, zacharys_karate_club)
return jnp.argmax(decoded_graph.nodes, axis=1)
@jax.jit
def prediction_loss(params: hk.Params) -> jnp.ndarray:
decoded_graph = network.apply(params, zacharys_karate_club)
# We interpret the decoded nodes as a pair of logits for each node.
log_prob = jax.nn.log_softmax(decoded_graph.nodes)
# The only two assignments we know a-priori are those of Mr. Hi (Node 0)
# and John A (Node 33).
return -(log_prob[0, 0] + log_prob[33, 1])
opt_init, opt_update = optax.adam(1e-2)
opt_state = opt_init(params)
@jax.jit
def update(params: hk.Params, opt_state) -> Tuple[hk.Params, Any]:
"""Returns updated params and state."""
g = jax.grad(prediction_loss)(params)
updates, opt_state = opt_update(g, opt_state)
return optax.apply_updates(params, updates), opt_state
@jax.jit
def accuracy(params: hk.Params) -> jnp.ndarray:
decoded_graph = network.apply(params, zacharys_karate_club)
return jnp.mean(jnp.argmax(decoded_graph.nodes, axis=1) == labels)
for step in range(num_steps):
print(f"step {step} accuracy {accuracy(params).item():.2f}")
params, opt_state = update(params, opt_state)
return predict(params)
Explanation: Training and evaluation code:
End of explanation
network = hk.without_apply_rng(hk.transform(gcn_definition))
result = optimize_club(network, num_steps=15)
Explanation: Let's train the GCN! We expect this model to reach an accuracy of about 0.91.
End of explanation
result
Explanation: Try modifying the model parameters to see if you can improve the accuracy!
You can also modify the dataset itself, and see how that influences model training.
Node assignments predicted by the model at the end of training:
End of explanation
zacharys_karate_club = get_zacharys_karate_club()
nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club)
pos = nx.circular_layout(nx_graph)
fig = plt.figure(figsize=(15, 7))
ax1 = fig.add_subplot(121)
nx.draw(
nx_graph,
pos=pos,
with_labels=True,
node_size=500,
node_color=result.tolist(),
font_color='white')
ax1.title.set_text('Predicted Node Assignments with GCN')
gt_labels = get_ground_truth_assignments_for_zacharys_karate_club()
ax2 = fig.add_subplot(122)
nx.draw(
nx_graph,
pos=pos,
with_labels=True,
node_size=500,
node_color=gt_labels.tolist(),
font_color='white')
ax2.title.set_text('Ground-Truth Node Assignments')
fig.suptitle('Do you spot the difference? 😐', y=-0.01)
plt.show()
Explanation: Visualize ground truth and predicted node assignments:
What do you think of the results?
End of explanation
# GAT implementation adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L442.
def GAT(attention_query_fn: Callable,
attention_logit_fn: Callable,
node_update_fn: Optional[Callable] = None,
add_self_edges: bool = True) -> Callable:
"""Returns a method that applies a Graph Attention Network layer.
Graph Attention message passing as described in
https://arxiv.org/pdf/1710.10903.pdf. This model expects node features as a
jnp.array, may use edge features for computing attention weights, and
ignores global features. It does not support nests.
Args:
attention_query_fn: function that generates attention queries from sender
node features.
attention_logit_fn: function that converts attention queries into logits for
softmax attention.
node_update_fn: function that updates the aggregated messages. If None, will
apply leaky relu and concatenate (if using multi-head attention).
add_self_edges: whether to add self edges to the graph before aggregation.
Defaults to True.
Returns:
A function that applies a Graph Attention layer.
"""
# pylint: disable=g-long-lambda
if node_update_fn is None:
# By default, apply the leaky relu and then concatenate the heads on the
# feature axis.
node_update_fn = lambda x: jnp.reshape(
jax.nn.leaky_relu(x), (x.shape[0], -1))
def _ApplyGAT(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
"""Applies a Graph Attention layer."""
nodes, edges, receivers, senders, _, _, _ = graph
# Equivalent to the sum of n_node, but statically known.
try:
sum_n_node = nodes.shape[0]
except IndexError:
raise IndexError('GAT requires node features')
# Pass nodes through the attention query function to transform
# node features, e.g. with an MLP.
nodes = attention_query_fn(nodes)
total_num_nodes = tree.tree_leaves(nodes)[0].shape[0]
if add_self_edges:
# We add self edges to the senders and receivers so that each node
# includes itself in aggregation.
receivers, senders = add_self_edges_fn(receivers, senders,
total_num_nodes)
# We compute the softmax logits using a function that takes the
# embedded sender and receiver attributes.
sent_attributes = nodes[senders]
received_attributes = nodes[receivers]
att_softmax_logits = attention_logit_fn(sent_attributes,
received_attributes, edges)
# Compute the attention softmax weights on the entire tree.
att_weights = jraph.segment_softmax(
att_softmax_logits, segment_ids=receivers, num_segments=sum_n_node)
# Apply attention weights.
messages = sent_attributes * att_weights
# Aggregate messages to nodes.
nodes = jax.ops.segment_sum(messages, receivers, num_segments=sum_n_node)
# Apply an update function to the aggregated messages.
nodes = node_update_fn(nodes)
return graph._replace(nodes=nodes)
# pylint: enable=g-long-lambda
return _ApplyGAT
Explanation: Graph Attention (GAT) Layer
While the GCN we covered in the previous section can learn meaningful representations, it also has some shortcomings. Can you think of any?
In the GCN layer, the messages from all of a node's neighbours and from the node itself are weighted equally. This may lead to a loss of node-specific information. For example, consider a set of nodes that share the same set of neighbours but start out with different node features: because of the averaging, their aggregated output features would be the same. Adding self-edges mitigates this issue to a small degree, but the problem is magnified with an increasing number of GCN layers and of edges connecting to a node.
The graph attention (GAT) mechanism, proposed by Velickovic et al. (2017), allows the network to learn how to weigh / assign importance to the node features from the neighbourhood when computing the new node features. This is very similar to the idea of using attention in Transformers, introduced in Vaswani et al. (2017).
(One could even argue that Transformers are graph attention networks operating on the special case of fully-connected graphs.)
In the figure below, $\vec{h}$ are the node features and $\vec{\alpha}$ are the learned attention weights.
<image src="https://storage.googleapis.com/dm-educational/assets/graph-nets/gat1.png" width="400px">
Figure Credit: Velickovic et al. (2017).
(Detail: This image is showing multi-headed attention with 3 heads, each color corresponding to a different head. At the end, an aggregation function is applied over all the heads.)
To obtain the output node features of the GAT layer, we compute:
$$ \vec{h}'_i = \sum_{j \in \mathcal{N}(i)}\alpha_{ij} \mathbf{W} \vec{h}_j$$
Here, $\mathbf{W}$ is a weight matrix which performs a linear transformation on the input.
How do we obtain $\alpha$, or in other words, learn what to pay attention to?
Intuitively, the attention coefficient $\alpha_{ij}$ should rely on the transformed features of both nodes $i$ and $j$. So let's first define an attention function $\mathrm{attention\_fn}$ that computes the intermediary attention coefficients $e_{ij}$:
$$ e_{ij} = \mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j)$$
To obtain normalized attention weights $\alpha$, we apply a softmax over each node's neighbourhood:
$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}(i)}\exp(e_{ik})}$$
For $\mathrm{attention\_fn}$, the authors of the GAT paper chose to concatenate the transformed node features (denoted by $||$) and apply a single-layer feedforward network, parameterized by a weight vector $\vec{\mathbf{a}}$, with LeakyReLU as the non-linearity.
In the implementation below, we refer to $\mathbf{W}$ as attention_query_fn and to $\mathrm{attention\_fn}$ as attention_logit_fn.
$$\mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j) = \text{LeakyReLU}(\vec{\mathbf{a}}^T [\mathbf{W}\vec{h}_i \, || \, \mathbf{W}\vec{h}_j])$$
The figure below summarizes this attention mechanism visually.
<image src="https://storage.googleapis.com/dm-educational/assets/graph-nets/gat2.png" width="300px">
Figure Credit: Petar Velickovic.
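To see how the normalized weights $\alpha_{ij}$ come out of jraph.segment_softmax (the same primitive the GAT code uses), here is a minimal sketch with made-up logits:

```python
import jax.numpy as jnp
import jraph

# Toy edges: 0->1, 1->2, 2->0, 3->0, 0->3 (no self-edges here, for brevity).
receivers = jnp.array([1, 2, 0, 0, 3])

# One made-up attention logit e_ij per edge.
logits = jnp.array([0.5, -1.0, 2.0, 1.0, 0.3])

# The softmax is taken per receiver: edges 2 and 3 both point into node 0,
# so their weights are normalized against each other and sum to 1.
alphas = jraph.segment_softmax(logits, segment_ids=receivers, num_segments=4)
print(alphas)
# The two edges into node 0 (logits 2.0 and 1.0) get ~0.73 and ~0.27;
# nodes 1, 2 and 3 each have a single incoming edge, so its weight is 1.0.
```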
End of explanation
def attention_logit_fn(sender_attr: jnp.ndarray, receiver_attr: jnp.ndarray,
edges: jnp.ndarray) -> jnp.ndarray:
del edges
x = jnp.concatenate((sender_attr, receiver_attr), axis=1)
return hk.Linear(1)(x)
gat_layer = GAT(
attention_query_fn=lambda n: hk.Linear(8)
(n), # Applies W to the node features
attention_logit_fn=attention_logit_fn,
node_update_fn=None,
add_self_edges=True,
)
graph = build_toy_graph()
network = hk.without_apply_rng(hk.transform(gat_layer))
params = network.init(jax.random.PRNGKey(42), graph)
out_graph = network.apply(params, graph)
out_graph.nodes
Explanation: Test GAT Layer
End of explanation
def gat_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
"""Defines a GAT network for the karate club node classification task.
Args:
graph: GraphsTuple the network processes.
Returns:
output graph with updated node values.
"""
def _attention_logit_fn(sender_attr: jnp.ndarray, receiver_attr: jnp.ndarray,
edges: jnp.ndarray) -> jnp.ndarray:
del edges
x = jnp.concatenate((sender_attr, receiver_attr), axis=1)
return hk.Linear(1)(x)
gn = GAT(
attention_query_fn=lambda n: hk.Linear(8)(n),
attention_logit_fn=_attention_logit_fn,
node_update_fn=None,
add_self_edges=True)
graph = gn(graph)
gn = GAT(
attention_query_fn=lambda n: hk.Linear(8)(n),
attention_logit_fn=_attention_logit_fn,
node_update_fn=hk.Linear(2),
add_self_edges=True)
graph = gn(graph)
return graph
Explanation: Train GAT Model on Karate Club Dataset
We will now repeat the karate club experiment with a GAT network.
End of explanation
network = hk.without_apply_rng(hk.transform(gat_definition))
result = optimize_club(network, num_steps=15)
Explanation: Let's train the model!
We expect the model to reach an accuracy of about 0.97.
End of explanation
result
zacharys_karate_club = get_zacharys_karate_club()
nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club)
pos = nx.circular_layout(nx_graph)
fig = plt.figure(figsize=(15, 7))
ax1 = fig.add_subplot(121)
nx.draw(
nx_graph,
pos=pos,
with_labels=True,
node_size=500,
node_color=result.tolist(),
font_color='white')
ax1.title.set_text('Predicted Node Assignments with GAT')
gt_labels = get_ground_truth_assignments_for_zacharys_karate_club()
ax2 = fig.add_subplot(122)
nx.draw(
nx_graph,
pos=pos,
with_labels=True,
node_size=500,
node_color=gt_labels.tolist(),
font_color='white')
ax2.title.set_text('Ground-Truth Node Assignments')
fig.suptitle('Do you spot the difference? 😐', y=-0.01)
plt.show()
Explanation: The final node assignment predicted by the trained model:
End of explanation
# Download jraph version of MUTAG.
!wget -P /tmp/ https://storage.googleapis.com/dm-educational/assets/graph-nets/jraph_datasets/mutag.pickle
with open('/tmp/mutag.pickle', 'rb') as f:
mutag_ds = pickle.load(f)
Explanation: Graph Classification on MUTAG (Molecules)
In the previous section, we used our GCN and GAT networks on a node classification problem. Now, let's use the same model architectures on a graph classification task.
The main difference from our previous setup is that instead of observing individual node latents, we are now attempting to summarize them into one embedding vector, representative of the entire graph, which we then use to predict the class of this graph.
We will do this on one of the most common tasks of this type -- molecular property prediction, where molecules are represented as graphs. Nodes correspond to atoms, and edges represent the bonds between them.
We will use the MUTAG dataset for this example, a common dataset from the TUDatasets collection.
We have converted this dataset to be compatible with jraph and will download it in the cell below.
Citation for TUDatasets: Morris, Christopher, et al. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663. 2020.
End of explanation
len(mutag_ds)
# Inspect the first graph
g = mutag_ds[0]['input_graph']
print(f'Number of nodes: {g.n_node[0]}')
print(f'Number of edges: {g.n_edge[0]}')
print(f'Node features shape: {g.nodes.shape}')
print(f'Edge features shape: {g.edges.shape}')
draw_jraph_graph_structure(g)
# Target for first graph
print(f"Target: {mutag_ds[0]['target']}")
Explanation: The dataset is saved as a list of examples; each example is a dictionary containing an input_graph and its corresponding target.
End of explanation
train_mutag_ds = mutag_ds[:150]
test_mutag_ds = mutag_ds[150:]
Explanation: We see that there are 188 graphs, to be classified into one of 2 classes, representing "their mutagenic effect on a specific gram negative bacterium". Node features represent the 1-hot encoding of the atom type (0=C, 1=N, 2=O, 3=F, 4=I, 5=Cl, 6=Br). Edge features (edge_attr) represent the bond type, which we will ignore here.
Let's split the dataset to use the first 150 graphs as the training set (and the rest as the test set).
End of explanation
# Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py
def _nearest_bigger_power_of_two(x: int) -> int:
"""Computes the nearest power of two greater than x for padding."""
y = 2
while y < x:
y *= 2
return y
def pad_graph_to_nearest_power_of_two(
graphs_tuple: jraph.GraphsTuple) -> jraph.GraphsTuple:
"""Pads a batched `GraphsTuple` to the nearest power of two.
For example, if a `GraphsTuple` has 7 nodes, 5 edges and 3 graphs, this method
would pad the `GraphsTuple` nodes and edges:
7 nodes --> 8 nodes (2^3)
5 edges --> 8 edges (2^3)
And since padding is accomplished using `jraph.pad_with_graphs`, an extra
graph and node is added:
8 nodes --> 9 nodes
3 graphs --> 4 graphs
Args:
graphs_tuple: a batched `GraphsTuple` (can be batch size 1).
Returns:
A graphs_tuple batched to the nearest power of two.
"""
# Add 1 since we need at least one padding node for pad_with_graphs.
pad_nodes_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_node)) + 1
pad_edges_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_edge))
# Add 1 since we need at least one padding graph for pad_with_graphs.
# We do not pad to nearest power of two because the batch size is fixed.
pad_graphs_to = graphs_tuple.n_node.shape[0] + 1
return jraph.pad_with_graphs(graphs_tuple, pad_nodes_to, pad_edges_to,
pad_graphs_to)
Explanation: Padding Graphs to Speed Up Training
Since jax recompiles the program for every new graph size, training would otherwise spend a long time on recompilation. To address that, we pad the number of nodes and edges in the graphs to the nearest power of two. Since jax maintains a cache
of compiled programs, the compilation cost is then amortized.
End of explanation
# Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py
@jraph.concatenated_args
def edge_update_fn(feats: jnp.ndarray) -> jnp.ndarray:
"""Edge update function for graph net."""
net = hk.Sequential(
[hk.Linear(128), jax.nn.relu,
hk.Linear(128)])
return net(feats)
@jraph.concatenated_args
def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray:
"""Node update function for graph net."""
net = hk.Sequential(
[hk.Linear(128), jax.nn.relu,
hk.Linear(128)])
return net(feats)
@jraph.concatenated_args
def update_global_fn(feats: jnp.ndarray) -> jnp.ndarray:
"""Global update function for graph net."""
# MUTAG is a binary classification task, so output pos neg logits.
net = hk.Sequential(
[hk.Linear(128), jax.nn.relu,
hk.Linear(2)])
return net(feats)
def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
# Add a global parameter for graph classification.
graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1]))
embedder = jraph.GraphMapFeatures(
hk.Linear(128), hk.Linear(128), hk.Linear(128))
net = jraph.GraphNetwork(
update_node_fn=node_update_fn,
update_edge_fn=edge_update_fn,
update_global_fn=update_global_fn)
return net(embedder(graph))
Explanation: Graph Network Model Definition
We will use jraph.GraphNetwork() to build our graph model. The GraphNetwork architecture is defined in Battaglia et al. (2018).
We first define update functions for nodes, edges, and the full graph (global). We will use MLP blocks for all three.
End of explanation
def compute_loss(params: hk.Params, graph: jraph.GraphsTuple, label: jnp.ndarray,
net: jraph.GraphsTuple) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Computes loss and accuracy."""
pred_graph = net.apply(params, graph)
preds = jax.nn.log_softmax(pred_graph.globals)
targets = jax.nn.one_hot(label, 2)
# Since we have an extra 'dummy' graph in our batch due to padding, we want
# to mask out any loss associated with the dummy graph.
# Since we padded with `pad_with_graphs` we can recover the mask by using
# get_graph_padding_mask.
mask = jraph.get_graph_padding_mask(pred_graph)
# Cross entropy loss.
loss = -jnp.mean(preds * targets * mask[:, None])
# Accuracy taking into account the mask.
accuracy = jnp.sum(
(jnp.argmax(pred_graph.globals, axis=1) == label) * mask) / jnp.sum(mask)
return loss, accuracy
Explanation: Loss and Accuracy Function
Define the classification cross-entropy loss and accuracy function.
End of explanation
# Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py
def train(dataset: List[Dict[str, Any]], num_train_steps: int) -> hk.Params:
"""Training loop."""
# Transform impure `net_fn` to pure functions with hk.transform.
net = hk.without_apply_rng(hk.transform(net_fn))
# Get a candidate graph and label to initialize the network.
graph = dataset[0]['input_graph']
# Initialize the network.
params = net.init(jax.random.PRNGKey(42), graph)
# Initialize the optimizer.
opt_init, opt_update = optax.adam(1e-4)
opt_state = opt_init(params)
compute_loss_fn = functools.partial(compute_loss, net=net)
# We jit the computation of our loss, since this is the main computation.
# Using jax.jit means that we will use a single accelerator. If you want
# to use more than 1 accelerator, use jax.pmap. More information can be
# found in the jax documentation.
compute_loss_fn = jax.jit(jax.value_and_grad(
compute_loss_fn, has_aux=True))
for idx in range(num_train_steps):
graph = dataset[idx % len(dataset)]['input_graph']
label = dataset[idx % len(dataset)]['target']
# Jax will re-jit your graphnet every time a new graph shape is encountered.
# In the limit, this means a new compilation every training step, which
# will result in *extremely* slow training. To prevent this, pad each
# batch of graphs to the nearest power of two. Since jax maintains a cache
# of compiled programs, the compilation cost is amortized.
graph = pad_graph_to_nearest_power_of_two(graph)
# Since padding is implemented with pad_with_graphs, an extra graph has
# been added to the batch, which means there should be an extra label.
label = jnp.concatenate([label, jnp.array([0])])
(loss, acc), grad = compute_loss_fn(params, graph, label)
updates, opt_state = opt_update(grad, opt_state, params)
params = optax.apply_updates(params, updates)
if idx % 50 == 0:
print(f'step: {idx}, loss: {loss}, acc: {acc}')
print('Training finished')
return params
def evaluate(dataset: List[Dict[str, Any]],
params: hk.Params) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Evaluation Script."""
# Transform impure `net_fn` to pure functions with hk.transform.
net = hk.without_apply_rng(hk.transform(net_fn))
# Get a candidate graph and label to initialize the network.
graph = dataset[0]['input_graph']
accumulated_loss = 0
accumulated_accuracy = 0
compute_loss_fn = jax.jit(functools.partial(compute_loss, net=net))
for idx in range(len(dataset)):
graph = dataset[idx]['input_graph']
label = dataset[idx]['target']
graph = pad_graph_to_nearest_power_of_two(graph)
label = jnp.concatenate([label, jnp.array([0])])
loss, acc = compute_loss_fn(params, graph, label)
accumulated_accuracy += acc
accumulated_loss += loss
if idx % 100 == 0:
print(f'Evaluated {idx + 1} graphs')
print('Completed evaluation.')
# Average over all evaluated graphs.
loss = accumulated_loss / len(dataset)
accuracy = accumulated_accuracy / len(dataset)
print(f'Eval loss: {loss}, accuracy {accuracy}')
return loss, accuracy
params = train(train_mutag_ds, num_train_steps=500)
evaluate(test_mutag_ds, params)
Explanation: Training and Evaluation Functions
End of explanation
# Download jraph version of Cora.
!wget -P /tmp/ https://storage.googleapis.com/dm-educational/assets/graph-nets/jraph_datasets/cora.pickle
with open('/tmp/cora.pickle', 'rb') as f:
cora_ds = pickle.load(f)
Explanation: We converge at ~76% test accuracy. We could of course further tune the parameters to improve this result.
Link prediction on CORA (Citation Network)
The final problem type we will explore is link prediction, an instance of an edge-level task. Given a graph, our goal is to predict whether a certain edge $(u,v)$ should be present or not. This is often useful in the recommender system settings (e.g., propose new friends in a social network, propose a movie to a user).
As before, the first step is to obtain node latents $h_i$ using a GNN. In this context we will use the autoencoder language and call this GNN encoder. Then, we learn a binary classifier $f: (h_i, h_j) \to z_{i,j}$ (decoder), predicting if an edge $(i,j)$ should exist or not. While we could use a more elaborate decoder (e.g., an MLP), a common approach we will also use here is to focus on obtaining good node embeddings, and for the decoder simply use the similarity between node latents, i.e. $z_{i,j} = h_i^T h_j$.
For this problem we will use the Cora dataset, a citation graph containing 2708 scientific publications. For each publication we have a 1433-dimensional feature vector, which is a bag-of-words representation (with a small, fixed dictionary) of the paper text. The edges in this graph represent citations, and are commonly treated as undirected. Each paper is in one of seven topics (classes) so you can also use this dataset for node classification.
Similar to MUTAG, we have converted this dataset to jraph for you.
Citation for the use of the Cora dataset:
- Qing Lu and Lise Getoor. Link-Based Classification. International Conference on Machine Learning. 2003.
- Sen, Prithviraj, et al. Collective classification in network data. AI magazine 29.3. 2008.
- Dataset download link
End of explanation
def train_val_test_split_edges(graph: jraph.GraphsTuple,
val_perc: float = 0.05,
test_perc: float = 0.1):
"""Split edges in input graph into train, val and test splits.
For val and test sets, also include negative edges.
Based on torch_geometric.utils.train_test_split_edges.
"""
mask = graph.senders < graph.receivers
senders = graph.senders[mask]
receivers = graph.receivers[mask]
num_val = int(val_perc * senders.shape[0])
num_test = int(test_perc * senders.shape[0])
permuted_indices = onp.random.permutation(range(senders.shape[0]))
senders = senders[permuted_indices]
receivers = receivers[permuted_indices]
if graph.edges is not None:
edges = graph.edges[permuted_indices]
val_senders = senders[:num_val]
val_receivers = receivers[:num_val]
if graph.edges is not None:
val_edges = edges[:num_val]
test_senders = senders[num_val:num_val + num_test]
test_receivers = receivers[num_val:num_val + num_test]
if graph.edges is not None:
test_edges = edges[num_val:num_val + num_test]
train_senders = senders[num_val + num_test:]
train_receivers = receivers[num_val + num_test:]
train_edges = None
if graph.edges is not None:
train_edges = edges[num_val + num_test:]
# make training edges undirected by adding reverse edges back in
train_senders_undir = jnp.concatenate((train_senders, train_receivers))
train_receivers_undir = jnp.concatenate((train_receivers, train_senders))
train_senders = train_senders_undir
train_receivers = train_receivers_undir
# Negative edges.
num_nodes = graph.n_node[0]
# Create a negative adjacency mask, s.t. mask[i, j] = True iff edge i->j does
# not exist in the original graph.
neg_adj_mask = onp.ones((num_nodes, num_nodes), dtype=onp.uint8)
# upper triangular part
neg_adj_mask = onp.triu(neg_adj_mask, k=1)
neg_adj_mask[graph.senders, graph.receivers] = 0
neg_adj_mask = neg_adj_mask.astype(bool)
neg_senders, neg_receivers = neg_adj_mask.nonzero()
perm = onp.random.permutation(range(len(neg_senders)))
neg_senders = neg_senders[perm]
neg_receivers = neg_receivers[perm]
val_neg_senders = neg_senders[:num_val]
val_neg_receivers = neg_receivers[:num_val]
test_neg_senders = neg_senders[num_val:num_val + num_test]
test_neg_receivers = neg_receivers[num_val:num_val + num_test]
train_graph = jraph.GraphsTuple(
nodes=graph.nodes,
edges=train_edges,
senders=train_senders,
receivers=train_receivers,
n_node=graph.n_node,
n_edge=jnp.array([len(train_senders)]),
globals=graph.globals)
return train_graph, neg_adj_mask, val_senders, val_receivers, val_neg_senders, val_neg_receivers, test_senders, test_receivers, test_neg_senders, test_neg_receivers
Explanation: Splitting Edges and Adding "Negative" Edges
For the link prediction task, we split the edges into train, val and test sets and also add "negative" examples (edges that do not correspond to a citation). We will ignore the topic classes.
For the validation and test splits, we add the same number of existing edges ("positive examples") and non-existing edges ("negative examples").
In contrast to the validation and test splits, the training split only contains positive examples (set $T_+$). The $|T_+|$ negative examples to be used during training will be sampled ad hoc in each epoch and uniformly at random from all edges that are not in $T_+$. This allows the model to see a wider range of negative examples.
End of explanation
graph = cora_ds[0]['input_graph']
train_graph, neg_adj_mask, val_pos_senders, val_pos_receivers, val_neg_senders, val_neg_receivers, test_pos_senders, test_pos_receivers, test_neg_senders, test_neg_receivers = train_val_test_split_edges(graph)
print(f'Train set: {train_graph.senders.shape[0]} positive edges, we will sample the same number of negative edges at runtime')
print(f'Val set: {val_pos_senders.shape[0]} positive edges, {val_neg_senders.shape[0]} negative edges')
print(f'Test set: {test_pos_senders.shape[0]} positive edges, {test_neg_senders.shape[0]} negative edges')
print(f'Negative adjacency mask shape: {neg_adj_mask.shape}')
print(f'Number of negative edges to sample from: {neg_adj_mask.sum()}')
Explanation: Test the Edge Splitting Function
End of explanation
@jraph.concatenated_args
def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray:
Node update function for graph net.
net = hk.Sequential([hk.Linear(128), jax.nn.relu, hk.Linear(64)])
return net(feats)
def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
"""Network definition."""
graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1]))
net = jraph.GraphNetwork(
update_node_fn=node_update_fn, update_edge_fn=None, update_global_fn=None)
return net(graph)
def decode(pred_graph: jraph.GraphsTuple, senders: jnp.ndarray,
receivers: jnp.ndarray) -> jnp.ndarray:
"""Given a set of candidate edges, take dot product of respective nodes.
Args:
pred_graph: input graph.
senders: Senders of candidate edges.
receivers: Receivers of candidate edges.
Returns:
For each edge, computes dot product of the features of the two nodes.
"""
return jnp.squeeze(
jnp.sum(pred_graph.nodes[senders] * pred_graph.nodes[receivers], axis=1))
Explanation: Note: It will often happen during training that as a negative example, we sample an initially existing edge (that is now e.g. a positive example in the test set). We are however not allowed to check for this, as we should be unaware of the existence of test edges during training.
Assuming our dot product decoder, we are essentially attempting to bring the latents of endpoints of edges from $T_+$ closer together, and make the latents of all other pairs of nodes as distant as possible. As this is impossible to fully satisfy, the hope is that the model will "fail" to distance those pairs of nodes where the edges should actually exist (positive examples from the test set).
Graph Network Model Definition
We will use jraph.GraphNetwork to build our graph net model.
We first define update functions for node features. We are not using edge or global features for this task.
End of explanation
from sklearn.metrics import roc_auc_score
def compute_bce_with_logits_loss(x: jnp.ndarray, y: jnp.ndarray) -> jnp.ndarray:
"""Computes binary cross-entropy with logits loss.
Combines sigmoid and BCE, and uses log-sum-exp trick for numerical stability.
See https://stackoverflow.com/a/66909858 if you want to learn more.
Args:
x: Predictions (logits).
y: Labels.
Returns:
Binary cross-entropy loss with mean aggregation.
"""
max_val = jnp.clip(x, 0, None)
loss = x - x * y + max_val + jnp.log(
jnp.exp(-max_val) + jnp.exp((-x - max_val)))
return loss.mean()
def compute_loss(params: hk.Params, graph: jraph.GraphsTuple,
senders: jnp.ndarray, receivers: jnp.ndarray,
labels: jnp.ndarray,
net: hk.Transformed) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Computes loss."""
pred_graph = net.apply(params, graph)
preds = decode(pred_graph, senders, receivers)
loss = compute_bce_with_logits_loss(preds, labels)
return loss, preds
def compute_roc_auc_score(preds: jnp.ndarray,
labels: jnp.ndarray) -> jnp.ndarray:
"""Computes roc auc (area under the curve) score for classification."""
s = jax.nn.sigmoid(preds)
roc_auc = roc_auc_score(labels, s)
return roc_auc
Explanation: To evaluate our model, we first apply the sigmoid function to the obtained dot products to get a score $s_{i,j} \in [0,1]$ for each edge. Now, we can pick a threshold $\tau$ and say that we predict all pairs $(i,j)$ s.t. $s_{i,j} \geq \tau$ as edges (and all the rest as non-edges).
Loss and ROC-AUC-Metric Function
Define the binary classification cross-entropy loss.
To aggregate the results over all choices of $\tau$, we will use ROC-AUC (the area under the ROC curve) as our evaluation metric.
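As a tiny, made-up illustration (the logits and labels below are not from the Cora data), this is what a single choice of $\tau$ looks like next to the threshold-free ROC-AUC:

```python
import jax
import jax.numpy as jnp
import numpy as onp
from sklearn.metrics import roc_auc_score

# Made-up logits for four candidate edges and their ground-truth labels.
preds = jnp.array([2.0, -1.0, 0.5, -0.2])
labels = jnp.array([1.0, 0.0, 1.0, 0.0])

# Scores s_ij in [0, 1].
scores = jax.nn.sigmoid(preds)

# Pick one threshold tau and predict an edge wherever the score exceeds it.
tau = 0.5
predicted_edges = (scores >= tau).astype(jnp.float32)
print((predicted_edges == labels).mean())  # accuracy at this particular tau

# ROC-AUC aggregates the ranking quality over all thresholds at once.
print(roc_auc_score(onp.asarray(labels), onp.asarray(scores)))
```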
End of explanation
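To make the thresholding concrete, here is a minimal sketch (with made-up logits and labels) of turning scores into hard edge predictions for one particular threshold $\tau$; ROC-AUC simply aggregates performance over all such thresholds:
# Sketch only: hard predictions at a single threshold tau (values are made up).
logits = jnp.array([2.3, -0.7, 0.1, 1.5])
labels = jnp.array([1, 0, 0, 1])
tau = 0.5
scores = jax.nn.sigmoid(logits)
hard_preds = (scores >= tau).astype(jnp.int32)
accuracy_at_tau = (hard_preds == labels).mean()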
def negative_sampling(
graph: jraph.GraphsTuple, num_neg_samples: int,
key: jnp.DeviceArray) -> Tuple[jnp.DeviceArray, jnp.DeviceArray]:
"""Samples negative edges, i.e. edges that don't exist in the input graph."""
num_nodes = graph.n_node[0]
total_possible_edges = num_nodes**2
# convert 2D edge indices to 1D representation.
pos_idx = graph.senders * num_nodes + graph.receivers
# Percentage to oversample edges, so most likely will sample enough neg edges.
alpha = jnp.abs(1 / (1 - 1.1 *
(graph.senders.shape[0] / total_possible_edges)))
perm = jax.random.randint(
key,
shape=(int(alpha * num_neg_samples),),
minval=0,
maxval=total_possible_edges,
dtype=jnp.uint32)
# mask where sampled edges are positive edges.
mask = jnp.isin(perm, pos_idx)
# remove positive edges.
perm = perm[~mask][:num_neg_samples]
# convert 1d back to 2d edge indices.
neg_senders = perm // num_nodes
neg_receivers = perm % num_nodes
return neg_senders, neg_receivers
Explanation: Helper function for sampling negative edges during training.
End of explanation
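One way to sanity-check this helper (an illustrative aside; the toy graph below is not the Cora data) is to call it on a tiny graph and verify that none of the returned pairs coincide with existing edges:
# Toy 4-node graph with 3 directed edges; node features are irrelevant here.
toy_graph = jraph.GraphsTuple(
    nodes=jnp.ones((4, 2)), edges=None, globals=None,
    senders=jnp.array([0, 1, 2]), receivers=jnp.array([1, 2, 3]),
    n_node=jnp.array([4]), n_edge=jnp.array([3]))
neg_s, neg_r = negative_sampling(toy_graph, num_neg_samples=3, key=jax.random.PRNGKey(0))
print(neg_s, neg_r)  # none of these pairs should appear among the positive edges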
def train(dataset: List[Dict[str, Any]], num_epochs: int) -> hk.Params:
"""Training loop."""
key = jax.random.PRNGKey(42)
# Transform impure `net_fn` to pure functions with hk.transform.
net = hk.without_apply_rng(hk.transform(net_fn))
# Get a candidate graph and label to initialize the network.
graph = dataset[0]['input_graph']
train_graph, _, val_pos_s, val_pos_r, val_neg_s, val_neg_r, test_pos_s, \
test_pos_r, test_neg_s, test_neg_r = train_val_test_split_edges(
graph)
# Prepare the validation and test data.
val_senders = jnp.concatenate((val_pos_s, val_neg_s))
val_receivers = jnp.concatenate((val_pos_r, val_neg_r))
val_labels = jnp.concatenate(
(jnp.ones(len(val_pos_s)), jnp.zeros(len(val_neg_s))))
test_senders = jnp.concatenate((test_pos_s, test_neg_s))
test_receivers = jnp.concatenate((test_pos_r, test_neg_r))
test_labels = jnp.concatenate(
(jnp.ones(len(test_pos_s)), jnp.zeros(len(test_neg_s))))
# Initialize the network.
params = net.init(key, train_graph)
# Initialize the optimizer.
opt_init, opt_update = optax.adam(1e-4)
opt_state = opt_init(params)
compute_loss_fn = functools.partial(compute_loss, net=net)
# We jit the computation of our loss, since this is the main computation.
# Using jax.jit means that we will use a single accelerator. If you want
# to use more than 1 accelerator, use jax.pmap. More information can be
# found in the jax documentation.
compute_loss_fn = jax.jit(jax.value_and_grad(compute_loss_fn, has_aux=True))
for epoch in range(num_epochs):
num_neg_samples = train_graph.senders.shape[0]
train_neg_senders, train_neg_receivers = negative_sampling(
train_graph, num_neg_samples=num_neg_samples, key=key)
train_senders = jnp.concatenate((train_graph.senders, train_neg_senders))
train_receivers = jnp.concatenate(
(train_graph.receivers, train_neg_receivers))
train_labels = jnp.concatenate(
(jnp.ones(len(train_graph.senders)), jnp.zeros(len(train_neg_senders))))
(train_loss,
train_preds), grad = compute_loss_fn(params, train_graph, train_senders,
train_receivers, train_labels)
updates, opt_state = opt_update(grad, opt_state, params)
params = optax.apply_updates(params, updates)
if epoch % 10 == 0 or epoch == (num_epochs - 1):
train_roc_auc = compute_roc_auc_score(train_preds, train_labels)
val_loss, val_preds = compute_loss(params, train_graph, val_senders,
val_receivers, val_labels, net)
val_roc_auc = compute_roc_auc_score(val_preds, val_labels)
print(f'epoch: {epoch}, train_loss: {train_loss:.3f}, '
f'train_roc_auc: {train_roc_auc:.3f}, val_loss: {val_loss:.3f}, '
f'val_roc_auc: {val_roc_auc:.3f}')
test_loss, test_preds = compute_loss(params, train_graph, test_senders,
test_receivers, test_labels, net)
test_roc_auc = compute_roc_auc_score(test_preds, test_labels)
print('Training finished')
print(
f'epoch: {epoch}, test_loss: {test_loss:.3f}, test_roc_auc: {test_roc_auc:.3f}'
)
return params
Explanation: Let's write the training loop:
End of explanation
params = train(cora_ds, num_epochs=200)
Explanation: Let's train the model! We expect the model to reach roughly test_roc_auc of 0.84.
(Note that ROC-AUC is a scalar between 0 and 1; a random classifier scores about 0.5, while 1 corresponds to a perfect classifier.)
End of explanation |
9,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: Step 0 - hyperparams
vocab_size is all the potential words you could have (classification for translation case)
and max sequence length are the SAME thing
decoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now
Step2: Step 1 - collect data (and/or generate them)
Step3: Step 2 - Build model
Step4: Conclusion
There is no way this graph makes much sense, but let's give it a try to see how bad it really is
Step 3 training the network
RECALL
Step5: Conclusion
The initial price difference of the predictions is still not as good as we would expect; perhaps using an EOS as they do in machine translation models is not the best architecture for our case
GRU cell - without EOS
Step6: Conclusion
???
GRU cell - without EOS - 800 units | Python Code:
from __future__ import division
import tensorflow as tf
from os import path
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from mylibs.tf_helper import getDefaultGPUconfig
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from common import get_or_run_nn
from data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider
from models.price_history_seq2seq_native import PriceHistorySeq2SeqNative
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
%matplotlib inline
Explanation: https://www.youtube.com/watch?v=ElmBrKyMXxs
https://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb
https://github.com/ematvey/tensorflow-seq2seq-tutorials
End of explanation
num_epochs = 10
num_features = 1
num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 47
#trunc_backprop_len = ??
Explanation: Step 0 - hyperparams
vocab_size is all the potential words you could have (classification for translation case)
and max sequence length are the SAME thing
decoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now
End of explanation
npz_path = '../price_history_03_dp_60to30_from_fixed_len.npz'
dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_path, batch_size=batch_size, with_EOS=False)
dp.inputs.shape, dp.targets.shape
aa, bb = dp.next()
aa.shape, bb.shape
Explanation: Step 1 - collect data (and/or generate them)
End of explanation
model = PriceHistorySeq2SeqNative(rng=random_state, dtype=dtype, config=config, with_EOS=False)
graph = model.getGraph(batch_size=batch_size,
num_units=num_units,
input_len=input_len,
target_len=target_len)
#show_graph(graph)
Explanation: Step 2 - Build model
End of explanation
model = PriceHistorySeq2SeqNative(rng=random_state, dtype=dtype, config=config, with_EOS=False)
rnn_cell = PriceHistorySeq2SeqNative.RNN_CELLS.BASIC_RNN
num_epochs = 20
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='008_rnn_seq2seq_native_noEOS_60to30_20epochs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
Explanation: Conclusion
There is no way this graph makes much sense, but let's give it a try to see how bad it really is
Step 3 training the network
RECALL: the baseline is around 4 for the Huber loss on the current problem; anything above 4 should be considered a major error
Basic RNN cell (without EOS)
End of explanation
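For reference, the Huber loss used as the baseline metric above can be computed as in the sketch below; this is not from the original notebook, and delta=1.0 is an assumption (TensorFlow's default):
# Sketch of the Huber loss referenced by the RECALL note above (delta assumed to be 1.0).
def huber_loss(y_true, y_pred, delta=1.0):
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    quad = np.minimum(err, delta)  # quadratic region capped at delta
    return np.mean(0.5 * quad ** 2 + delta * (err - quad))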
rnn_cell = PriceHistorySeq2SeqNative.RNN_CELLS.GRU
num_epochs = 50
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
#dyn_stats = experiment()
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='008_gru_seq2seq_native_noEOS_60to30_50epochs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
Explanation: Conclusion
The initial price difference of the predictions is still not as good as we would expect; perhaps using an EOS as they do in machine translation models is not the best architecture for our case
GRU cell - without EOS
End of explanation
rnn_cell = PriceHistorySeq2SeqNative.RNN_CELLS.GRU
num_epochs = 30
num_units = 800
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
#dyn_stats = experiment()
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='008_gru_seq2seq_native_noEOS_60to30_30epochs_800units')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
Explanation: Conclusion
???
GRU cell - without EOS - 800 units
End of explanation |
9,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PerfForesightConsumerType
Step1: The module HARK.ConsumptionSaving.ConsIndShockModel concerns consumption-saving models with idiosyncratic shocks to (non-capital) income. All of the models assume CRRA utility with geometric discounting, no bequest motive, and income shocks are fully transitory or fully permanent.
ConsIndShockModel currently includes three models
Step2: Solving and examining the solution of the perfect foresight model
With the dictionary we have just defined, we can create an instance of PerfForesightConsumerType by passing the dictionary to the class (as if the class were a function). This instance can then be solved by invoking its solve method.
Step3: The $\texttt{solve}$ method fills in the instance's attribute solution as a time-varying list of solutions to each period of the consumer's problem. In this case, solution will be a list with exactly one instance of the class ConsumerSolution, representing the solution to the infinite horizon model we specified.
Step4: Each element of solution has a few attributes. To see all of them, we can use the \texttt{vars} built in function
Step5: The two most important attributes of a single period solution of this model are the (normalized) consumption function $\texttt{cFunc}$ and the (normalized) value function $\texttt{vFunc}$. Let's plot those functions near the lower bound of the permissible state space (the attribute $\texttt{mNrmMin}$ tells us the lower bound of $m_t$ where the consumption function is defined).
Step6: An element of solution also includes the (normalized) marginal value function $\texttt{vPfunc}$, and the lower and upper bounds of the marginal propensity to consume (MPC) $\texttt{MPCmin}$ and $\texttt{MPCmax}$. Note that with a linear consumption function, the MPC is constant, so its lower and upper bound are identical.
Liquidity constrained perfect foresight example
Without an artificial borrowing constraint, a perfect foresight consumer is free to borrow against the PDV of his entire future stream of labor income-- his "human wealth" $\texttt{hNrm}$-- and he will consume a constant proportion of his total wealth (market resources plus human wealth). If we introduce an artificial borrowing constraint, both of these features vanish. In the cell below, we define a parameter dictionary that prevents the consumer from borrowing at all, create and solve a new instance of PerfForesightConsumerType with it, and then plot its consumption function.
Step7: Simulating the perfect foresight consumer model
Suppose we wanted to simulate many consumers who share the parameter values that we passed to PerfForesightConsumerType-- an ex ante homogeneous type of consumers. To do this, our instance would have to know how many agents there are of this type, as well as their initial levels of assets $a_t$ and permanent income $P_t$.
Setting simulation parameters
Let's fill in this information by passing another dictionary to PFexample with simulation parameters. The table below lists the parameters that an instance of PerfForesightConsumerType needs in order to successfully simulate its model using the simulate method.
| Description | Code | Example value |
|
Step8: To generate simulated data, we need to specify which variables we want to track the "history" of for this instance. To do so, we set the track_vars attribute of our PerfForesightConsumerType instance to be a list of strings with the simulation variables we want to track.
In this model, valid arguments to track_vars include $\texttt{mNrm}$, $\texttt{cNrm}$, $\texttt{aNrm}$, and $\texttt{pLvl}$. Because this model has no idiosyncratic shocks, our simulated data will be quite boring.
Generating simulated data
Before simulating, the initialize_sim method must be invoked. This resets our instance back to its initial state, drawing a set of initial $\texttt{aNrm}$ and $\texttt{pLvl}$ values from the specified distributions and storing them in the attributes $\texttt{aNrmNow_init}$ and $\texttt{pLvlNow_init}$. It also resets this instance's internal random number generator, so that the same initial states will be set every time initialize_sim is called. In models with non-trivial shocks, this also ensures that the same sequence of shocks will be generated on every simulation run.
Finally, the simulate method can be called.
Step9: A perfect foresight consumer can borrow against the PDV of his future income-- his human wealth-- and thus as time goes on, our simulated agents approach the (very negative) steady state level of $m_t$ while being steadily replaced with consumers with roughly $m_t=1$.
The slight wiggles in the plotted curve are due to consumers randomly dying and being replaced; their replacement will have an initial state drawn from the distributions specified by the user. To see the current distribution of ages, we can look at the attribute $\texttt{t_age}$.
Step10: The distribution is (discretely) exponential, with a point mass at 120 with consumers who have survived since the beginning of the simulation.
One might wonder why HARK requires users to call initialize_sim before calling simulate | Python Code:
# Initial imports and notebook setup, click arrow to show
from copy import copy
import matplotlib.pyplot as plt
import numpy as np
from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType
from HARK.utilities import plot_funcs
mystr = lambda number: "{:.4f}".format(number)
Explanation: PerfForesightConsumerType: Perfect foresight consumption-saving
End of explanation
PerfForesightDict = {
# Parameters actually used in the solution method
"CRRA": 2.0, # Coefficient of relative risk aversion
"Rfree": 1.03, # Interest factor on assets
"DiscFac": 0.96, # Default intertemporal discount factor
"LivPrb": [0.98], # Survival probability
"PermGroFac": [1.01], # Permanent income growth factor
"BoroCnstArt": None, # Artificial borrowing constraint
"aXtraCount": 200, # Maximum number of gridpoints in consumption function
# Parameters that characterize the nature of time
"T_cycle": 1, # Number of periods in the cycle for this agent type
"cycles": 0, # Number of times the cycle occurs (0 --> infinitely repeated)
}
Explanation: The module HARK.ConsumptionSaving.ConsIndShockModel concerns consumption-saving models with idiosyncratic shocks to (non-capital) income. All of the models assume CRRA utility with geometric discounting, no bequest motive, and income shocks are fully transitory or fully permanent.
ConsIndShockModel currently includes three models:
1. A very basic "perfect foresight" model with no uncertainty.
2. A model with risk over transitory and permanent income shocks.
3. The model described in (2), with an interest rate for debt that differs from the interest rate for savings.
This notebook provides documentation for the first of these three models.
$\newcommand{\CRRA}{\rho}$
$\newcommand{\DiePrb}{\mathsf{D}}$
$\newcommand{\PermGroFac}{\Gamma}$
$\newcommand{\Rfree}{\mathsf{R}}$
$\newcommand{\DiscFac}{\beta}$
Statement of perfect foresight consumption-saving model
The PerfForesightConsumerType class represents the problem of a consumer with Constant Relative Risk Aversion utility
${\CRRA}$
\begin{equation}
U(C) = \frac{C^{1-\CRRA}}{1-\rho},
\end{equation}
who has perfect foresight about everything except whether he will die between the end of period $t$ and the beginning of period $t+1$, which occurs with probability $\DiePrb_{t+1}$. Permanent labor income $P_t$ grows from period $t$ to period $t+1$ by factor $\PermGroFac_{t+1}$.
At the beginning of period $t$, the consumer has an amount of market resources $M_t$ (which includes both market wealth and current income) and must choose how much of those resources to consume $C_t$ and how much to retain in a riskless asset $A_t$, which will earn return factor $\Rfree$. The consumer cannot necessarily borrow arbitrarily; instead, he might be constrained to have a wealth-to-income ratio at least as great as some "artificial borrowing constraint" $\underline{a} \leq 0$.
The agent's flow of future utility $U(C_{t+n})$ from consumption is geometrically discounted by factor $\DiscFac$ per period. If the consumer dies, he receives zero utility flow for the rest of time.
The agent's problem can be written in Bellman form as:
\begin{eqnarray}
V_t(M_t,P_t) &=& \max_{C_t}~U(C_t) ~+ \DiscFac (1 - \DiePrb_{t+1}) V_{t+1}(M_{t+1},P_{t+1}), \\
& s.t. & \\
A_t &=& M_t - C_t, \\
A_t/P_t &\geq& \underline{a}, \\
M_{t+1} &=& \Rfree A_t + Y_{t+1}, \\
Y_{t+1} &=& P_{t+1}, \\
P_{t+1} &=& \PermGroFac_{t+1} P_t.
\end{eqnarray}
The consumer's problem is characterized by a coefficient of relative risk aversion $\CRRA$, an intertemporal discount factor $\DiscFac$, an interest factor $\Rfree$, and age-varying sequences of the permanent income growth factor $\PermGroFac_t$ and survival probability $(1 - \DiePrb_t)$.
While it does not reduce the computational complexity of the problem (as permanent income is deterministic, given its initial condition $P_0$), HARK represents this problem with normalized variables (represented in lower case), dividing all real variables by permanent income $P_t$ and utility levels by $P_t^{1-\CRRA}$. The Bellman form of the model thus reduces to:
\begin{eqnarray}
v_t(m_t) &=& \max_{c_t}~U(c_t) ~+ \DiscFac (1 - \DiePrb_{t+1}) \PermGroFac_{t+1}^{1-\CRRA} v_{t+1}(m_{t+1}), \\
& s.t. & \\
a_t &=& m_t - c_t, \\
a_t &\geq& \underline{a}, \\
m_{t+1} &=& (\Rfree/\PermGroFac_{t+1}) a_t + 1.
\end{eqnarray}
Solution method for PerfForesightConsumerType
Because of the assumptions of CRRA utility, no risk other than mortality, and no artificial borrowing constraint, the problem has a closed form solution. In fact, the consumption function is perfectly linear, and the value function composed with the inverse utility function is also linear. The mathematical solution of this model is described in detail in the lecture notes PerfForesightCRRA.
The one period problem for this model is solved by the function solveConsPerfForesight, which creates an instance of the class ConsPerfForesightSolver. To construct an instance of the class PerfForesightConsumerType, several parameters must be passed to its constructor as shown in the table below.
Example parameter values to construct an instance of PerfForesightConsumerType
| Parameter | Description | Code | Example value | Time-varying? |
| :---: | --- | --- | --- | --- |
| $\DiscFac$ |Intertemporal discount factor | $\texttt{DiscFac}$ | $0.96$ | |
| $\CRRA $ |Coefficient of relative risk aversion | $\texttt{CRRA}$ | $2.0$ | |
| $\Rfree$ | Risk free interest factor | $\texttt{Rfree}$ | $1.03$ | |
| $1 - \DiePrb_{t+1}$ |Survival probability | $\texttt{LivPrb}$ | $[0.98]$ | $\surd$ |
|$\PermGroFac_{t+1}$|Permanent income growth factor|$\texttt{PermGroFac}$| $[1.01]$ | $\surd$ |
|$\underline{a}$|Artificial borrowing constraint|$\texttt{BoroCnstArt}$| $None$ | |
|$(none)$|Maximum number of gridpoints in consumption function |$\texttt{aXtraCount}$| $200$ | |
|$T$| Number of periods in this type's "cycle" |$\texttt{T_cycle}$| $1$ | |
|(none)| Number of times the "cycle" occurs |$\texttt{cycles}$| $0$ | |
Note that the survival probability and income growth factor have time subscripts; likewise, the example values for these parameters are lists rather than simply single floats. This is because those parameters are time-varying: their values can depend on which period of the problem the agent is in. All time-varying parameters must be specified as lists, even if the same value occurs in each period for this type.
The artificial borrowing constraint can be any non-positive float, or it can be None to indicate no artificial borrowing constraint. The maximum number of gridpoints in the consumption function is only relevant if the borrowing constraint is not None; without an upper bound on the number of gridpoints, kinks in the consumption function will propagate indefinitely in an infinite horizon model if there is a borrowing constraint, eventually resulting in an overflow error. If there is no artificial borrowing constraint, then the number of gridpoints used to represent the consumption function is always exactly two.
The last two parameters in the table specify the "nature of time" for this type: the number of (non-terminal) periods in this type's "cycle", and the number of times that the "cycle" occurs. Every subclass of AgentType uses these two code parameters to define the nature of time. Here, T_cycle has the value $1$, indicating that there is exactly one period in the cycle, while cycles is $0$, indicating that the cycle is repeated an infinite number of times-- it is an infinite horizon model, with the same "kind" of period repeated over and over.
In contrast, we could instead specify a life-cycle model by setting $\texttt{cycles}$ to $1$, and specifying age-varying sequences of income growth and survival probability (a brief sketch of such a parameterization appears below). In all cases, the number of elements in each time-varying parameter should exactly equal $\texttt{T_cycle}$.
The parameter $\texttt{AgentCount}$ specifies how many consumers there are of this type-- how many individuals have these exact parameter values and are ex ante homogeneous. This information is not relevant for solving the model, but is needed in order to simulate a population of agents, introducing ex post heterogeneity through idiosyncratic shocks. Of course, simulating a perfect foresight model is quite boring, as there are no idiosyncratic shocks other than death!
The cell below defines a dictionary that can be passed to the constructor method for PerfForesightConsumerType, with the values from the table here.
End of explanation
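To make the life-cycle remark above concrete, a hypothetical three-period variant (illustrative only; it is not solved in this notebook and the values are made up) would look like:
# Hypothetical life-cycle parameterization -- illustrative values only.
LifeCycleDict = copy(PerfForesightDict)
LifeCycleDict["T_cycle"] = 3                      # three non-terminal periods
LifeCycleDict["cycles"] = 1                       # the cycle happens exactly once
LifeCycleDict["LivPrb"] = [0.99, 0.98, 0.97]      # one entry per period
LifeCycleDict["PermGroFac"] = [1.02, 1.01, 1.00]  # one entry per period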
PFexample = PerfForesightConsumerType(**PerfForesightDict)
PFexample.cycles = 0
PFexample.solve()
Explanation: Solving and examining the solution of the perfect foresight model
With the dictionary we have just defined, we can create an instance of PerfForesightConsumerType by passing the dictionary to the class (as if the class were a function). This instance can then be solved by invoking its solve method.
End of explanation
print(PFexample.solution)
Explanation: The $\texttt{solve}$ method fills in the instance's attribute solution as a time-varying list of solutions to each period of the consumer's problem. In this case, solution will be a list with exactly one instance of the class ConsumerSolution, representing the solution to the infinite horizon model we specified.
End of explanation
print(vars(PFexample.solution[0]))
Explanation: Each element of solution has a few attributes. To see all of them, we can use the \texttt{vars} built in function:
the consumption functions reside in the attribute $\texttt{cFunc}$ of each element of $\texttt{ConsumerType.solution}$.
End of explanation
print("Linear perfect foresight consumption function:")
mMin = PFexample.solution[0].mNrmMin
plot_funcs(PFexample.solution[0].cFunc, mMin, mMin + 10.0)
print("Perfect foresight value function:")
plot_funcs(PFexample.solution[0].vFunc, mMin + 0.1, mMin + 10.1)
Explanation: The two most important attributes of a single period solution of this model are the (normalized) consumption function $\texttt{cFunc}$ and the (normalized) value function $\texttt{vFunc}$. Let's plot those functions near the lower bound of the permissible state space (the attribute $\texttt{mNrmMin}$ tells us the lower bound of $m_t$ where the consumption function is defined).
End of explanation
LiqConstrDict = copy(PerfForesightDict)
LiqConstrDict["BoroCnstArt"] = 0.0 # Set the artificial borrowing constraint to zero
LiqConstrExample = PerfForesightConsumerType(**LiqConstrDict)
LiqConstrExample.cycles = 0 # Make this type be infinite horizon
LiqConstrExample.solve()
print("Liquidity constrained perfect foresight consumption function:")
plot_funcs(LiqConstrExample.solution[0].cFunc, 0.0, 10.0)
# At this time, the value function for a perfect foresight consumer with an artificial borrowing constraint is not computed nor included as part of its $\texttt{solution}$.
Explanation: An element of solution also includes the (normalized) marginal value function $\texttt{vPfunc}$, and the lower and upper bounds of the marginal propensity to consume (MPC) $\texttt{MPCmin}$ and $\texttt{MPCmax}$. Note that with a linear consumption function, the MPC is constant, so its lower and upper bound are identical.
Liquidity constrained perfect foresight example
Without an artificial borrowing constraint, a perfect foresight consumer is free to borrow against the PDV of his entire future stream of labor income-- his "human wealth" $\texttt{hNrm}$-- and he will consume a constant proportion of his total wealth (market resources plus human wealth). If we introduce an artificial borrowing constraint, both of these features vanish. In the cell below, we define a parameter dictionary that prevents the consumer from borrowing at all, create and solve a new instance of PerfForesightConsumerType with it, and then plot its consumption function.
End of explanation
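As a quick numerical aside (not in the original notebook), evaluating both consumption functions at a few values of $m_t$ shows that the constrained consumer consumes weakly less, with the difference concentrated at low market resources:
# Compare unconstrained vs. liquidity-constrained consumption at a few gridpoints.
for m in [0.5, 1.0, 2.0, 5.0]:
    print(m, PFexample.solution[0].cFunc(m), LiqConstrExample.solution[0].cFunc(m))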
SimulationParams = {
"AgentCount": 10000, # Number of agents of this type
"T_sim": 120, # Number of periods to simulate
"aNrmInitMean": -6.0, # Mean of log initial assets
"aNrmInitStd": 1.0, # Standard deviation of log initial assets
"pLvlInitMean": 0.0, # Mean of log initial permanent income
"pLvlInitStd": 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg": 1.0, # Aggregate permanent income growth factor
"T_age": None, # Age after which simulated agents are automatically killed
}
PFexample.assign_parameters(**SimulationParams)
Explanation: Simulating the perfect foresight consumer model
Suppose we wanted to simulate many consumers who share the parameter values that we passed to PerfForesightConsumerType-- an ex ante homogeneous type of consumers. To do this, our instance would have to know how many agents there are of this type, as well as their initial levels of assets $a_t$ and permanent income $P_t$.
Setting simulation parameters
Let's fill in this information by passing another dictionary to PFexample with simulation parameters. The table below lists the parameters that an instance of PerfForesightConsumerType needs in order to successfully simulate its model using the simulate method.
| Description | Code | Example value |
| :---: | --- | --- |
| Number of consumers of this type | $\texttt{AgentCount}$ | $10000$ |
| Number of periods to simulate | $\texttt{T_sim}$ | $120$ |
| Mean of initial log (normalized) assets | $\texttt{aNrmInitMean}$ | $-6.0$ |
| Stdev of initial log (normalized) assets | $\texttt{aNrmInitStd}$ | $1.0$ |
| Mean of initial log permanent income | $\texttt{pLvlInitMean}$ | $0.0$ |
| Stdev of initial log permanent income | $\texttt{pLvlInitStd}$ | $0.0$ |
| Aggregate productivity growth factor | $\texttt{PermGroFacAgg}$ | $1.0$ |
| Age after which consumers are automatically killed | $\texttt{T_age}$ | $None$ |
We have specified the model so that initial assets and permanent income are both distributed lognormally, with mean and standard deviation of the underlying normal distributions provided by the user.
The parameter $\texttt{PermGroFacAgg}$ exists for compatibility with more advanced models that employ aggregate productivity shocks; it can simply be set to 1.
In infinite horizon models, it might be useful to prevent agents from living extraordinarily long lives through a fortuitous sequence of mortality shocks. We have thus provided the option of setting $\texttt{T_age}$ to specify the maximum number of periods that a consumer can live before they are automatically killed (and replaced with a new consumer with initial state drawn from the specified distributions). This can be turned off by setting it to None.
The cell below puts these parameters into a dictionary, then gives them to PFexample. Note that all of these parameters could have been passed as part of the original dictionary; we omitted them above for simplicity.
End of explanation
PFexample.track_vars = ['mNrm']
PFexample.initialize_sim()
PFexample.simulate()
# Each simulation variable $\texttt{X}$ named in $\texttt{track_vars}$ will have the *history* of that variable for each agent stored in $\texttt{history['X']}$ as an array of shape $(\texttt{T_sim},\texttt{AgentCount})$. To see that the simulation worked as intended, we can plot the mean of $m_t$ in each simulated period:
plt.plot(np.mean(PFexample.history['mNrm'], axis=1))
plt.xlabel("Time")
plt.ylabel("Mean normalized market resources")
plt.show()
Explanation: To generate simulated data, we need to specify which variables we want to track the "history" of for this instance. To do so, we set the track_vars attribute of our PerfForesightConsumerType instance to be a list of strings with the simulation variables we want to track.
In this model, valid arguments to track_vars include $\texttt{mNrm}$, $\texttt{cNrm}$, $\texttt{aNrm}$, and $\texttt{pLvl}$. Because this model has no idiosyncratic shocks, our simulated data will be quite boring.
Generating simulated data
Before simulating, the initialize_sim method must be invoked. This resets our instance back to its initial state, drawing a set of initial $\texttt{aNrm}$ and $\texttt{pLvl}$ values from the specified distributions and storing them in the attributes $\texttt{aNrmNow_init}$ and $\texttt{pLvlNow_init}$. It also resets this instance's internal random number generator, so that the same initial states will be set every time initialize_sim is called. In models with non-trivial shocks, this also ensures that the same sequence of shocks will be generated on every simulation run.
Finally, the simulate method can be called.
End of explanation
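As a small illustrative check, the tracked history has one row per simulated period and one column per agent:
print(PFexample.history['mNrm'].shape)  # expected: (T_sim, AgentCount) = (120, 10000)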
N = PFexample.AgentCount
F = np.linspace(0.0, 1.0, N)
plt.plot(np.sort(PFexample.t_age), F)
plt.xlabel("Current age of consumers")
plt.ylabel("Cumulative distribution")
plt.show()
Explanation: A perfect foresight consumer can borrow against the PDV of his future income-- his human wealth-- and thus as time goes on, our simulated agents approach the (very negative) steady state level of $m_t$ while being steadily replaced with consumers with roughly $m_t=1$.
The slight wiggles in the plotted curve are due to consumers randomly dying and being replaced; their replacement will have an initial state drawn from the distributions specified by the user. To see the current distribution of ages, we can look at the attribute $\texttt{t_age}$.
End of explanation
PFexample.initialize_sim()
PFexample.simulate(80)
PFexample.state_prev['aNrm'] -= 5.0 # Adjust all simulated consumers' assets downward by 5
PFexample.simulate(40)
plt.plot(np.mean(PFexample.history['mNrm'], axis=1))
plt.xlabel("Time")
plt.ylabel("Mean normalized market resources")
plt.show()
Explanation: The distribution is (discretely) exponential, with a point mass at 120 made up of consumers who have survived since the beginning of the simulation.
One might wonder why HARK requires users to call initialize_sim before calling simulate: Why doesn't simulate just call initialize_sim as its first step? We have broken up these two steps so that users can simulate some number of periods, change something in the environment, and then resume the simulation.
When called with no argument, simulate will simulate the model for $\texttt{T_sim}$ periods. The user can optionally pass an integer specifying the number of periods to simulate (which should not exceed $\texttt{T_sim}$).
In the cell below, we simulate our perfect foresight consumers for 80 periods, then seize a bunch of their assets (dragging their wealth even more negative), then simulate for the remaining 40 periods.
The state_prev attribute of an AgentType stores the values of the model's state variables in the previous period of the simulation.
End of explanation |
9,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with Hyperparameter Optimization (scikit-learn)
<a href="https
Step1: Prepare Data
Step2: Prepare Hyperparameters
Step3: Run Validation
Step4: Pick the best hyperparameters and train the full data
Step5: Calculate Accuracy on Full Training Set | Python Code:
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
import itertools
import time
import numpy as np
import pandas as pd
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
Explanation: Logistic Regression with Hyperparameter Optimization (scikit-learn)
<a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/examples-without-verta/notebooks/sklearn-census.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Imports
End of explanation
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv"
train_data_filename = wget.download(train_data_url)
test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv"
test_data_filename = wget.download(test_data_url)
df_train = pd.read_csv("census-train.csv")
X_train = df_train.iloc[:,:-1].values
y_train = df_train.iloc[:, -1]
df_train.head()
Explanation: Prepare Data
End of explanation
hyperparam_candidates = {
'C': [1e-4, 1e-1, 1, 10, 1e3],
'solver': ['liblinear', 'lbfgs'],
'max_iter': [15, 28],
}
# total models 20
# create hyperparam combinations
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
Explanation: Prepare Hyperparameters
End of explanation
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
def run_experiment(hyperparams):
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_val_train, y_val_train)  # fit on the validation-split training data, not the full training set
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
print(hyperparams, end=' ')
print("Validation accuracy: {:.4f}".format(val_acc))
# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
Explanation: Run Validation
End of explanation
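One programmatic way to pick the winner (a sketch; the original workflow reads the best values off the printed log) is to have the experiment return its validation accuracy and keep the argmax. The helper below is hypothetical and simply mirrors run_experiment:
# Hypothetical scored variant of run_experiment (not part of the original notebook).
def run_experiment_scored(hyperparams):
    model = linear_model.LogisticRegression(**hyperparams)
    model.fit(X_val_train, y_val_train)
    return model.score(X_val_test, y_val_test)

scored = [(run_experiment_scored(hp), hp) for hp in hyperparam_sets]
best_val_acc, best_hyperparams_found = max(scored, key=lambda pair: pair[0])
print(best_val_acc, best_hyperparams_found)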
best_hyperparams = {}  # fill in with the best combination found in the validation run above
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
Explanation: Pick the best hyperparameters and train the full data
End of explanation
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
Explanation: Calculate Accuracy on Full Training Set
End of explanation |
9,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
There are a thousand movie reviews for both
positive and
negative
reviews
Step1: Now I need to store it as
python
documents = [
('pos', ['good', 'awesome', ....]),
('neg', ['ridiculous', 'horrible', ...])
]
OR
Storing it in a dictionary could also be a good idea; will try out both
python
documents = {
'pos'
Step2: Getting the list of all words to store the most frequently occurring ones
Step3: Making a frequency distribution of the words
Step4: will train only for the first 5000 top words in the list
Step5: Finding these feature words in documents, making our function would ease it out!
Step6: What the code below does: beforehand we had only the words and their category, but now we have the feature set (a boolean value for each of the most frequently used words, indicating whether it appears) together with the category.
Step7: Training the classifier
Step8: We won't be telling the machine the category, i.e. whether the document is a positive one or a negative one. We ask it to tell that to us. Then we compare it to the known category that we have and calculate how accurate it is.
Naive bayes algorithm
It states that
\begin{equation}
\text{posterior} = \frac{\text{prior occurrences} \times \text{likelihood}}{\text{current evidence}}
\end{equation}
Here posterior is likelihood of occurence | Python Code:
movie_reviews.categories()
Explanation: There are a thousand movie reviews for both
positive and
negative
reviews
End of explanation
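The snippets below assume a preamble along these lines; the imports and the stop_words set are not shown in this excerpt, so treat this block as an assumption:
# Assumed preamble (not shown in the original excerpt).
import random
import pickle
import nltk
from nltk.corpus import movie_reviews, stopwords
stop_words = set(stopwords.words("english"))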
documents = [(list(word for word in movie_reviews.words(fileid) if word not in stop_words), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)
]
random.shuffle(documents)
Explanation: Now I need to store it as
python
documents = [
('pos', ['good', 'awesome', ....]),
('neg', ['ridiculous', 'horrible', ...])
]
OR
Storing it in a dictionary could also be a good idea; will try out both
python
documents = {
'pos': ['good', 'awesome', ....],
'neg': ['ridiculous', 'horrible', ...]
}
End of explanation
all_words = []
for w in movie_reviews.words():
all_words.append(w.lower())
Explanation: Getting the list of all words to store the most frequently occurring ones
End of explanation
all_words = nltk.FreqDist(all_words)
all_words.most_common(20)
all_words["hate"] ## counting the occurences of a single word
Explanation: Making a frequency distribution of the words
End of explanation
feature_words = list(all_words.keys())[:5000]
Explanation: will train only for the first 5000 top words in the list
End of explanation
def find_features(document):
words = set(document)
feature = {}
for w in feature_words:
feature[w] = (w in words)
return feature
Explanation: Finding these feature words in documents, making our function would ease it out!
End of explanation
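A quick illustrative check of the helper (not in the original notebook):
# documents[0][0] is the (stopword-filtered) word list of the first shuffled review.
example_features = find_features(documents[0][0])
print(sum(example_features.values()), "of the top 5000 words appear in this review")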
feature_sets = [(find_features(rev), category) for (rev, category) in documents]
feature_sets[:1]
Explanation: What the code below does: beforehand we had only the words and their category, but now we have the feature set (a boolean value for each of the most frequently used words, indicating whether it appears) together with the category.
End of explanation
training_set = feature_sets[:1900]
testing_set = feature_sets[1900:]
Explanation: Training the classifier
End of explanation
## TO-DO: build our own naive bayes algorithm
# classifier = nltk.NaiveBayesClassifier.train(training_set)
## saving the classifier
# save_classifier = open("naive_bayes.pickle", "wb")
# pickle.dump(classifier, save_classifier)
# save_classifier.close()
## Now that the pickle is saved we will use that.
## Using the pickle file now
pickle_classifier = open("naive_bayes.pickle", "rb")
classifier = pickle.load(pickle_classifier)
pickle_classifier.close()
## Testing its accuracy
print("Naive bayes classifier accuracy percentage : ", (nltk.classify.accuracy(classifier, testing_set))*100)
classifier.show_most_informative_features(20)
Explanation: We won't be telling the machine the category, i.e. whether the document is a positive one or a negative one. We ask it to tell that to us. Then we compare it to the known category that we have and calculate how accurate it is.
Naive bayes algorithm
It states that
\begin{equation}
\text{posterior} = \frac{\text{prior occurrences} \times \text{likelihood}}{\text{current evidence}}
\end{equation}
Here, the posterior is the probability of the category given the observed evidence
End of explanation |
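As a tiny worked example of the formula with made-up numbers: if half the training reviews are positive, and the word "awesome" appears in 20% of positive reviews but only 5% of negative ones, then for a review containing "awesome":
# Made-up illustrative numbers, not taken from the movie review corpus.
prior_pos, prior_neg = 0.5, 0.5
like_pos, like_neg = 0.20, 0.05                          # P(word | class)
evidence = prior_pos * like_pos + prior_neg * like_neg   # 0.125
posterior_pos = prior_pos * like_pos / evidence          # 0.8
print(posterior_pos)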
9,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
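For example, an NPZD-type model would record this property roughly as follows; this is illustrative only, and the actual value depends on the model being documented:
# Illustrative only -- pick the enumeration value that matches the documented model.
# DOC.set_value("NPZD")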
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
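For the BOOLEAN properties in this section the value is passed unquoted; a hypothetical completed cell (illustrative only, not a claim about any model) would read:
# Hypothetical example -- True/False depends on the model being documented
DOC.set_value(True)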
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
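Since this property has cardinality 1.N, several nutrients may apply. The template does not show the multi-value call explicitly; the sketch below assumes one DOC.set_value call per selected choice, which should be confirmed against the ES-DOC client documentation:
# Hypothetical example -- assumes one call per selected nutrient
DOC.set_value("Nitrogen (N)")
DOC.set_value("Iron (Fe)")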
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculating the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
9,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Oregon Curriculum Network <br />
Discovering Math with Python
All Aboard the S Train!
Those of us exploring the geometry of thinking laid out in Synergetics (subtitled explorations in the geometry of thinking) will be familiar with the Jitterbug Transformation, popularized in this Youtube introduction to the International Mathematicians Union logo
Step1: The cuboctahedron and icosahedron are related by having the same edge length. The ratio of the two, in terms of volume, is
Step2: Icosa * sfactor = Cubocta.
Step3: The cuboctahedron that jitterbugs into an icosahedron takes twenty regular tetrahedrons -- in volume, eight of them so formed (the other twelve paired in six half-octahedra) -- into twenty irregular tetrahedrons in the corresponding regular icosahedron (same surface edge lengths).
Each of those 20 irregular tetrahedrons we may refer to as an "icosatet" (IcosaTet).
The computation below shows the icosatet (1/sfactor) times 20, giving the same volume as the "Jitterbug icosa" (edges 2R).
Step4: From Figure 988.00 in Synergetics
Step5: Verifying S Module Volume
The "skew icosahedron" inside the volume 4 octahedron is what we use to derive the 24 S modules, which make up the difference in volume between the two. The S module's volume may also be expressed in terms of φ.
Step6: Lets look at the S module in more detail, and compute its volume from scratch, using a Python formula.
<a data-flickr-embed="true" href="https
Step7: Setting a = 2 gives us the following edges table
Step8: The S Train
The fact that the cuboctahedron and icosahedron relate in two ways via a common S-factor suggests the metaphor of a train or subway route.
Start at the cuboctahedron and follow the Jitterbug Pathway (one stop, one application of the S-factor, but as a reciprocal, since we're dropping in volume).
We've arrived at the Jitterbug icosahedron. Applying 1/S twice more will take us to another cuboctahedron (dubbed "SmallGuy" in some writings). Its triangular faces overlap those of the Jitterbug icosahedron.
Step9: SmallGuy's edges are 2R times 1/sfactor, since linear change is a 3rd root of volumetric change (when shape is held constant).
Interestingly, this result is one tenth the JB_icosahedron's volume, but a linear measure in this instance.
Step10: When going in the other direction (smaller to bigger), apply the S factor directly (not the reciprocal) since the volumes increase.
For example start at the cuboctahedron of volume 2.5, apply the S factor twice to get the corresponding skew icosahedron ("Icosahedron Within"), its faces embedded in the same volume 4 octahedron (see above).
S is for "Skew"...
However, we might also say "S" is for "Sesame Street" and for "spine" as the Concentric Hierarchy forms the backbone of Synergetics and becomes the familiar neighborhood, what we keep coming back to.
... and for "Subway"
The idea of scale factors taking us from one "station stop" to another within the Concentric Hierarchy jibes with the "hypertoon" concept
Step11: The SuperRT is the RT defined by the Jitterbug icosa (JB_icosa) and its dual, the Pentagonal Dodecahedron of tetravolume $3\sqrt{2}(\phi^2 + 1)$.
The S train through the 2.5 cubocta, which stops at "Icosa Within" does not meet up with S train through 20 cubocta, which runs to SmallGuy.
The 20 and 2.5 cubocta stations are linked by "Double D express" (halve or double all edge lengths).
$$Cubocta 20 \rightarrow DoubleD \rightarrow Cubocta 2.5 \rightarrow S^2 \rightarrow Icosa Within \rightarrow + 24 Smods \rightarrow Octa4$$
The Phi Commuter does a lot of the heavy lifting, multiplying all edges by phi or 1/phi, as in the ...e6, e3, E, E3, E6... progression.
Multiplying edges by x entails multiplying volume by $x^3$.
Take Phi Commuter from SuperRT to the 120 E Mods RT (with radius R), get off and transfer to the T Mods RT (mind the gap of ~0.9994), then take the local to the 7.5 RT.
The space-filling RD6 will be at the same corner (they share vertexes).
<a data-flickr-embed="true" href="https | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("1VXDejQcAWY")
Explanation: Oregon Curriculum Network <br />
Discovering Math with Python
All Aboard the S Train!
Those of us exploring the geometry of thinking laid out in Synergetics (subtitled explorations in the geometry of thinking) will be familiar with the Jitterbug Transformation, popularized in this Youtube introduction to the International Mathematicians Union logo:
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/46320625832/in/dateposted-public/" title="imu_logo_u2be"><img src="https://farm5.staticflickr.com/4815/46320625832_7c33a06f9e.jpg" width="500" height="461" alt="imu_logo_u2be"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
End of explanation
import gmpy2
gmpy2.get_context().precision=200
root2 = gmpy2.sqrt(2)
root7 = gmpy2.sqrt(7)
root5 = gmpy2.sqrt(5)
root3 = gmpy2.sqrt(3)
# phi
𝜙 = (gmpy2.sqrt(5) + 1)/2
# Synergetics modules
Smod = (𝜙 **-5)/2
Emod = (root2/8) * (𝜙 ** -3)
sfactor = Smod/Emod
print("sfactor: {:60.57}".format(sfactor))
Explanation: The cuboctahedron and icosahedron are related by having the same edge length. The ratio of the two, in terms of volume, is: $20 : 5 \sqrt{2} \phi^2$.
Let's call this the "S factor". It also happens to be the Smod/Emod volume ratio.
End of explanation
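As a quick cross-check (a sketch added here, not part of the original notebook), the ratio $20 : 5\sqrt{2}\phi^2$ quoted above reduces to the same number as the Smod/Emod quotient just computed:
ratio_20_to_5root2phi2 = 20 / (5 * root2 * 𝜙 ** 2)
print("ratio:   {:60.57}".format(ratio_20_to_5root2phi2))
print("sfactor: {:60.57}".format(sfactor))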
sfactor = 2 * root2 * 𝜙 ** -2 # 2 * (7 - 3 * root5).sqrt()
print("sfactor: {:60.57}".format(sfactor))
# sfactor in terms of phi-scaled emods
e3 = Emod * 𝜙 ** -3
print("sfactor: {:60.57}".format(24*Emod + 8*e3))
# length of skew icosa edge EF Fig 988.13A below, embedded in
# octa of edge a=2
EF = 2 * gmpy2.sqrt(7 - 3 * root5)
print("sfactor: {:60.57}".format(EF))
Explanation: Icosa * sfactor = Cubocta.
End of explanation
icosatet = 1/sfactor
icosatet
JB_icosa = 20 * icosatet
print("Icosahedron: {:60.57}".format(JB_icosa)) # for volume of JB icosahedron
Explanation: The cuboctahedron that jitterbugs into an icosahedron takes twenty regular tetrahedrons -- in volume, eight of them so formed (the other twelve paired in six half-octahedra) -- into twenty irregular tetrahedrons in the corresponding regular icosahedron (same surface edge lengths).
Each of those 20 irregular tetrahedrons we may refer to as an "icosatet" (IcosaTet).
The computation below shows the icosatet (1/sfactor) times 20, giving the same volume as the "Jitterbug icosa" (edges 2R).
End of explanation
icosa_within = 2.5 * sfactor * sfactor
icosa_within
Explanation: From Figure 988.00 in Synergetics:
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/46319721212/in/dateposted-public/" title="Jitterbug Relation"><img src="https://farm5.staticflickr.com/4908/46319721212_5144721a96.jpg" width="500" height="295" alt="Jitterbug Relation"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">Jitterbug Relationship</div>
The S Train is Leaving the Station...
However there's another twinning or pairing of the cubocta and icosa in Synergetics that arises when we fit both into a contextualizing octahedron.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/46009103944/in/dateposted-public/" title="Phi Scaled S Module"><img src="https://farm5.staticflickr.com/4847/46009103944_bda5a5f0c3.jpg" width="500" height="500" alt="Phi Scaled S Module"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Consider the canonical octahedron of volume 4, with a cuboctahedron inside, its triangular faces flush with the octahedron's. Its volume is 2.5.
Now consider an icosahedron with eight of its twenty faces flush to the same octahedron, but skewed (tilted) relative to the cuboctahedron's.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/31432640247/in/dateposted-public/" title="icosa_within"><img src="https://farm5.staticflickr.com/4876/31432640247_14b56cdc4b.jpg" width="500" height="409" alt="icosa_within"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">From Figure 988.12 in Synergetics by RBF</div>
The relationship between this pair is different than in the Jitterbug Transformation. For one thing, the edges are no longer the same length, and for another, the icosahedron's edges are longer, and its volume is greater.
However, despite these differences, the S-Factor is still involved.
For one thing: the longer edge of the icosahedron is the S-factor, given edges and radii of the cuboctahedron of volume 2.5 are all R = 1 = the radius of one CCP sphere -- each encased by the volume 6 RD (see below).
From Figure 988.00 in Synergetics:
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/46319721512/in/dateposted-public/" title="Skew Relationship"><img src="https://farm5.staticflickr.com/4827/46319721512_e1f04c3ca2.jpg" width="500" height="272" alt="Skew Relationship"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">Cuboctahedron and Icosahedron<br /> both with faces flush to Octahedron of volume 4</div>
For another: the cuboctahedron's volume, times S-Factor to the 2nd power, gives the icosahedron's volume.
End of explanation
smod = (4 - icosa_within)/24
print("smod: {:60.57}".format(smod))
(𝜙**-5)/2
print("smod: {:60.57}".format(smod))
Explanation: Verifying S Module Volume
The "skew icosahedron" inside the volume 4 octahedron is what we use to derive the 24 S modules, which make up the difference in volume between the two. The S module's volume may also be expressed in terms of φ.
End of explanation
import tetvols
# assume a = 1 D
a = 1
# common apex is F
FH = 1/𝜙
FE = sfactor/2
FG = root3 * FE/2
# connecting the base (same order, i.e. H, E, G)
HE = (3 - root5)/2
EG = FE/2
GH = EG
Smod = tetvols.ivm_volume((FH, FE, FG, HE, EG, GH))
print("smod: {:60.57}".format(Smod))
print("Octa Edge = 1")
print("FH: {:60.57}".format(FH))
print("FE: {:60.57}".format(FE))
print("FG: {:60.57}".format(FG))
print("HE: {:60.57}".format(HE))
print("EG: {:60.57}".format(EG))
print("GH: {:60.57}".format(GH))
Explanation: Lets look at the S module in more detail, and compute its volume from scratch, using a Python formula.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/45589318711/in/dateposted-public/" title="dejong"><img src="https://farm2.staticflickr.com/1935/45589318711_677d272397.jpg" width="417" height="136" alt="dejong"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<br />
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/32732893998/in/dateposted-public/" title="smod_dimensions"><img src="https://farm5.staticflickr.com/4892/32732893998_cd5f725f3d.jpg" width="500" height="484" alt="smod_dimensions"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Picking a common apex for three lengths (radials), and then connecting the dots around the base so defined, is step one in using our algorithm. We'll use gmpy2 for its extended precision capabilities.
The Tetrahedron class in tetravolume module is set by default to work in D units (D = 2R) i.e. the canonical tetrahedron, octahedron, icosahedron, all have edges 1.
End of explanation
print("Octa Edge = 2")
print("FH: {:60.57}".format(FH * 2))
print("FE: {:60.57}".format(FE * 2))
print("FG: {:60.57}".format(FG * 2))
print("HE: {:60.57}".format(HE * 2))
print("EG: {:60.57}".format(EG * 2))
print("GH: {:60.57}".format(GH * 2))
Explanation: Setting a = 2 gives us the following edges table:
End of explanation
SmallGuy = 20 * (1/sfactor) ** 3
SmallGuy
print("SmallGuy: {:60.57}".format(SmallGuy))
Explanation: The S Train
The fact that the cuboctahedron and icosahedron relate in two ways via a common S-factor suggests the metaphor of a train or subway route.
Start at the cuboctahedron and follow the Jitterbug Pathway (one stop, one application of the S-factor, but as a reciprocal, since we're dropping in volume).
We've arrived at the Jitterbug icosahedron. Applying 1/S twice more will take us to another cuboctahedron (dubbed "SmallGuy" in some writings). Its triangular faces overlap those of the Jitterbug icosahedron.
End of explanation
print("SmallGuy Edge: {:56.54}".format(2 * (1/sfactor))) # SmallGuy edge
print("Icosahedron: {:56.53}".format(JB_icosa)) # for volume of JB icosahedron
Explanation: SmallGuy's edges are 2R times 1/sfactor, since linear change is a 3rd root of volumetric change (when shape is held constant).
Interestingly, this result is one tenth the JB_icosahedron's volume, but a linear measure in this instance.
End of explanation
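A one-line check of the "one tenth" remark (a sketch, not in the original): the same digits show up once as an edge length and once as a tetravolume divided by ten.
print("SmallGuy edge: {:60.57}".format(2 * (1/sfactor)))
print("JB icosa / 10: {:60.57}".format(JB_icosa / 10))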
Syn3 = gmpy2.sqrt(gmpy2.mpq(9,8))
JB_icosa = SmallGuy * sfactor * sfactor
print("JB Icosa: {:60.57}".format(JB_icosa))
JB_cubocta = JB_icosa * sfactor
print("JB Cubocta: {:60.57}".format(JB_cubocta))
SuperRT = JB_cubocta * Syn3
SuperRT # 20*S3
print("SuperRT: {:60.57}".format(SuperRT))
Explanation: When going in the other direction (smaller to bigger), apply the S factor directly (not the reciprocal) since the volumes increase.
For example start at the cuboctahedron of volume 2.5, apply the S factor twice to get the corresponding skew icosahedron ("Icosahedron Within"), its faces embedded in the same volume 4 octahedron (see above).
S is for "Skew"...
However, we might also say "S" is for "Sesame Street" and for "spine" as the Concentric Hierarchy forms the backbone of Synergetics and becomes the familiar neighborhood, what we keep coming back to.
... and for "Subway"
The idea of scale factors taking us from one "station stop" to another within the Concentric Hierarchy jibes with the "hypertoon" concept: smooth transformations terminating in "switch points" from which other transformations also branch (a nodes and edges construct, like the polyhedrons themselves).
Successive applications of both S and Syn3 take us to "station stops" along the "S train" e.g.
$$SmallGuy \rightarrow S^2 \rightarrow icosa \rightarrow S \rightarrow cubocta \rightarrow Syn3 \rightarrow RT$$
and so on. Bigger and bigger (or other way).
Remember Syn3? That's also our $IVM \Leftrightarrow XYZ$ conversion constant. Yet here we're not using it that way, as we're staying in tetravolumes the whole time.
However, what's so is the ratio between the volume of the cube of edges R and the volume of the tetrahedron of edges D (D = 2R) is the same as that between the RT and volume 20 cuboctahedron, where long diagonals of RT = edges of cubocta.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/21077777642/in/photolist-FRd2LJ-y7z7Xm-frqefo-8thDyL-6zKk1y-5KBFWR-5KFVMm-5uinM4" title="Conversion Constant"><img src="https://farm1.staticflickr.com/702/21077777642_9803ddb65e.jpg" width="500" height="375" alt="Conversion Constant"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">Cube edges = 1/2 x Tetrahedron edges;<br /> Cube:Tetrahedron volume ratio = S3</div>
End of explanation
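A numeric sketch of the S3 claim above (added for verification, not in the original notebook): in XYZ units a cube of edge R = 1 has volume 1, a regular tetrahedron of edge D = 2 has volume $2^3/(6\sqrt{2})$, and their ratio should match both Syn3 and SuperRT : cuboctahedron 20.
xyz_tet_edge2 = 2**3 / (6 * root2)
print("cube : tet         {:60.57}".format(1 / xyz_tet_edge2))
print("SuperRT : cubocta  {:60.57}".format(SuperRT / JB_cubocta))
print("Syn3               {:60.57}".format(Syn3))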
volume1 = SuperRT - JB_icosa
volume2 = (4 - 24*Smod) * (1/sfactor)
print("volume1: {:60.57}".format(volume1))
print("volume2: {:60.57}".format(volume2))
# one more application of the 1/sfactor gives the 2.5 cubocta
print("Edged 1 Cubocta: {:60.57}".format(volume2 * (1/sfactor)))
Explanation: The SuperRT is the RT defined by the Jitterbug icosa (JB_icosa) and its dual, the Pentagonal Dodecahedron of tetravolume $3\sqrt{2}(\phi^2 + 1)$.
The S train through the 2.5 cubocta, which stops at "Icosa Within" does not meet up with S train through 20 cubocta, which runs to SmallGuy.
The 20 and 2.5 cubocta stations are linked by "Double D express" (halve or double all edge lengths).
$$Cubocta 20 \rightarrow DoubleD \rightarrow Cubocta 2.5 \rightarrow S^2 \rightarrow Icosa Within \rightarrow + 24 Smods \rightarrow Octa4$$
The Phi Commuter does a lot of the heavy lifting, multiplying all edges by phi or 1/phi, as in the ...e6, e3, E, E3, E6... progression.
Multiplying edges by x entails multiplying volume by $x^3$.
Take Phi Commuter from SuperRT to the 120 E Mods RT (with radius R), get off and transfer to the T Mods RT (mind the gap of ~0.9994), then take the local to the 7.5 RT.
The space-filling RD6 will be at the same corner (they share vertexes).
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/4178618670/in/photolist-28MC8r3-27PVk6E-27PVjh5-27PVkvN-27PViQ3-27PVjC5-KgsYkX-KgsXRk-KgsZ2B-27KsgFG-27xwi3K-9WvZwa-97TTvV-7nfvKu" title="The 6 and the 7.5"><img src="https://farm3.staticflickr.com/2767/4178618670_1b4729e527.jpg" width="500" height="456" alt="The 6 and the 7.5"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">RT of volume 7.5 and RD of volume 6<br /> with shared vertexes (by David Koski using vZome)</div>
The RD6's long diagonals make Octa4, your bridge to Icosa Within and the S line to the 2.5 cubocta.
$$SuperRT \rightarrow \phi Commuter \rightarrow Emod RT \rightarrow Tmod RT \rightarrow 3/2 \rightarrow 7.5 RT \rightarrow RD6 \rightarrow Octa4$$
This kind of touring by scale factor and switching pathways is called "taking subways around the neighborhood" (i.e. Sesame Street).
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/31433920137/in/dateposted-public/" title="Sesame Street Subway"><img src="https://farm5.staticflickr.com/4812/31433920137_ecb829e3bd.jpg" width="500" height="375" alt="Sesame Street Subway"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Here's another one, derived by David Koski in early March, 2021:
Icosa of (Octa4 - 24 S modules) $\rightarrow$ S-factor down $\rightarrow$ the volumetric difference between SuperRT and the Jitterbug Icosa (which latter inscribes in the former as long face diagonals).
End of explanation |
9,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Identificação de Cargas através de Representação Visual de Séries Temporais
Artigo
Step1: Pré-processamento dos dados
Step2: Parâmetros gerais dos dados utilizados na modelagem (treino e teste)
Step4: Gerando dados
A fim de normalizar os benchmarkings, serão utilizados os dados das séries do bechmarking 1 para o processo de Extração de Características (conversão serie2image - benchmarking 2).
Extração de Características
Step5: Conjunto de Treino
Step6: Conjunto de teste
Step7: Modelagem
Step8: Benchmarking (replicando estudo)
Step9: Embedding das imagens de Treino
Step10: Embedding das imagens de Teste
Step11: Treinando Classificador Supervisionado
Step12: Avaliando Classificador | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rc('text', usetex=False)
from matplotlib.image import imsave
import pandas as pd
import pickle as cPickle
import os, sys
from math import *
from pprint import pprint
from tqdm import tqdm_notebook
from mpl_toolkits.axes_grid1 import make_axes_locatable
from PIL import Image
from glob import glob
from IPython.display import display
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.preprocessing import image as keras_image
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
REDD_RESOURCES_PATH = 'datasets/REDD'
BENCHMARKING_RESOURCES_PATH = 'benchmarkings/Imaging-NILM-time-series/'
sys.path.append(os.path.join(BENCHMARKING_RESOURCES_PATH, ''))
from serie2QMlib import *
Explanation: Load Identification through Visual Representation of Time Series
Article: Imaging NILM Time-series
URL: https://link.springer.com/chapter/10.1007/978-3-030-20257-6_16
Source code: https://github.com/LampriniKyrk/Imaging-NILM-time-series
Proposed strategy: convert the time series into images, extract features with a DNN (VGG16), and apply supervised classification.
Loading the environment and parameters
End of explanation
# Define sliding window
def window_time_series(series, n, step=1):
# print "in window_time_series",series
if step < 1.0:
step = max(int(step * n), 1)
return [series[i:i + n] for i in range(0, len(series) - n + 1, step)]
# PAA function
def paa(series, now, opw):
if now == None:
now = len(series) / opw
if opw == None:
opw = ceil(len(series) / now)
return [sum(series[i * opw: (i + 1) * opw]) / float(opw) for i in range(now)]
def standardize(serie):
dev = np.sqrt(np.var(serie))
mean = np.mean(serie)
return [(each - mean) / dev for each in serie]
# Rescale data into [0,1]
def rescale(serie):
maxval = max(serie)
minval = min(serie)
gap = float(maxval - minval)
return [(each - minval) / gap for each in serie]
# Rescale data into [-1,1]
def rescaleminus(serie):
maxval = max(serie)
minval = min(serie)
gap = float(maxval - minval)
return [(each - minval) / gap * 2 - 1 for each in serie]
# Generate quantile bins
def QMeq(series, Q):
q = pd.qcut(list(set(series)), Q)
dic = dict(zip(set(series), q.labels))
MSM = np.zeros([Q, Q])
label = []
for each in series:
label.append(dic[each])
for i in range(0, len(label) - 1):
MSM[label[i]][label[i + 1]] += 1
    for i in range(Q):
if sum(MSM[i][:]) == 0:
continue
MSM[i][:] = MSM[i][:] / sum(MSM[i][:])
return np.array(MSM), label, q.levels
# Generate quantile bins when equal values exist in the array (slower than QMeq)
def QVeq(series, Q):
q = pd.qcut(list(set(series)), Q)
dic = dict(zip(set(series), q.labels))
qv = np.zeros([1, Q])
label = []
for each in series:
label.append(dic[each])
for i in range(0, len(label)):
qv[0][label[i]] += 1.0
return np.array(qv[0][:] / sum(qv[0][:])), label
# Generate Markov Matrix given a spesicif number of quantile bins
def paaMarkovMatrix(paalist, level):
paaindex = []
for each in paalist:
for k in range(len(level)):
lower = float(level[k][1:-1].split(',')[0])
upper = float(level[k][1:-1].split(',')[-1])
if each >= lower and each <= upper:
paaindex.append(k)
return paaindex
# Generate Image (.png) files of generated images
def gengramImgs(image, paaimages, label, name, path):
import operator
index = zip(range(len(label)), label)
index.sort(key=operator.itemgetter(1))
count = 0
for p, q in index:
count += 1
#print 'generate fig of pdfs:', p
plt.ioff();
fig = plt.figure();
fig.set_size_inches((1,1))
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
plt.imshow(paaimages[p], aspect='equal');
plt.savefig(path+"/fig-"+name+".png")
plt.close(fig)
if count > 30:
break
# Generate pdf files of trainsisted array in porlar coordinates
def genpolarpdfs(raw, label, name):
import matplotlib.backends.backend_pdf as bpdf
import operator
    index = sorted(zip(range(len(label)), label), key=operator.itemgetter(1))
with bpdf.PdfPages(name) as pdf:
for p, q in index:
#print 'generate fig of pdfs:', p
plt.ioff();
r = np.array(range(1, length + 1));
r = r / 100.0;
theta = np.arccos(np.array(rescaleminus(standardize(raw[p][1:])))) * 2;
fig = plt.figure();
plt.suptitle(datafile + '_' + str(label[p]));
ax = plt.subplot(111, polar=True);
ax.plot(theta, r, color='r', linewidth=3);
pdf.savefig(fig)
plt.close(fig)
pdf.close
# return the max value instead of mean value in PAAs
def maxsample(mat, s):
retval = []
x, y, z = mat.shape
    l = int(np.floor(y / float(s)))
for each in mat:
block = []
for i in range(s):
            block.append([np.max(each[i * l:(i + 1) * l, j * l:(j + 1) * l]) for j in range(s)])
retval.append(np.asarray(block))
return np.asarray(retval)
# Pickle the data and save in the pkl file
def pickledata(mat, label, train, name):
#print '..pickling data:', name
traintp = (mat[:train], label[:train])
testtp = (mat[train:], label[train:])
    f = open('fridge/' + name + '.pkl', 'wb')
pickletp = [traintp, testtp]
cPickle.dump(pickletp, f, protocol=cPickle.HIGHEST_PROTOCOL)
def pickle3data(mat, label, train, name):
#print '..pickling data:', name
traintp = (mat[:train], label[:train])
validtp = (mat[:train], label[:train])
testtp = (mat[train:], label[train:])
    f = open(name + '.pkl', 'wb')
pickletp = [traintp, validtp, testtp]
cPickle.dump(pickletp, f, protocol=cPickle.HIGHEST_PROTOCOL)
Explanation: Data pre-processing
End of explanation
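A tiny illustration of the two helpers the conversion below actually relies on (synthetic numbers, not REDD data):
# paa() averages equal-width segments; rescale() maps a series into [0, 1]
demo = [1, 3, 2, 4, 6, 8, 7, 9]
print(paa(demo, 4, None))  # -> [2.0, 3.0, 7.0, 8.0]
print(rescale(demo))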
#################################
###Define the parameters here####
#################################
datafiles = ['dish washer1-1'] # Data file name (TODO: change here)
trains = [250] # Number of training instances (because we assume training and test data are mixed in one file)
size = [32] # PAA size
GAF_type = 'GADF' # GAF type: GASF, GADF
save_PAA = True # Save the GAF with or without dimension reduction by PAA: True, False
rescale_type = 'Zero' # Rescale the data into [0,1] or [-1,1]: Zero, Minusone
directory = os.path.join(BENCHMARKING_RESOURCES_PATH, 'GeneratedImages') #the directory will be created if it does not already exist. Here the images will be stored
if not os.path.exists(directory):
os.makedirs(directory)
Explanation: General parameters of the data used for modeling (train and test)
End of explanation
def serie2image(serie, GAF_type = 'GADF', scaling = False, s = 32):
    """Customized function to perform Series to Image conversion.
    Args:
        serie : original input data (time-serie chunk of appliance/main data - REDD - benchmarking 1)
        GAF_type : GADF / GASF (Benchmarking 2 process)
        s : Size of output paaimage originated from serie [ INFO: PAA = (32, 32) / noPAA = (50, 50) ]
    """
image = None
paaimage = None
patchimage = None
matmatrix = None
fullmatrix = None
std_data = serie
if scaling:
std_data = rescale(std_data)
paalistcos = paa(std_data, s, None)
# paalistcos = rescale(paa(each[1:],s,None))
# paalistcos = rescaleminus(paa(each[1:],s,None))
################raw###################
datacos = np.array(std_data)
#print(datacos)
datasin = np.sqrt(1 - np.array(std_data) ** 2)
#print(datasin)
paalistcos = np.array(paalistcos)
paalistsin = np.sqrt(1 - paalistcos ** 2)
datacos = np.matrix(datacos)
datasin = np.matrix(datasin)
paalistcos = np.matrix(paalistcos)
paalistsin = np.matrix(paalistsin)
if GAF_type == 'GASF':
paamatrix = paalistcos.T * paalistcos - paalistsin.T * paalistsin
matrix = np.array(datacos.T * datacos - datasin.T * datasin)
elif GAF_type == 'GADF':
paamatrix = paalistsin.T * paalistcos - paalistcos.T * paalistsin
matrix = np.array(datasin.T * datacos - datacos.T * datasin)
else:
sys.exit('Unknown GAF type!')
#label = np.asarray(label)
image = matrix
paaimage = np.array(paamatrix)
matmatrix = np.asarray(matmatrix)
fullmatrix = np.asarray(fullmatrix)
#
# maximage = maxsample(image, s)
# maxmatrix = np.asarray(np.asarray([each.flatten() for each in maximage]))
if save_PAA == False:
finalmatrix = matmatrix
else:
finalmatrix = fullmatrix
# uncomment below if needed data in pickled form
# pickledata(finalmatrix, label, train, datafilename)
#gengramImgs(image, paaimage, label, directory)
return image, paaimage, matmatrix, fullmatrix, finalmatrix
# Reading power dataset (benchmark 1)
BENCHMARKING1_RESOURCES_PATH = "benchmarkings/cs446 project-electric-load-identification-using-machine-learning/"
size_paa = 32
size_without_paa = 30
# devices to be used in training and testing
use_idx = np.array([3,4,6,7,10,11,13,17,19])
label_columns_idx = ["APLIANCE_{}".format(i) for i in use_idx]
Explanation: Generating the data
To keep the benchmarks aligned, the series data from benchmark 1 is used for the feature-extraction step (the serie2image conversion of benchmark 2).
Feature Extraction
End of explanation
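A small smoke test of serie2image on a synthetic ramp (illustrative only; the real chunks come from the benchmark-1 arrays loaded in the next cell):
demo_serie = np.linspace(0, 1, 30)
demo_img, demo_paa_img, _, _, _ = serie2image(demo_serie, 'GADF', scaling=True)
print(demo_img.shape, demo_paa_img.shape)  # expected: (30, 30) and (32, 32)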
print("Processing train dataset (Series to Images)...")
# Train...
train_power_chunks = np.load( os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/train_power_chunks.npy') )
train_labels_binary = np.load( os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/train_labels_binary.npy') )
data_paa_train = []
data_without_paa_train = []
#for idx, row in tqdm_notebook(df_power_chunks.iterrows(), total = df_power_chunks.shape[0]):
for idx, power_chunk in tqdm_notebook(enumerate(train_power_chunks), total = train_power_chunks.shape[0]):
#serie = row[attr_columns_idx].tolist()
#print(serie)
#labels = row[label_columns_idx].astype('int').astype('str').tolist()
serie = power_chunk
labels = train_labels_binary[idx, :].astype('str').tolist()
labels_str = ''.join(labels)
for g_Type in ['GASF', 'GADF']:
#image, paaimage, matmatrix, fullmatrix, finalmatrix = serie2image(serie, g_Type)
image, paaimage, _, _, _ = serie2image(serie, g_Type, scaling=True)
# Persist image data files (PAA - noPAA)
np.save(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedMatrixImages",
"{}_WITHOUTPAA_{}_train_{}.npy".format(idx, g_Type, labels_str)
),
image
)
# x is the array you want to save
imsave(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedImages",
"{}_WITHOUTPAA_{}_train_{}.png".format(idx, g_Type, labels_str)
),
image
)
data_without_paa_train.append( list([idx, g_Type]) + list(image.flatten()) + list(labels) )
np.save(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedMatrixImages",
"{}_PAA_{}_train_{}.npy".format(idx, g_Type, labels_str)
),
paaimage
)
imsave(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedImages",
"{}_PAA_{}_train_{}.png".format(idx, g_Type, labels_str)
),
paaimage
)
data_paa_train.append( list([idx, g_Type]) + list(paaimage.flatten()) + list(labels) )
        # Visualizing some results...
plt.figure(figsize=(8,6));
plt.suptitle(g_Type + ' series');
ax1 = plt.subplot(121);
plt.title(g_Type + ' without PAA');
plt.imshow(image);
divider = make_axes_locatable(ax1);
cax = divider.append_axes("right", size="2.5%", pad=0.2);
plt.colorbar(cax=cax);
ax2 = plt.subplot(122);
plt.title(g_Type + ' with PAA');
plt.imshow(paaimage);
print('Saving processed data...')
df_without_paa_train = pd.DataFrame(
data = data_without_paa_train,
columns = list(["IDX", "TYPE"]) + ["DIMESION_{}".format(d) for d in range(size_without_paa*size_without_paa)] + list(label_columns_idx)
)
df_without_paa_train.to_csv(os.path.join( BENCHMARKING_RESOURCES_PATH, "datasets", "df_without_paa_train.csv"))
df_paa_train = pd.DataFrame(
data = data_paa_train,
columns = list(["IDX", "TYPE"]) + ["DIMESION_{}".format(d) for d in range(size_paa*size_paa)] + list(label_columns_idx)
)
df_paa_train.to_csv(os.path.join( BENCHMARKING_RESOURCES_PATH, "datasets", "df_paa_train.csv"))
Explanation: Training set
End of explanation
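An optional sanity check (sketch): any of the per-chunk matrices persisted above can be reloaded straight from disk.
saved = glob(os.path.join(BENCHMARKING_RESOURCES_PATH, "GeneratedMatrixImages", "*_PAA_GADF_train_*.npy"))
if saved:
    print(len(saved), np.load(saved[0]).shape)  # each PAA matrix should be (32, 32)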
print("Processing test dataset (Series to Images)...")
# Test...
test_power_chunks = np.load( os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/test_power_chunks.npy') )
test_labels_binary = np.load( os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/test_labels_binary.npy') )
data_paa_test = []
data_without_paa_test = []
#for idx, row in tqdm_notebook(df_power_chunks.iterrows(), total = df_power_chunks.shape[0]):
for idx, power_chunk in tqdm_notebook(enumerate(test_power_chunks), total = test_power_chunks.shape[0]):
#serie = row[attr_columns_idx].tolist()
#print(serie)
#labels = row[label_columns_idx].astype('int').astype('str').tolist()
serie = power_chunk
labels = test_labels_binary[idx, :].astype('str').tolist()
labels_str = ''.join(labels)
for g_Type in ['GASF', 'GADF']:
#image, paaimage, matmatrix, fullmatrix, finalmatrix = serie2image(serie, g_Type)
image, paaimage, _, _, _ = serie2image(serie, g_Type, scaling=True)
# Persist image data files (PAA - noPAA)
np.save(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedMatrixImages",
"{}_WITHOUTPAA_{}_test_{}.npy".format(idx, g_Type, labels_str)
),
image
)
# x is the array you want to save
imsave(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedImages",
"{}_WITHOUTPAA_{}_test_{}.png".format(idx, g_Type, labels_str)
),
image
)
data_without_paa_test.append( list([idx, g_Type]) + list(image.flatten()) + list(labels) )
np.save(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedMatrixImages",
"{}_PAA_{}_test_{}.npy".format(idx, g_Type, labels_str)
),
paaimage
)
imsave(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedImages",
"{}_PAA_{}_test_{}.png".format(idx, g_Type, labels_str)
),
paaimage
)
data_paa_test.append( list([idx, g_Type]) + list(paaimage.flatten()) + list(labels) )
        # Visualizing some results...
plt.figure(figsize=(8,6));
plt.suptitle(g_Type + ' series');
ax1 = plt.subplot(121);
plt.title(g_Type + ' without PAA');
plt.imshow(image);
divider = make_axes_locatable(ax1);
cax = divider.append_axes("right", size="2.5%", pad=0.2);
plt.colorbar(cax=cax);
ax2 = plt.subplot(122);
plt.title(g_Type + ' with PAA');
plt.imshow(paaimage);
print('Saving processed data...')
df_without_paa_test = pd.DataFrame(
data = data_without_paa_test,
columns = list(["IDX", "TYPE"]) + ["DIMESION_{}".format(d) for d in range(size_without_paa*size_without_paa)] + list(label_columns_idx)
)
df_without_paa_test.to_csv(os.path.join( BENCHMARKING_RESOURCES_PATH, "datasets", "df_without_paa_test.csv"))
df_paa_test = pd.DataFrame(
data = data_paa_test,
columns = list(["IDX", "TYPE"]) + ["DIMESION_{}".format(d) for d in range(size_paa*size_paa)] + list(label_columns_idx)
)
df_paa_test.to_csv(os.path.join( BENCHMARKING_RESOURCES_PATH, "datasets", "df_paa_test.csv"))
Explanation: Test set
End of explanation
def metrics(test, predicted):
##CLASSIFICATION METRICS
acc = accuracy_score(test, predicted)
prec = precision_score(test, predicted)
rec = recall_score(test, predicted)
f1 = f1_score(test, predicted)
f1m = f1_score(test, predicted, average='macro')
# print('f1:',f1)
# print('acc: ',acc)
# print('recall: ',rec)
# print('precision: ',prec)
# # to copy paste print
#print("{:.4}\t{:.4}\t{:.4}\t{:.4}\t{:.4}".format(acc, prec, rec, f1, f1m))
# ##REGRESSION METRICS
# mae = mean_absolute_error(test_Y,pred)
# print('mae: ',mae)
# E_pred = sum(pred)
# E_ground = sum(test_Y)
# rete = abs(E_pred-E_ground)/float(max(E_ground,E_pred))
# print('relative error total energy: ',rete)
return acc, prec, rec, f1, f1m
def plot_predicted_and_ground_truth(test, predicted):
#import matplotlib.pyplot as plt
plt.plot(predicted.flatten(), label = 'pred')
plt.plot(test.flatten(), label= 'Y')
plt.show()
return
def embedding_images(images, model):
# Feature extraction process with VGG16
vgg16_feature_list = [] # Attributes array (vgg16 embedding)
y = [] # Extract labels from name of image path[]
for path in tqdm_notebook(images):
img = keras_image.load_img(path, target_size=(100, 100))
x = keras_image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
# "Extracting" features...
vgg16_feature = vgg16_model.predict(x)
vgg16_feature_np = np.array(vgg16_feature)
vgg16_feature_list.append(vgg16_feature_np.flatten())
# Image (chuncked serie)
file_name = path.split("\\")[-1].split(".")[0]
image_labels = [int(l) for l in list(file_name.split("_")[-1])]
y.append(image_labels)
X = np.array(vgg16_feature_list)
return X, y
Explanation: Modeling
End of explanation
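A toy call of the metrics helper defined above (dummy vectors, not REDD results), just to show the tuple it returns:
toy_true = np.array([1, 0, 1, 1, 0, 1])
toy_pred = np.array([1, 0, 0, 1, 0, 1])
print(metrics(toy_true, toy_pred))  # (accuracy, precision, recall, f1, f1_macro)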
# Building dnn model (feature extraction)
vgg16_model = VGG16(
include_top=False,
weights='imagenet',
input_tensor=None,
input_shape=(100, 100, 3),
pooling='avg',
classes=1000
)
Explanation: Benchmarking (replicating the study)
End of explanation
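With include_top=False and pooling='avg', the embedding produced for each image is the 512-dimensional global-average-pooled output of VGG16's last convolutional block:
print(vgg16_model.output_shape)  # (None, 512)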
# GADF Images with PAA (Train)
images = sorted(glob(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedImages",
"*_PAA_GADF_train_*.png"
)
))
X_train, y_train = embedding_images(images, vgg16_model)
# Data persistence
np.save( os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/X_train.npy'), X_train)
np.save( os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/y_train.npy'), y_train)
Explanation: Embedding the training images
End of explanation
# GADF Images with PAA (Test)
images = sorted(glob(
os.path.join(
BENCHMARKING_RESOURCES_PATH,
"GeneratedImages",
"*_PAA_GADF_test_*.png"
)
))
X_test, y_test = embedding_images(images, vgg16_model)
# Data persistence
np.save( os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/X_test.npy'), X_test)
np.save( os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/y_test.npy'), y_test)
Explanation: Embedding the test images
End of explanation
# Training supervised classifier
clf = DecisionTreeClassifier(max_depth=15)
# Train classifier
clf.fit(X_train, y_train)
# Save classifier for future use
#joblib.dump(clf, 'Tree'+'-'+device+'-redd-all.joblib')
Explanation: Training the supervised classifier
End of explanation
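A quick sanity check on the fitted tree (illustrative; scored on the training embeddings themselves, so it only confirms the fit, not generalisation):
print("Train subset accuracy:", clf.score(X_train, y_train))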
# Predict test data
y_pred = clf.predict(X_test)
# Print metrics
final_performance = []
y_test = np.array(y_test)
y_pred = np.array(y_pred)
print("")
print("RESULT ANALYSIS\n\n")
print("ON/OFF State Charts")
print("-" * 115)
for i in range(y_test.shape[1]):
fig = plt.figure(figsize=(15, 2))
plt.title("Appliance #{}".format( label_columns_idx[i]))
plt.plot(y_test[:, i].flatten(), label = "True Y")
plt.plot( y_pred[:, i].flatten(), label = "Predicted Y")
plt.xlabel('Sample')
plt.xticks(range(0, y_test.shape[0], 50))
plt.xlim(0, y_test.shape[0])
plt.ylabel('Status')
plt.yticks([0, 1])
plt.ylim(0,1)
plt.legend()
plt.show()
acc, prec, rec, f1, f1m = metrics(y_test[:, i], y_pred[:, i])
final_performance.append([
label_columns_idx[i],
round(acc*100, 2),
round(prec*100, 2),
round(rec*100, 2),
round(f1*100, 2),
round(f1m*100, 2)
])
print("-" * 115)
print("")
print("FINAL PERFORMANCE BY APPLIANCE (LABEL):")
df_metrics = pd.DataFrame(
data = final_performance,
columns = ["Appliance", "Accuracy", "Precision", "Recall", "F1-score", "F1-macro"]
)
display(df_metrics)
print("")
print("OVERALL AVERAGE PERFORMANCE:")
final_performance = np.mean(np.array(final_performance)[:, 1:].astype(float), axis = 0)
display(pd.DataFrame(
data = {
"Metric": ["Accuracy", "Precision", "Recall", "F1-score", "F1-macro"],
"Result (%)": [round(p, 2) for p in final_performance]
}
))
# print("-----------------")
# print("Accuracy : {0:.2f}%".format( final_performance[0] ))
# print("Precision : {0:.2f}%".format( final_performance[1] ))
# print("Recall : {0:.2f}%".format( final_performance[2] ))
# print("F1-score : {0:.2f}%".format( final_performance[3] ))
# print("F1-macro : {0:.2f}%".format( final_performance[4] ))
# print("-----------------")
Explanation: Evaluating the classifier
End of explanation |
9,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Pytorch-Introduction" data-toc-modified-id="Pytorch-Introduction-1"><span class="toc-item-num">1 </span>Pytorch Introduction</a></span><ul class="toc-item"><li><span><a href="#Linear-Regression" data-toc-modified-id="Linear-Regression-1.1"><span class="toc-item-num">1.1 </span>Linear Regression</a></span></li><li><span><a href="#Linear-Regression-Version-2" data-toc-modified-id="Linear-Regression-Version-2-1.2"><span class="toc-item-num">1.2 </span>Linear Regression Version 2</a></span></li><li><span><a href="#Logistic-Regression" data-toc-modified-id="Logistic-Regression-1.3"><span class="toc-item-num">1.3 </span>Logistic Regression</a></span></li><li><span><a href="#Recurrent-Neural-Network-(RNN)" data-toc-modified-id="Recurrent-Neural-Network-(RNN)-1.4"><span class="toc-item-num">1.4 </span>Recurrent Neural Network (RNN)</a></span><ul class="toc-item"><li><span><a href="#Vanilla-RNN" data-toc-modified-id="Vanilla-RNN-1.4.1"><span class="toc-item-num">1.4.1 </span>Vanilla RNN</a></span></li><li><span><a href="#LSTM" data-toc-modified-id="LSTM-1.4.2"><span class="toc-item-num">1.4.2 </span>LSTM</a></span></li></ul></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Pytorch Introduction
```bash
installation on a mac
for more information on installation refer to
the following link
Step2: Here we start defining the linear regression model; recall that in linear regression, we are optimizing for the squared loss.
\begin{align}
L = \frac{1}{2}(y-(Xw + b))^2
\end{align}
Step3: Linear Regression Version 2
A better way of defining our model is to inherit from the nn.Module class. To use it, all we need to do is define our model's forward pass; nn.Module will then automatically define the backward method for us, with the gradients computed using autograd.
Step4: After training our model, we can also save the model's parameters and load them back into the model in the future
Step5: Logistic Regression
Let's now look at a classification example, here we'll define a logistic regression that takes in a bag of words representation of some text and predicts over two labels "English" and "Spanish".
Step6: The next code chunk creates word-to-index mappings. To build our bag of words (BoW) representation, we need to assign each word in our vocabulary a unique index. Let's say our entire corpus only consists of two words "hello" and "world", with "hello" corresponding to index 0 and "world" to index 1. Then the BoW vector for the sentence "hello world hello world" will be [2, 2], i.e. the count for the word "hello" will be at position 0 of the array and so on.
Step8: Next we define our model by inheriting from nn.Module, along with two helper functions to convert our data to torch Tensors so we can use them during training.
Step9: We are now ready to train this!
Step10: Recurrent Neural Network (RNN)
The idea behind an RNN is to make use of the sequential information that exists in our dataset. In a feedforward neural network, we assume that all inputs and outputs are independent of each other. But for some tasks, this might not be the best way to tackle the problem. For example, in Natural Language Processing (NLP) applications, if we wish to predict the next word in a sentence (one business application of this is Swiftkey), then we could imagine that knowing the word that comes before it can come in handy.
Vanilla RNN
The input $x$ will be a sequence of words, and each $x_t$ is a single word. And because of how matrix multiplication works, we can't simply use a word index like (36) as an input; instead we represent each word as a one-hot vector with a size equal to the total vocabulary size. For example, the word with index 36 has the value 1 at position 36 and the rest of the values in the vector would all be 0's.
Step13: In the next section, we'll teach our RNN to produce "ihello" from "hihell".
Step15: LSTM
The example below uses an LSTM to generate part of speech tags. The usage of the LSTM API is essentially the same as the RNN we were using in the last section. Except in this example, we will prepare the word-to-index mapping ourselves, and as for the modeling part, we will add an embedding layer before the LSTM layer, which is a common technique in NLP applications. So for each word, instead of using the one-hot encoding way of representing the data (which can be inefficient and treats all words as independent entities with no relationships amongst each other), word embeddings will compress them into a lower dimension that encodes the semantics of the words, i.e. how similarly each word is used within our given corpus. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
%watermark -a 'Ethen' -d -t -v -p torch,numpy,matplotlib
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Pytorch-Introduction" data-toc-modified-id="Pytorch-Introduction-1"><span class="toc-item-num">1 </span>Pytorch Introduction</a></span><ul class="toc-item"><li><span><a href="#Linear-Regression" data-toc-modified-id="Linear-Regression-1.1"><span class="toc-item-num">1.1 </span>Linear Regression</a></span></li><li><span><a href="#Linear-Regression-Version-2" data-toc-modified-id="Linear-Regression-Version-2-1.2"><span class="toc-item-num">1.2 </span>Linear Regression Version 2</a></span></li><li><span><a href="#Logistic-Regression" data-toc-modified-id="Logistic-Regression-1.3"><span class="toc-item-num">1.3 </span>Logistic Regression</a></span></li><li><span><a href="#Recurrent-Neural-Network-(RNN)" data-toc-modified-id="Recurrent-Neural-Network-(RNN)-1.4"><span class="toc-item-num">1.4 </span>Recurrent Neural Network (RNN)</a></span><ul class="toc-item"><li><span><a href="#Vanilla-RNN" data-toc-modified-id="Vanilla-RNN-1.4.1"><span class="toc-item-num">1.4.1 </span>Vanilla RNN</a></span></li><li><span><a href="#LSTM" data-toc-modified-id="LSTM-1.4.2"><span class="toc-item-num">1.4.2 </span>LSTM</a></span></li></ul></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
# make up some training data and specify the type to be float, i.e. np.float32
# We DO not recommend double, i.e. np.float64, especially on the GPU. GPUs have bad
# double precision performance since they are optimized for float32
X_train = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1], dtype = np.float32)
X_train = X_train.reshape(-1, 1)
y_train = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3], dtype = np.float32)
y_train = y_train.reshape(-1, 1)
# Convert numpy array to Pytorch Tensors
X = torch.FloatTensor(X_train)
y = torch.FloatTensor(y_train)
Explanation: Pytorch Introduction
```bash
installation on a mac
for more information on installation refer to
the following link:
http://pytorch.org/
conda install pytorch torchvision -c pytorch
```
At its core, PyTorch provides two main features:
An n-dimensional Tensor, similar to numpy array but can run on GPUs. PyTorch provides many functions for operating on these Tensors, thus it can be used as a general purpose scientific computing tool.
Automatic differentiation for building and training neural networks.
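To make the second point concrete, here is a minimal autograd sketch (an illustrative addition, not part of the original notebook): we mark a tensor with requires_grad=True, build a scalar from it, and call .backward() to get the gradient.
```python
# minimal autograd sketch (illustration only)
import torch

x = torch.ones(2, 2, requires_grad=True)  # track operations on x
y = (3 * x ** 2).sum()                    # a scalar function of x
y.backward()                              # autograd computes dy/dx
print(x.grad)                             # 6 * x, i.e. a 2x2 tensor of 6.0
```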
Let's dive in by looking at some examples:
Linear Regression
End of explanation
# with linear regression, we apply a linear transformation
# to the incoming data, i.e. y = Xw + b, here we only have a 1
# dimensional data, thus the feature size will be 1
model = nn.Linear(in_features=1, out_features=1)
# although we can write our own loss function, the nn module
# also contains definitions of popular loss functions; here
# we use the MSELoss, a.k.a the L2 loss, and size_average parameter
# simply divides it with the number of examples
criterion = nn.MSELoss(size_average=True)
# Then we use the optim module to define an Optimizer that will update the weights of
# the model for us. Here we will use SGD; but it contains many other
# optimization algorithms. The first argument to the SGD constructor tells the
# optimizer the parameters that it should update
learning_rate = 0.01
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
# start the optimization process
n_epochs = 100
for _ in range(n_epochs):
# torch accumulates the gradients, thus before running new things
# use the optimizer object to zero all of the gradients for the
# variables it will update (which are the learnable weights of the model),
# think in terms of refreshing the gradients before doing the another round of update
optimizer.zero_grad()
# forward pass: compute predicted y by passing X to the model
output = model(X)
# compute the loss function
loss = criterion(output, y)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# call the step function on an Optimizer makes an update to its parameters
optimizer.step()
# plot the data and the fitted line to confirm the result
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 14
# convert a torch FloatTensor back to a numpy ndarray
# here, we also call .detach to detach the result from the computation history,
# to prevent future computations on it from being tracked
y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()
# to get the parameters, i.e. weight and bias from the model,
# we can use the state_dict() attribute from the model that
# we've defined
model.state_dict()
# or we could get it from the model's parameter
# which by itself is a generator
list(model.parameters())
Explanation: Here we start defining the linear regression model; recall that in linear regression, we are optimizing for the squared loss.
\begin{align}
L = \frac{1}{2}(y-(Xw + b))^2
\end{align}
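For reference, a quick hand derivation (not in the original text) of the gradients that loss.backward() effectively computes for this scalar-case loss; note that nn.MSELoss omits the 1/2 factor, which only rescales these gradients:
\begin{align}
\frac{\partial L}{\partial w} = -\big(y-(Xw + b)\big)X, \qquad \frac{\partial L}{\partial b} = -\big(y-(Xw + b)\big)
\end{align}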
End of explanation
class LinearRegression(nn.Module):
def __init__(self, in_features, out_features):
super().__init__() # boilerplate call
self.in_features = in_features
self.out_features = out_features
self.linear = nn.Linear(in_features, out_features)
def forward(self, x):
out = self.linear(x)
return out
# same optimization process
n_epochs = 100
learning_rate = 0.01
criterion = nn.MSELoss(size_average=True)
model = LinearRegression(in_features=1, out_features=1)
# when we defined our LinearRegression class, we've assigned
# a neural network's component/layer to a class variable in the
# __init__ function, and now notice that we can directly call
# .parameters() on the class we've defined due to some Python magic
# from the Pytorch devs
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(n_epochs):
# forward + backward + optimize
optimizer.zero_grad()
output = model(X)
loss = criterion(output, y)
loss.backward()
optimizer.step()
# print the loss per 20 epoch this time
if (epoch + 1) % 20 == 0:
# starting from pytorch 0.4.0, we use .item to get a python number from a
# torch scalar, before loss.item() looks something like loss.data[0]
print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, n_epochs, loss.item()))
Explanation: Linear Regression Version 2
A better way of defining our model is to inherit from the nn.Module class. To use it, all we need to do is define our model's forward pass; nn.Module will then automatically define the backward method for us, with the gradients computed using autograd.
End of explanation
checkpoint_path = 'model.pkl'
torch.save(model.state_dict(), checkpoint_path)
model.load_state_dict(torch.load(checkpoint_path))
y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()
Explanation: After training our model, we can also save the model's parameters and load them back into the model in the future
End of explanation
# define some toy dataset
train_data = [
('me gusta comer en la cafeteria'.split(), 'SPANISH'),
('Give it to me'.split(), 'ENGLISH'),
('No creo que sea una buena idea'.split(), 'SPANISH'),
('No it is not a good idea to get lost at sea'.split(), 'ENGLISH')
]
test_data = [
('Yo creo que si'.split(), 'SPANISH'),
('it is lost on me'.split(), 'ENGLISH')
]
Explanation: Logistic Regression
Let's now look at a classification example, here we'll define a logistic regression that takes in a bag of words representation of some text and predicts over two labels "English" and "Spanish".
End of explanation
idx_to_label = ['SPANISH', 'ENGLISH']
label_to_idx = {"SPANISH": 0, "ENGLISH": 1}
word_to_idx = {}
for sent, _ in train_data + test_data:
for word in sent:
if word not in word_to_idx:
word_to_idx[word] = len(word_to_idx)
print(word_to_idx)
VOCAB_SIZE = len(word_to_idx)
NUM_LABELS = len(label_to_idx)
Explanation: The next code chunk creates word-to-index mappings. To build our bag of words (BoW) representation, we need to assign each word in our vocabulary a unique index. Let's say our entire corpus only consists of two words "hello" and "world", with "hello" corresponding to index 0 and "world" to index 1. Then the BoW vector for the sentence "hello world hello world" will be [2, 2], i.e. the count for the word "hello" will be at position 0 of the array and so on.
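A tiny sketch of that claim (illustration only, using a hypothetical two-word vocabulary rather than the corpus above):
```python
# toy bag-of-words check for "hello world hello world"
toy_vocab = {'hello': 0, 'world': 1}
bow = [0] * len(toy_vocab)
for word in 'hello world hello world'.split():
    bow[toy_vocab[word]] += 1
print(bow)  # [2, 2]
```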
End of explanation
class BoWClassifier(nn.Module):
def __init__(self, vocab_size, num_labels):
super().__init__()
self.linear = nn.Linear(vocab_size, num_labels)
def forward(self, bow_vector):
        """
        When we're performing a classification, after passing through the
        linear layer (also known as the affine layer), we also need to pass it
        through the softmax layer to convert a vector of real numbers into a
        probability distribution; here we use log softmax for numerical
        stability reasons.
        """
return F.log_softmax(self.linear(bow_vector), dim = 1)
def make_bow_vector(sentence, word_to_idx):
vector = torch.zeros(len(word_to_idx))
for word in sentence:
vector[word_to_idx[word]] += 1
return vector.view(1, -1)
def make_target(label, label_to_idx):
return torch.LongTensor([label_to_idx[label]])
Explanation: Next we define our model by inheriting from nn.Module, along with two helper functions to convert our data to torch Tensors so we can use them during training.
End of explanation
model = BoWClassifier(VOCAB_SIZE, NUM_LABELS)
# note that instead of using NLLLoss (negative log likelihood),
# we could have used CrossEntropyLoss and remove the log_softmax
# function call in our forward method. The CrossEntropyLoss docstring
# explicitly states that this criterion combines `LogSoftMax` and
# `NLLLoss` in one single class.
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
n_epochs = 100
for epoch in range(n_epochs):
for instance, label in train_data:
bow_vector = make_bow_vector(instance, word_to_idx)
target = make_target(label, label_to_idx)
# standard step to perform the forward and backward step
model.zero_grad()
log_probs = model(bow_vector)
loss = criterion(log_probs, target)
loss.backward()
optimizer.step()
# we can also wrap the code block in with torch.no_grad(): to
# prevent history tracking, this is often used in model inferencing,
# or when evaluating the model as we won't be needing the gradient during
# this stage
with torch.no_grad():
# predict on the test data to check if the model actually learned anything
for instance, label in test_data:
bow_vec = make_bow_vector(instance, word_to_idx)
log_probs = model(bow_vec)
y_pred = np.argmax(log_probs[0].numpy())
label_pred = idx_to_label[y_pred]
print('true label: ', label, ' predicted label: ', label_pred)
Explanation: We are now ready to train this!
End of explanation
torch.manual_seed(777)
# suppose we have a
# one hot encoding for each char in 'hello'
# and the sequence length for the word 'hello' is 5
seq_len = 5
h = [1, 0, 0, 0]
e = [0, 1, 0, 0]
l = [0, 0, 1, 0]
o = [0, 0, 0, 1]
# here we specify a single RNN cell with the property of
# input_dim (4) -> output_dim (2)
# batch_first explained in the following
rnn_cell = nn.RNN(input_size=4, hidden_size=2, batch_first=True)
# our input shape should be of shape
# (batch, seq_len, input_size) when batch_first=True;
# the input size basically refers to the number of features
# (seq_len, batch_size, input_size) when batch_first=False (default)
# thus we reshape our input to the appropriate size, torch.view is
# equivalent to numpy.reshape
inputs = torch.Tensor([h, e, l, l, o])
inputs = inputs.view(1, 5, -1)
# our hidden is the weights that gets passed along the cells,
# here we initialize some random values for it:
# (batch, num_layers * num_directions, hidden_size) for batch_first=True
# disregard the second argument as of now
hidden = torch.zeros(1, 1, 2)
out, hidden = rnn_cell(inputs, hidden)
print('sequence input size', inputs.size())
print('out size', out.size())
print('sequence size', hidden.size())
# the first value returned by the rnn cell is all
# of the hidden state throughout the sequence, while
# the second value is the most recent hidden state;
# hence we can compare the last slice of the the first
# value with the second value to confirm that they are
# the same
print('\ncomparing rnn cell output:')
print(out[:, -1, :])
hidden[0]
Explanation: Recurrent Neural Network (RNN)
The idea behind an RNN is to make use of the sequential information that exists in our dataset. In a feedforward neural network, we assume that all inputs and outputs are independent of each other. But for some tasks, this might not be the best way to tackle the problem. For example, in Natural Language Processing (NLP) applications, if we wish to predict the next word in a sentence (one business application of this is Swiftkey), then we could imagine that knowing the word that comes before it can come in handy.
Vanilla RNN
The input $x$ will be a sequence of words, and each $x_t$ is a single word. And because of how matrix multiplication works, we can't simply use a word index like (36) as an input; instead we represent each word as a one-hot vector with a size equal to the total vocabulary size. For example, the word with index 36 has the value 1 at position 36 and the rest of the values in the vector would all be 0's.
End of explanation
# create an index to character mapping
idx2char = ['h', 'i', 'e', 'l', 'o']
# Teach hihell -> ihello
x_data = [[0, 1, 0, 2, 3, 3]] # hihell
x_one_hot = [[[1, 0, 0, 0, 0], # h 0
[0, 1, 0, 0, 0], # i 1
[1, 0, 0, 0, 0], # h 0
[0, 0, 1, 0, 0], # e 2
[0, 0, 0, 1, 0], # l 3
[0, 0, 0, 1, 0]]] # l 3
x_one_hot = np.array(x_one_hot)
y_data = np.array([1, 0, 2, 3, 3, 4]) # ihello
# As we have one batch of samples, we will change them to variables only once
inputs = torch.Tensor(x_one_hot)
labels = torch.LongTensor(y_data)
# hyperparameters
seq_len = 6 # |hihell| == 6, equivalent to time step
input_size = 5 # one-hot size
batch_size = 1 # one sentence per batch
num_layers = 1 # one-layer rnn
num_classes = 5 # predicting 5 distinct character
hidden_size = 4 # output from the RNN
class RNN(nn.Module):
    """The RNN model will be a RNN followed by a linear layer,
    i.e. a fully-connected layer"""
def __init__(self, seq_len, num_classes, input_size, hidden_size, num_layers):
super().__init__()
self.seq_len = seq_len
self.num_layers = num_layers
self.input_size = input_size
self.num_classes = num_classes
self.hidden_size = hidden_size
self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
self.linear = nn.Linear(hidden_size, num_classes)
def forward(self, x):
# assuming batch_first = True for RNN cells
batch_size = x.size(0)
hidden = self._init_hidden(batch_size)
x = x.view(batch_size, self.seq_len, self.input_size)
# apart from the output, rnn also gives us the hidden
# cell, this gives us the opportunity to pass it to
# the next cell if needed; we won't be needing it here
# because the nn.RNN already computed all the time steps
# for us. rnn_out will of size [batch_size, seq_len, hidden_size]
rnn_out, _ = self.rnn(x, hidden)
linear_out = self.linear(rnn_out.view(-1, hidden_size))
return linear_out
def _init_hidden(self, batch_size):
        """Initialize hidden cell states, assuming
        batch_first = True for RNN cells"""
return torch.zeros(batch_size, self.num_layers, self.hidden_size)
# Set loss, optimizer and the RNN model
torch.manual_seed(777)
rnn = RNN(seq_len, num_classes, input_size, hidden_size, num_layers)
print('network architecture:\n', rnn)
# train the model
num_epochs = 15
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(rnn.parameters(), lr=0.1)
for epoch in range(1, num_epochs + 1):
optimizer.zero_grad()
outputs = rnn(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# check the current predicted string
# max gives the maximum value and its
# corresponding index, we will only
# be needing the index
_, idx = outputs.max(dim = 1)
idx = idx.detach().numpy()
result_str = [idx2char[c] for c in idx]
print('epoch: {}, loss: {:1.3f}'.format(epoch, loss.item()))
print('Predicted string: ', ''.join(result_str))
Explanation: In the next section, we'll teach our RNN to produce "ihello" from "hihell".
End of explanation
# These will usually be more like 32 or 64 dimensional.
# We will keep them small for this toy example
EMBEDDING_SIZE = 6
HIDDEN_SIZE = 6
training_data = [
("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
idx_to_tag = ['DET', 'NN', 'V']
tag_to_idx = {'DET': 0, 'NN': 1, 'V': 2}
word_to_idx = {}
for sent, tags in training_data:
for word in sent:
if word not in word_to_idx:
word_to_idx[word] = len(word_to_idx)
word_to_idx
def prepare_sequence(seq, to_idx):
    """Convert sentence/sequence to torch Tensors"""
idxs = [to_idx[w] for w in seq]
return torch.LongTensor(idxs)
seq = training_data[0][0]
inputs = prepare_sequence(seq, word_to_idx)
inputs
class LSTMTagger(nn.Module):
def __init__(self, embedding_size, hidden_size, vocab_size, tagset_size):
super().__init__()
self.embedding_size = embedding_size
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.tagset_size = tagset_size
self.embedding = nn.Embedding(vocab_size, embedding_size)
self.lstm = nn.LSTM(embedding_size, hidden_size)
self.hidden2tag = nn.Linear(hidden_size, tagset_size)
def forward(self, x):
embed = self.embedding(x)
hidden = self._init_hidden()
# the second dimension refers to the batch size, which we've hard-coded
# it as 1 throughout the example
lstm_out, lstm_hidden = self.lstm(embed.view(len(x), 1, -1), hidden)
output = self.hidden2tag(lstm_out.view(len(x), -1))
return output
def _init_hidden(self):
# the dimension semantics are [num_layers, batch_size, hidden_size]
return (torch.rand(1, 1, self.hidden_size),
torch.rand(1, 1, self.hidden_size))
model = LSTMTagger(EMBEDDING_SIZE, HIDDEN_SIZE, len(word_to_idx), len(tag_to_idx))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
epochs = 300
for epoch in range(epochs):
for sentence, tags in training_data:
model.zero_grad()
sentence = prepare_sequence(sentence, word_to_idx)
target = prepare_sequence(tags, tag_to_idx)
output = model(sentence)
loss = criterion(output, target)
loss.backward()
optimizer.step()
inputs = prepare_sequence(training_data[0][0], word_to_idx)
tag_scores = model(inputs)
# validating that the sentence "the dog ate the apple".
# the correct tag should be DET NOUN VERB DET NOUN
print('expected target: ', training_data[0][1])
tag_scores = tag_scores.detach().numpy()
tag = [idx_to_tag[idx] for idx in np.argmax(tag_scores, axis = 1)]
print('generated target: ', tag)
Explanation: LSTM
The example below uses an LSTM to generate part of speech tags. The usage of the LSTM API is essentially the same as the RNN we were using in the last section. Except in this example, we will prepare the word-to-index mapping ourselves, and as for the modeling part, we will add an embedding layer before the LSTM layer, which is a common technique in NLP applications. So for each word, instead of using the one-hot encoding way of representing the data (which can be inefficient and treats all words as independent entities with no relationships amongst each other), word embeddings will compress them into a lower dimension that encodes the semantics of the words, i.e. how similarly each word is used within our given corpus.
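As a standalone illustration of the embedding layer (an added sketch, not part of the original notebook), nn.Embedding is just a lookup table from integer word indices to dense vectors:
```python
# hypothetical 5-word vocabulary mapped to 3-dimensional vectors
embed = nn.Embedding(num_embeddings=5, embedding_dim=3)
print(embed(torch.LongTensor([0, 2])).shape)  # torch.Size([2, 3])
```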
End of explanation |
9,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BUG report
To keep track of bugs we report them here so that they can be reproduced easily. Additionally, as soon as they are fixed they should disappear in this notebook. To each bug report add time and your name. Add new bugs at the end of the file.
Step1: BUG
Step2: Bug #
Step3: Bug #3
Step4: Bug 4
Step5: BUG #5
Step6: Bug
Step7: Compare to SNIa
Step8: BUG
Step9: BUG | Python Code:
%pylab nbagg
import sygma as s
reload(s)
s.__file__
!echo $PYTHONPATH
Explanation: BUG report
To keep track of bugs we report them here so that they can be reproduced easily. Additionally, as soon as they are fixed they should disappear in this notebook. To each bug report add time and your name. Add new bugs at the end of the file.
End of explanation
s_0_1_10=s.sygma(imf_bdys=[0.1,10])
s_1_10=s.sygma(imf_bdys=[1,10])
s_2_8_10=s.sygma(imf_bdys=[2.8,10])
s_0_1_10.plot_sn_distr(rate_only='sn2',label2='[0.1,10]',marker2='d')
s_1_10.plot_sn_distr(rate_only='sn2',label2='[1,10]',marker2='o')
s_2_8_10.plot_sn_distr(rate_only='sn2',label2='[2.8,10]',marker2='x')
plt.xlim(1e7,6e7)
s_0_1_10.plot_sn_distr(rate_only='sn1a',label1='[0.1,10]',marker1='d')
s_1_10.plot_sn_distr(rate_only='sn1a',label1='[1,10]',marker1='o')
s_2_8_10.plot_sn_distr(rate_only='sn1a',label1='[2.8,10]',marker1='x')
plt.ylim(1e-1,1e5)
plt.xlim(1e6,1.5e10)
print sum(s_0_1_10.history.sn1a_numbers)
print sum(s_1_10.history.sn1a_numbers)
Explanation: BUG: SNe not changing for different mass ranges (report from Benoit). All cases include massive stars, assuming a transition mass of 8 Msun.
CR/15
End of explanation
c_2_8_10=s.sygma(imf_type='chabrier',imf_bdys=[2.8,10])
c_1_10=s.sygma(imf_type='chabrier',imf_bdys=[1,10])
c_0_1_10=s.sygma(imf_type='chabrier',imf_bdys=[0.1,10])
c_2_8_10.plot_totmasses(source='agb',marker='o',color='r',label='[2.8,10]')
c_1_10.plot_totmasses(source='agb',marker='s',color='b',label='[1,10]')
c_0_1_10.plot_totmasses(source='agb',marker='p',color='k',label='[0.1,10]')
Explanation: Bug # : IMF implementation
CR/15
End of explanation
s1=s.sygma()
s1.plot_mass_range_contributions(specie='H',prodfac=False,rebin=0.5)
s1.plot_mass(specie='Ni',source='sn1a')
s1.plot_mass(specie='Ni',source='massive')
Explanation: Bug #3 : large H amount in the ~10Msun interval
End of explanation
reload(s)
s1=s.sygma(iniZ=0.0)
Explanation: Bug 4: Pop III stars with BB abundance. General issue: it runs now but needs further testing;
reading the BB abundance has problems with isotopes
End of explanation
reload(s)
s1=s.sygma(iolevel=1,iniZ=0.02)
s1.plot_mass_range_contributions(specie='C',prodfac=True,rebin=0.5)
Explanation: BUG #5: with Z=0.02 not all mass is locked away; also, plot_mass_range_contributions does not work for bin sizes smaller than 1
End of explanation
reload(s)
s7=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e9,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=-1,iniZ=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')
s8=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e9,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=-1,iniZ=0.0001)
s7.plot_sn_distr(rate=True,rate_only='sn2',marker2='o',markevery=3)
s8.plot_sn_distr(rate=True,rate_only='sn2',markevery=5,marker2='x')
#plt.xlim(1e6,1e8)
Explanation: Bug: plot_sn_distr: number of massive stars too large & comparison to SNIa
Both cases below (s7 and s8) should deliver the same number of stars.
End of explanation
s7.plot_sn_distr(rate=False)
s8.plot_sn_distr(rate=False)
s7.plot_sn_distr(rate=True)
s8.plot_sn_distr(rate=True)
Explanation: Compare to SNIa: Rate & numbers are just scaled
End of explanation
import chem_evol
reload(chem_evol)
import sygma as s
reload(s)
imf_bdys = [0.1, 100.0]
imf_yields_range = [1.0, 100.0]
table = 'yield_tables/isotope_yield_table_MESA_only.txt'
s_imf_y_0_02 = s.sygma(table=table,imf_bdys=imf_bdys,imf_yields_range=imf_yields_range,iniZ=0.02)
imf_yields_range = [1.0, 30.0]
table = 'yield_tables/isotope_yield_table_heger_GG_ertl_fullZ_CL13_56Ni.txt'
s_imf_y_0_02 = s.sygma(table=table,imf_bdys=imf_bdys,imf_yields_range=imf_yields_range,iniZ=0.02)
imf_yields_range = [1.0, 100.0]
table = 'yield_tables/isotope_yield_table_heger_GG_ertl_fullZ_CL13_56Ni.txt'
s_imf_y_0_02 = s.sygma(table=table,imf_bdys=imf_bdys,imf_yields_range=imf_yields_range,iniZ=0.02)
Explanation: BUG : The code crashes with imf_yields_range
Benoit - Jan 20th 2016
End of explanation
%matplotlib inline
s1=s.sygma(iniZ=0.02,dt=1e7,tend=2e7)
s1.plot_yield_input(fig=3,xaxis='[C/H]',yaxis='[Fe/H]',iniZ=0.0001,masses=[1,3,12,25],marker='x',color='b',shape='--')
Explanation: BUG: ini_elem_frac_sol variable is not initialized in plot_yield_input function (sygma.py line 568)
End of explanation |
9,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applying unvectorized functions with apply_ufunc
This example will illustrate how to conveniently apply an unvectorized function func to xarray objects using apply_ufunc. func expects 1D numpy arrays and returns a 1D numpy array. Our goal is to conveniently apply this function along a dimension of xarray objects that may or may not wrap dask arrays with a signature.
We will illustrate this using np.interp
Step1: The function we will apply is np.interp which expects 1D numpy arrays. This functionality is already implemented in xarray so we use that capability to make sure we are not making mistakes.
Step2: Let's define a function that works with one vector of data along lat at a time.
Step3: No errors are raised so our interpolation is working.
This function consumes and returns numpy arrays, which means we need to do a lot of work to convert the result back to an xarray object with meaningful metadata. This is where apply_ufunc is very useful.
apply_ufunc
Apply a vectorized function for unlabeled arrays on xarray objects.
The function will be mapped over the data variable(s) of the input arguments using
xarray’s standard rules for labeled computation, including alignment, broadcasting,
looping over GroupBy/Dataset variables, and merging of coordinates.
apply_ufunc has many capabilities but for simplicity this example will focus on the common task of vectorizing 1D functions over nD xarray objects. We will iteratively build up the right set of arguments to apply_ufunc and read through many error messages in doing so.
Step4: apply_ufunc needs to know a lot of information about what our function does so that it can reconstruct the outputs. In this case, the size of dimension lat has changed and we need to explicitly specify that this will happen. xarray helpfully tells us that we need to specify the kwarg exclude_dims.
exclude_dims
exclude_dims
Step5: Core dimensions
Core dimensions are central to using apply_ufunc. In our case, our function expects to receive a 1D vector along lat — this is the dimension that is "core" to the function's functionality. Multiple core dimensions are possible. apply_ufunc needs to know which dimensions of each variable are core dimensions.
input_core_dims
Step6: xarray is telling us that it expected to receive back a numpy array with 0 dimensions but instead received an array with 1 dimension corresponding to newlat. We can fix this by specifying output_core_dims
Step7: Finally we get some output! Let's check that this is right
Step8: No errors are raised so it is right!
Vectorization with np.vectorize
Now our function currently only works on one vector of data which is not so useful given our 3D dataset.
Let's try passing the whole dataset. We add a print statement so we can see what our function receives.
Step9: That's a hard-to-interpret error but our print call helpfully printed the shapes of the input data
Step10: This unfortunately is another cryptic error from numpy.
Notice that newlat is not an xarray object. Let's add a dimension name new_lat and modify the call. Note this cannot be lat because xarray expects dimensions to be the same size (or broadcastable) among all inputs. output_core_dims needs to be modified appropriately. We'll manually rename new_lat back to lat for easy checking.
Step11: Notice that the printed input shapes are all 1D and correspond to one vector along the lat dimension.
The result is now an xarray object with coordinate values copied over from data. This is why apply_ufunc is so convenient; it takes care of a lot of boilerplate necessary to apply functions that consume and produce numpy arrays to xarray objects.
One final point
Step12: Yay! our function is receiving 1D vectors, so we've successfully parallelized applying a 1D function over a block. If you have a distributed dashboard up, you should see computes happening as equality is checked.
High performance vectorization
Step13: The warnings are about object-mode compilation relating to the print statement. This means we don't get much speed up
Step14: Yay! Our function is receiving 1D vectors and is working automatically with dask arrays. Finally let's comment out the print line and wrap everything up in a nice reusable function | Python Code:
import xarray as xr
import numpy as np
xr.set_options(display_style="html") # fancy HTML repr
air = (
xr.tutorial.load_dataset("air_temperature")
.air.sortby("lat") # np.interp needs coordinate in ascending order
.isel(time=slice(4), lon=slice(3))
) # choose a small subset for convenience
air
Explanation: Applying unvectorized functions with apply_ufunc
This example will illustrate how to conveniently apply an unvectorized function func to xarray objects using apply_ufunc. func expects 1D numpy arrays and returns a 1D numpy array. Our goal is to conveniently apply this function along a dimension of xarray objects that may or may not wrap dask arrays with a signature:
We will illustrate this using np.interp:
Signature: np.interp(x, xp, fp, left=None, right=None, period=None)
Docstring:
One-dimensional linear interpolation.
Returns the one-dimensional piecewise linear interpolant to a function
with given discrete data points (`xp`, `fp`), evaluated at `x`.
and write an xr_interp function with signature
xr_interp(xarray_object, dimension_name, new_coordinate_to_interpolate_to)
Load data
First let's load an example dataset
End of explanation
newlat = np.linspace(15, 75, 100)
air.interp(lat=newlat)
Explanation: The function we will apply is np.interp which expects 1D numpy arrays. This functionality is already implemented in xarray so we use that capability to make sure we are not making mistakes.
End of explanation
def interp1d_np(data, x, xi):
return np.interp(xi, x, data)
interped = interp1d_np(air.isel(time=0, lon=0), air.lat, newlat)
expected = air.interp(lat=newlat)
# no errors are raised if values are equal to within floating point precision
np.testing.assert_allclose(expected.isel(time=0, lon=0).values, interped)
Explanation: Let's define a function that works with one vector of data along lat at a time.
End of explanation
xr.apply_ufunc(
interp1d_np, # first the function
air.isel(time=0, lon=0), # now arguments in the order expected by 'interp1_np'
air.lat,
newlat,
)
Explanation: No errors are raised so our interpolation is working.
This function consumes and returns numpy arrays, which means we need to do a lot of work to convert the result back to an xarray object with meaningful metadata. This is where apply_ufunc is very useful.
apply_ufunc
Apply a vectorized function for unlabeled arrays on xarray objects.
The function will be mapped over the data variable(s) of the input arguments using
xarray’s standard rules for labeled computation, including alignment, broadcasting,
looping over GroupBy/Dataset variables, and merging of coordinates.
apply_ufunc has many capabilities but for simplicity this example will focus on the common task of vectorizing 1D functions over nD xarray objects. We will iteratively build up the right set of arguments to apply_ufunc and read through many error messages in doing so.
End of explanation
xr.apply_ufunc(
interp1d_np, # first the function
air.isel(time=0, lon=0), # now arguments in the order expected by 'interp1_np'
air.lat,
newlat,
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be set!
)
Explanation: apply_ufunc needs to know a lot of information about what our function does so that it can reconstruct the outputs. In this case, the size of dimension lat has changed and we need to explicitly specify that this will happen. xarray helpfully tells us that we need to specify the kwarg exclude_dims.
exclude_dims
exclude_dims : set, optional
Core dimensions on the inputs to exclude from alignment and
broadcasting entirely. Any input coordinates along these dimensions
will be dropped. Each excluded dimension must also appear in
``input_core_dims`` for at least one argument. Only dimensions listed
here are allowed to change size between input and output objects.
End of explanation
xr.apply_ufunc(
interp1d_np, # first the function
air.isel(time=0, lon=0), # now arguments in the order expected by 'interp1_np'
air.lat,
newlat,
input_core_dims=[["lat"], ["lat"], []],
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be set!
)
Explanation: Core dimensions
Core dimensions are central to using apply_ufunc. In our case, our function expects to receive a 1D vector along lat — this is the dimension that is "core" to the function's functionality. Multiple core dimensions are possible. apply_ufunc needs to know which dimensions of each variable are core dimensions.
input_core_dims : Sequence[Sequence], optional
List of the same length as ``args`` giving the list of core dimensions
on each input argument that should not be broadcast. By default, we
assume there are no core dimensions on any input arguments.
For example, ``input_core_dims=[[], ['time']]`` indicates that all
dimensions on the first argument and all dimensions other than 'time'
on the second argument should be broadcast.
Core dimensions are automatically moved to the last axes of input
variables before applying ``func``, which facilitates using NumPy style
generalized ufuncs [2]_.
output_core_dims : List[tuple], optional
List of the same length as the number of output arguments from
``func``, giving the list of core dimensions on each output that were
not broadcast on the inputs. By default, we assume that ``func``
outputs exactly one array, with axes corresponding to each broadcast
dimension.
Core dimensions are assumed to appear as the last dimensions of each
output in the provided order.
Next we specify "lat" as input_core_dims on both air and air.lat
End of explanation
xr.apply_ufunc(
interp1d_np, # first the function
air.isel(time=0, lon=0), # now arguments in the order expected by 'interp1_np'
air.lat,
newlat,
input_core_dims=[["lat"], ["lat"], []], # list with one entry per arg
output_core_dims=[["lat"]],
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be set!
)
Explanation: xarray is telling us that it expected to receive back a numpy array with 0 dimensions but instead received an array with 1 dimension corresponding to newlat. We can fix this by specifying output_core_dims
End of explanation
interped = xr.apply_ufunc(
interp1d_np, # first the function
air.isel(time=0, lon=0), # now arguments in the order expected by 'interp1_np'
air.lat,
newlat,
input_core_dims=[["lat"], ["lat"], []], # list with one entry per arg
output_core_dims=[["lat"]],
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be set!
)
interped["lat"] = newlat # need to add this manually
xr.testing.assert_allclose(expected.isel(time=0, lon=0), interped)
Explanation: Finally we get some output! Let's check that this is right
End of explanation
def interp1d_np(data, x, xi):
print(f"data: {data.shape} | x: {x.shape} | xi: {xi.shape}")
return np.interp(xi, x, data)
interped = xr.apply_ufunc(
interp1d_np, # first the function
air.isel(
lon=slice(3), time=slice(4)
), # now arguments in the order expected by 'interp1_np'
air.lat,
newlat,
input_core_dims=[["lat"], ["lat"], []], # list with one entry per arg
output_core_dims=[["lat"]],
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be set!
)
interped["lat"] = newlat # need to add this manually
xr.testing.assert_allclose(expected.isel(time=0, lon=0), interped)
Explanation: No errors are raised so it is right!
Vectorization with np.vectorize
Now our function currently only works on one vector of data which is not so useful given our 3D dataset.
Let's try passing the whole dataset. We add a print statement so we can see what our function receives.
End of explanation
def interp1d_np(data, x, xi):
print(f"data: {data.shape} | x: {x.shape} | xi: {xi.shape}")
return np.interp(xi, x, data)
interped = xr.apply_ufunc(
interp1d_np, # first the function
air, # now arguments in the order expected by 'interp1_np'
air.lat, # as above
newlat, # as above
input_core_dims=[["lat"], ["lat"], []], # list with one entry per arg
output_core_dims=[["lat"]], # returned data has one dimension
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be set!
vectorize=True, # loop over non-core dims
)
interped["lat"] = newlat # need to add this manually
xr.testing.assert_allclose(expected, interped)
Explanation: That's a hard-to-interpret error but our print call helpfully printed the shapes of the input data:
data: (10, 53, 25) | x: (25,) | xi: (100,)
We see that air has been passed as a 3D numpy array, which is not what np.interp expects. Instead we want to loop over all combinations of lon and time, and apply our function to each corresponding vector of data along lat.
apply_ufunc makes this easy by specifying vectorize=True:
vectorize : bool, optional
If True, then assume ``func`` only takes arrays defined over core
dimensions as input and vectorize it automatically with
:py:func:`numpy.vectorize`. This option exists for convenience, but is
almost always slower than supplying a pre-vectorized function.
Using this option requires NumPy version 1.12 or newer.
Also see the documentation for np.vectorize: https://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html. Most importantly
The vectorize function is provided primarily for convenience, not for performance.
The implementation is essentially a for loop.
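A quick illustration of what np.vectorize does (an added sketch, not used elsewhere in this example): it broadcasts a scalar Python function over arrays, but loops in Python under the hood.
```python
scalar_add = np.vectorize(lambda a, b: a + b)
scalar_add([1, 2, 3], 10)  # array([11, 12, 13])
```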
End of explanation
def interp1d_np(data, x, xi):
print(f"data: {data.shape} | x: {x.shape} | xi: {xi.shape}")
return np.interp(xi, x, data)
interped = xr.apply_ufunc(
interp1d_np, # first the function
air, # now arguments in the order expected by 'interp1_np'
air.lat, # as above
newlat, # as above
input_core_dims=[["lat"], ["lat"], ["new_lat"]], # list with one entry per arg
output_core_dims=[["new_lat"]], # returned data has one dimension
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be a set!
vectorize=True, # loop over non-core dims
)
interped = interped.rename({"new_lat": "lat"})
interped["lat"] = newlat # need to add this manually
xr.testing.assert_allclose(
expected.transpose(*interped.dims), interped # order of dims is different
)
interped
Explanation: This unfortunately is another cryptic error from numpy.
Notice that newlat is not an xarray object. Let's add a dimension name new_lat and modify the call. Note this cannot be lat because xarray expects dimensions to be the same size (or broadcastable) among all inputs. output_core_dims needs to be modified appropriately. We'll manually rename new_lat back to lat for easy checking.
End of explanation
def interp1d_np(data, x, xi):
print(f"data: {data.shape} | x: {x.shape} | xi: {xi.shape}")
return np.interp(xi, x, data)
interped = xr.apply_ufunc(
interp1d_np, # first the function
air.chunk(
{"time": 2, "lon": 2}
), # now arguments in the order expected by 'interp1_np'
air.lat, # as above
newlat, # as above
input_core_dims=[["lat"], ["lat"], ["new_lat"]], # list with one entry per arg
output_core_dims=[["new_lat"]], # returned data has one dimension
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be a set!
vectorize=True, # loop over non-core dims
dask="parallelized",
output_dtypes=[air.dtype], # one per output
).rename({"new_lat": "lat"})
interped["lat"] = newlat # need to add this manually
xr.testing.assert_allclose(expected.transpose(*interped.dims), interped)
Explanation: Notice that the printed input shapes are all 1D and correspond to one vector along the lat dimension.
The result is now an xarray object with coordinate values copied over from data. This is why apply_ufunc is so convenient; it takes care of a lot of boilerplate necessary to apply functions that consume and produce numpy arrays to xarray objects.
One final point: lat is now the last dimension in interped. This is a "property" of core dimensions: they are moved to the end before being sent to interp1d_np as was noted in the docstring for input_core_dims
Core dimensions are automatically moved to the last axes of input
variables before applying ``func``, which facilitates using NumPy style
generalized ufuncs [2]_.
Parallelization with dask
So far our function can only handle numpy arrays. A real benefit of apply_ufunc is the ability to easily parallelize over dask chunks when needed.
We want to apply this function in a vectorized fashion over each chunk of the dask array. This is possible using dask's blockwise, map_blocks, or apply_gufunc. Xarray's apply_ufunc wraps dask's apply_gufunc and asking it to map the function over chunks using apply_gufunc is as simple as specifying dask="parallelized". With this level of flexibility we need to provide dask with some extra information:
1. output_dtypes: dtypes of all returned objects, and
2. output_sizes: lengths of any new dimensions.
Here we need to specify output_dtypes since apply_ufunc can infer the size of the new dimension new_lat from the argument corresponding to the third element in input_core_dims. Here I choose the chunk sizes to illustrate that np.vectorize is still applied so that our function receives 1D vectors even though the blocks are 3D.
End of explanation
from numba import float64, guvectorize
@guvectorize("(float64[:], float64[:], float64[:], float64[:])", "(n), (n), (m) -> (m)")
def interp1d_np_gufunc(data, x, xi, out):
    # numba doesn't really like this:
    # it doesn't seem to support f-strings, so do it the old way
print(
"data: " + str(data.shape) + " | x:" + str(x.shape) + " | xi: " + str(xi.shape)
)
out[:] = np.interp(xi, x, data)
# gufuncs don't return data
    # instead you assign to the last arg
# return np.interp(xi, x, data)
Explanation: Yay! our function is receiving 1D vectors, so we've successfully parallelized applying a 1D function over a block. If you have a distributed dashboard up, you should see computes happening as equality is checked.
High performance vectorization: gufuncs, numba & guvectorize
np.vectorize is a very convenient function but is unfortunately slow. It is only marginally faster than writing a for loop in Python and looping. A common way to get around this is to write a base interpolation function that can handle nD arrays in a compiled language like Fortran and then pass that to apply_ufunc.
Another option is to use the numba package which provides a very convenient guvectorize decorator: https://numba.pydata.org/numba-doc/latest/user/vectorize.html#the-guvectorize-decorator
Any decorated function gets compiled and will loop over any non-core dimension in parallel when necessary. We need to specify some extra information:
Our function cannot return a variable any more. Instead it must receive a variable (the last argument) whose contents the function will modify. So we change from def interp1d_np(data, x, xi) to def interp1d_np_gufunc(data, x, xi, out). Our computed results must be assigned to out. All values of out must be assigned explicitly.
guvectorize needs to know the dtypes of the input and output. This is specified in string form as the first argument. Each element of the tuple corresponds to each argument of the function. In this case, we specify float64 for all inputs and outputs: "(float64[:], float64[:], float64[:], float64[:])" corresponding to data, x, xi, out
Now we need to tell numba the size of the dimensions the function takes as inputs and returns as output i.e. core dimensions. This is done in symbolic form i.e. data and x are vectors of the same length, say n; xi and the output out have a different length, say m. So the second argument is (again as a string)
"(n), (n), (m) -> (m)." corresponding again to data, x, xi, out
End of explanation
interped = xr.apply_ufunc(
interp1d_np_gufunc, # first the function
air.chunk(
{"time": 2, "lon": 2}
), # now arguments in the order expected by 'interp1_np'
air.lat, # as above
newlat, # as above
input_core_dims=[["lat"], ["lat"], ["new_lat"]], # list with one entry per arg
output_core_dims=[["new_lat"]], # returned data has one dimension
exclude_dims=set(("lat",)), # dimensions allowed to change size. Must be a set!
# vectorize=True, # not needed since numba takes care of vectorizing
dask="parallelized",
output_dtypes=[air.dtype], # one per output
).rename({"new_lat": "lat"})
interped["lat"] = newlat # need to add this manually
xr.testing.assert_allclose(expected.transpose(*interped.dims), interped)
Explanation: The warnings are about object-mode compilation relating to the print statement. This means we don't get much speed up: https://numba.pydata.org/numba-doc/latest/user/performance-tips.html#no-python-mode-vs-object-mode. We'll keep the print statement temporarily to make sure that guvectorize acts like we want it to.
End of explanation
from numba import float64, guvectorize
@guvectorize(
"(float64[:], float64[:], float64[:], float64[:])",
"(n), (n), (m) -> (m)",
nopython=True,
)
def interp1d_np_gufunc(data, x, xi, out):
out[:] = np.interp(xi, x, data)
def xr_interp(data, dim, newdim):
interped = xr.apply_ufunc(
interp1d_np_gufunc, # first the function
data, # now arguments in the order expected by 'interp1_np'
data[dim], # as above
newdim, # as above
input_core_dims=[[dim], [dim], ["__newdim__"]], # list with one entry per arg
output_core_dims=[["__newdim__"]], # returned data has one dimension
exclude_dims=set((dim,)), # dimensions allowed to change size. Must be a set!
# vectorize=True, # not needed since numba takes care of vectorizing
dask="parallelized",
output_dtypes=[
data.dtype
], # one per output; could also be float or np.dtype("float64")
).rename({"__newdim__": dim})
interped[dim] = newdim # need to add this manually
return interped
xr.testing.assert_allclose(
expected.transpose(*interped.dims),
xr_interp(air.chunk({"time": 2, "lon": 2}), "lat", newlat),
)
Explanation: Yay! Our function is receiving 1D vectors and is working automatically with dask arrays. Finally let's comment out the print line and wrap everything up in a nice reusable function
End of explanation |
9,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building the train object
The job of the YAML parser is to instantiate the train object and everything inside of it. Looking at an example YAML file
Step1: We want to know how to build a model with parallel channels. So, we're going to look at interactively building just the model part of this specification and how it deals with different inputs. It should be possible to put the convolutional layers in parallel using a CompositeSpace as described in this post on the pylearn-users. It could be troublesome, however, supplying these layers with two data streams.
Building the model
Using the specification from above we can see how to instantiate an MLP class interactively. The obvious part we need to deal with first is the input_space. We have to define this to be a CompositeSpace (documentation for spaces). Seems like this will involve modifying the dataset class, but as long as the tuple is in the right format it shouldn't be a problem.
This post might also be useful, as they seem to be trying to do the same thing, and contains an example of how to define the CompositeSpace. So, we should start by instantiating the CompositeSpace.
Step2: Composite Layers
Up until we reach the fully connected layers we want to have different convolutional pipelines. To do this, we have to define two of these pipelines inside a CompositeLayer.
Step3: First, we have to instantiate two copies of the above convolutional layers as their own MLP objects. Originally, I thought these should have an input_source to specify the inputs they take; it turns out nested MLPs do not have input or target sources. Might as well store these in a dictionary
Step4: Then we can initialise our CompositeLayer with these two stacks of convolutional layers. We have to define a dictionary mapping which of the inputs in the supplied composite space goes to which component layer.
Step5: Unfortunately, it turns out we also have to put a FlattenerLayer around this so that the output of this layer will play nicely with the fully connected layer following this
Step6: Now we need to connect this composite layer to the rest of the network, which is a single fully connected layer and the softmax output layer. To do this, we instantiate another MLP object, in which the first layer is this composite layer. This is also when we use the composite input space we defined above.
Step7: Creating the dataset
To test this model we need a dataset that's going to supply the input data in the correct format. This should be a tuple of 4D arrays returned by the iterator in the tuple containing the input and target batches. We can create this pretty easily by just making a Dataset that inherits our old ListDataset and creates an iterator that contains two FlyIterators.
Step8: Testing this new dataset iterator
Step9: Plotting some of the images it produces side by side to make sure they're the same
Step10: Don't know why there's a single one from Iterator 2 at the start, but otherwise seems to have worked.
Creating the rest
The rest of the train object stays the same, apart from the save path and that the algorithm will have to load one of these new ParallelDataset objects for its validation set. So, we're missing
Step11: Assembling the full train object
We now have everything we need to make up our train object, so we can put it together and see how well it runs.
Step12: We can live with that warning.
Now, attempting to run the model | Python Code:
!cat yaml_templates/replicate_8aug_online.yaml
Explanation: Building the train object
The job of the YAML parser is to instantiate the train object and everything inside of it. Looking at an example YAML file:
End of explanation
import pylearn2.space
final_shape = (48,48)
input_space = pylearn2.space.CompositeSpace([
pylearn2.space.Conv2DSpace(shape=final_shape,num_channels=1,axes=['b',0,1,'c']),
pylearn2.space.Conv2DSpace(shape=final_shape,num_channels=1,axes=['b',0,1,'c'])
])
Explanation: We want to know how to build a model with parallel channels. So, we're going to look at interactively building just the model part of this specification and how it deals with different inputs. It should be possible to put the convolutional layers in parallel using a CompositeSpace as described in this post on the pylearn-users. It could be troublesome, however, supplying these layers with two data streams.
Building the model
Using the specification from above we can see how to instantiate an MLP class interactively. The obvious part we need to deal with first is the input_space. We have to define this to be a CompositeSpace (documentation for spaces). Seems like this will involve modifying the dataset class, but as long as the tuple is in the right format it shouldn't be a problem.
This post might also be useful, as the author seems to be trying to do the same thing, and it contains an example of how to define the CompositeSpace. So, we should start by instantiating the CompositeSpace.
End of explanation
import pylearn2.models.mlp
Explanation: Composite Layers
Up until we reach the fully connected layers we want to have different convolutional pipelines. To do this, we have to define two of these pipelines inside a CompositeLayer.
End of explanation
convlayers = {}
for i in range(2):
convlayers[i] = pylearn2.models.mlp.MLP(
layer_name="convlayer_{0}".format(i),
batch_size=128,
layers=[pylearn2.models.mlp.ConvRectifiedLinear(
layer_name='h1',
output_channels=48,
irange=0.025,
init_bias=0,
kernel_shape=[8,8],
pool_shape=[2,2],
pool_stride=[2,2],
max_kernel_norm=1.9365
),
pylearn2.models.mlp.ConvRectifiedLinear(
layer_name='h2',
output_channels=96,
irange=0.025,
init_bias=0,
kernel_shape=[5,5],
pool_shape=[2,2],
pool_stride=[2,2],
max_kernel_norm=1.9365
),
pylearn2.models.mlp.ConvRectifiedLinear(
layer_name='h3',
output_channels=128,
irange=0.025,
init_bias=0,
kernel_shape=[3,3],
pool_shape=[2,2],
pool_stride=[2,2],
max_kernel_norm=1.9365
),
pylearn2.models.mlp.ConvRectifiedLinear(
layer_name='h4',
output_channels=128,
irange=0.025,
init_bias=0,
kernel_shape=[3,3],
pool_shape=[2,2],
pool_stride=[2,2],
max_kernel_norm=1.9365
)
]
)
Explanation: First, we have to instantiate two copies of the above convolutional layers as their own MLP objects. Originally, I thought these should have an input_source to specify the inputs they take; it turns out nested MLPs do not have input or target sources. We might as well store these in a dictionary:
End of explanation
inputs_to_layers = {0:[0],1:[1]}
compositelayer = pylearn2.models.mlp.CompositeLayer(
layer_name="parallel_conv",
layers=[convlayers[i] for i in range(2)],
inputs_to_layers=inputs_to_layers)
Explanation: Then we can initialise our CompositeLayer with these two stacks of convolutional layers. We have to define a dictionary mapping each input in the supplied composite space to the component layer it feeds.
End of explanation
flattened = pylearn2.models.mlp.FlattenerLayer(raw_layer=compositelayer)
Explanation: Unfortunately, it turns out we also have to put a FlattenerLayer around this so that the output of this layer will play nicely with the fully connected layer following this:
End of explanation
n_classes=121
main_mlp = None
main_mlp = pylearn2.models.mlp.MLP(
batch_size=128,
input_space=input_space,
input_source=['img_1','img_2'],
layers=[
flattened,
pylearn2.models.mlp.RectifiedLinear(
dim=1024,
max_col_norm=1.9,
layer_name='h5',
istdev=0.05,
W_lr_scale=0.25,
b_lr_scale=0.25),
pylearn2.models.mlp.Softmax(
n_classes=121,
max_col_norm=1.9365,
layer_name='y',
istdev=0.05,
W_lr_scale=0.25,
b_lr_scale=0.25
)
]
)
Explanation: Now we need to connect this composite layer to the rest of the network, which is a single fully connected layer and the softmax output layer. To do this, we instantiate another MLP object, in which the first layer is this composite layer. This is also where we use the composite input space we defined above.
End of explanation
import numpy as np
import neukrill_net.image_directory_dataset
import copy
reload(neukrill_net.image_directory_dataset)
class ParallelIterator(object):
def __init__(self, *args, **keyargs):
keyargs['rng'] = np.random.RandomState(42)
self.iterator_1 = neukrill_net.image_directory_dataset.FlyIterator(*args,**keyargs)
keyargs = copy.deepcopy(keyargs)
keyargs['rng'] = np.random.RandomState(42)
self.iterator_2 = neukrill_net.image_directory_dataset.FlyIterator(*args,**keyargs)
self.stochastic=False
self.num_examples = self.iterator_1.num_examples
def __iter__(self):
return self
def next(self):
# get a batch from both iterators:
Xbatch1,ybatch1 = self.iterator_1.next()
Xbatch2,ybatch2 = self.iterator_2.next()
assert np.allclose(ybatch1,ybatch2)
return Xbatch1,Xbatch2,ybatch1
class ParallelDataset(neukrill_net.image_directory_dataset.ListDataset):
def iterator(self, mode=None, batch_size=None, num_batches=None, rng=None,
data_specs=None, return_tuple=False):
if not num_batches:
num_batches = int(len(self.X)/batch_size)
iterator = ParallelIterator(dataset=self, batch_size=batch_size,
num_batches=num_batches,
final_shape=self.run_settings["final_shape"],
rng=None,mode=mode)
return iterator
import neukrill_net.augment
import os
dataset = ParallelDataset(
transformer=neukrill_net.augment.RandomAugment(
units='float',
rotate=[0,90,180,270],
rotate_is_resizable=0,
flip=1,
resize=final_shape,
normalise={'global_or_pixel':'global',
'mu': 0.957,
'sigma': 0.142}
),
settings_path=os.path.abspath("settings.json"),
run_settings_path=os.path.abspath("run_settings/replicate_8aug.json"),
force=True
)
Explanation: Creating the dataset
To test this model we need a dataset that's going to supply the input data in the correct format. This should be a tuple, returned by the iterator, containing the two 4D input batches along with the target batch. We can create this pretty easily by just making a Dataset that inherits our old ListDataset and creates an iterator that contains two FlyIterators.
End of explanation
iterator = dataset.iterator(mode='even_shuffled_sequential',batch_size=128)
X1,X2,y = iterator.next()
Explanation: Testing this new dataset iterator:
End of explanation
channels = None
for i in range(20):
    if not channels:
        # NB: on this first pass the second assignment overwrites the Iterator 1
        # image, so the layout starts with a lone image from Iterator 2
        channels = hl.Image(X1[i,:].squeeze(),group="Iterator 1")
        channels = hl.Image(X2[i,:].squeeze(),group="Iterator 2")
    else:
        channels += hl.Image(X1[i,:].squeeze(),group="Iterator 1")
        channels += hl.Image(X2[i,:].squeeze(),group="Iterator 2")
channels
Explanation: Plotting some of the images it produces side by side to make sure they're the same:
End of explanation
import pylearn2.training_algorithms.sgd
import pylearn2.training_algorithms.learning_rule
import pylearn2.costs.mlp.dropout
import pylearn2.costs.cost
import pylearn2.termination_criteria
algorithm = pylearn2.training_algorithms.sgd.SGD(
train_iteration_mode='even_shuffled_sequential',
monitor_iteration_mode='even_sequential',
batch_size=128,
learning_rate=0.1,
learning_rule= pylearn2.training_algorithms.learning_rule.Momentum(
init_momentum=0.5
),
monitoring_dataset={
'train':dataset,
'valid':ParallelDataset(
transformer=neukrill_net.augment.RandomAugment(
units='float',
rotate=[0,90,180,270],
rotate_is_resizable=0,
flip=1,
resize=final_shape,
normalise={'global_or_pixel':'global',
'mu': 0.957,
'sigma': 0.142}
),
settings_path=os.path.abspath("settings.json"),
run_settings_path=os.path.abspath("run_settings/replicate_8aug.json"),
force=True, training_set_mode='validation'
)
},
cost=pylearn2.costs.cost.SumOfCosts(
costs=[
pylearn2.costs.mlp.dropout.Dropout(
input_include_probs={'h5':0.5},
input_scales={'h5':2.0}),
pylearn2.costs.mlp.WeightDecay(coeffs={'parallel_conv':0.00005,
'h5':0.00005})
]
),
termination_criterion=pylearn2.termination_criteria.EpochCounter(max_epochs=500)
)
import pylearn2.train_extensions
import pylearn2.train_extensions.best_params
extensions = [
pylearn2.training_algorithms.learning_rule.MomentumAdjustor(
start=1,
saturate=200,
final_momentum=0.95
),
pylearn2.training_algorithms.sgd.LinearDecayOverEpoch(
start=1,
saturate=200,
decay_factor=0.025
),
pylearn2.train_extensions.best_params.MonitorBasedSaveBest(
channel_name='valid_y_nll',
save_path='/disk/scratch/neuroglycerin/models/parallel_interactive.pkl'
),
pylearn2.training_algorithms.sgd.MonitorBasedLRAdjuster(
high_trigger=1.0,
low_trigger=0.999,
grow_amt=1.012,
shrink_amt=0.986,
max_lr=0.4,
min_lr=0.00005,
channel_name='valid_y_nll'
)
]
Explanation: The single image from Iterator 2 at the start comes from the first Iterator 1 image being overwritten in the plotting loop above; otherwise the images pair up as expected, so the two iterators seem to be in sync.
Creating the rest
The rest of the train object stays the same, apart from the save path and that the algorithm will have to load one of these new ParallelDataset objects for its validation set. So, we're missing:
algorithm - contains validation set, which must be set up as a parallel dataset.
extensions - keeping these the same but changing save paths
It's worth noting that when we define the cost and the weight decay we have to address the new convolutional layers inside the composite layer.
End of explanation
import pylearn2.train
train = pylearn2.train.Train(
dataset=dataset,
model=main_mlp,
algorithm=algorithm,
extensions=extensions,
save_path='/disk/scratch/neuroglycerin/models/parallel_interactive_recent.pkl',
save_freq=1
)
Explanation: Assembling the full train object
We now have everything we need to make up our train object, so we can put it together and see how well it runs.
End of explanation
train.main_loop()
Explanation: We can live with that warning.
Now, attempting to run the model:
End of explanation |
9,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wright-Fisher model of mutation and random genetic drift
A Wright-Fisher model has a fixed population size N and discrete non-overlapping generations. Each generation, each individual has a random number of offspring whose mean is proportional to the individual's fitness. Each generation, mutation may occur.
Setup
Step1: Make population dynamic model
Basic parameters
Step2: Setup a population of sequences
Store this as a lightweight Dictionary that maps a string to a count. All the sequences together will have count N.
Step3: Add mutation
Mutations occur each generation in each individual in every basepair.
Step4: Walk through population and mutate basepairs. Use Poisson splitting to speed this up (you may be familiar with Poisson splitting from its use in the Gillespie algorithm).
In naive scenario A
Step5: Here we use Numpy's Poisson random number.
Step6: We need to get random haplotype from the population.
Step7: Here we use Numpy's weighted random choice.
Step8: Here, we take a supplied haplotype and mutate a site at random.
Step9: Putting things together, in a single mutation event, we grab a random haplotype from the population, mutate it, decrement its count, and then check if the mutant already exists in the population. If it does, increment this mutant haplotype; if it doesn't create a new haplotype of count 1.
Step10: To create all the mutations that occur in a single generation, we draw the total count of mutations and then iteratively add mutation events.
Step11: Add genetic drift
Given a list of haplotype frequencies currently in the population, we can take a multinomial draw to get haplotype counts in the following generation.
Step12: Here we use Numpy's multinomial random sample.
Step13: We then need to assign this new list of haplotype counts to the pop dictionary. To save memory and computation, if a haplotype goes to 0, we remove it entirely from the pop dictionary.
Step14: Combine and iterate
Each generation is simply a mutation step where a random number of mutations are thrown down, and an offspring step where haplotype counts are updated.
Step15: Can iterate this over a number of generations.
Step16: Record
We want to keep a record of past population frequencies to understand dynamics through time. At each step in the simulation, we append to a history object.
Step17: Analyze trajectories
Calculate diversity
Here, diversity in population genetics is usually shorthand for the statistic π, which measures pairwise differences between random individuals in the population. π is usually measured as substitutions per site.
Step18: First, we need to calculate the number of differences per site between two arbitrary sequences.
Step19: We calculate diversity as a weighted average between all pairs of haplotypes, weighted by pairwise haplotype frequency.
Step20: Plot diversity
Here, we use matplotlib for all Python plotting.
Step21: Here, we make a simple line plot using matplotlib's plot function.
Step22: Here, we style the plot a bit with x and y axes labels.
Step23: Analyze and plot divergence
In population genetics, divergence is generally the number of substitutions away from a reference sequence. In this case, we can measure the average distance of the population to the starting haplotype. Again, this will be measured in terms of substitutions per site.
Step24: Plot haplotype trajectories
We also want to directly look at haplotype frequencies through time.
Step25: We want to plot all haplotypes seen during the simulation.
Step26: Here is a simple plot of their overall frequencies.
Step27: We can use stackplot to stack these trajectoies on top of each other to get a better picture of what's going on.
Step28: Plot SNP trajectories
Step29: Find all variable sites.
Step30: Scale up
Here, we scale up to more interesting parameter values.
Step31: In this case there are $\mu$ = 0.01 mutations entering the population every generation.
Step32: And the population genetic parameter $\theta$, which equals $2N\mu$, is 1. | Python Code:
import numpy as np
import itertools
Explanation: Wright-Fisher model of mutation and random genetic drift
A Wright-Fisher model has a fixed population size N and discrete non-overlapping generations. Each generation, each individual has a random number of offspring whose mean is proportional to the individual's fitness. Each generation, mutation may occur.
Setup
End of explanation
pop_size = 100  # matches the initial haplotype counts set up below (40 + 30 + 30)
seq_length = 10  # matches the length of the haplotype strings used below
alphabet = ['A', 'T', 'G', 'C']
base_haplotype = "AAAAAAAAAA"
Explanation: Make population dynamic model
Basic parameters
End of explanation
pop = {}
pop["AAAAAAAAAA"] = 40
pop["AAATAAAAAA"] = 30
pop["AATTTAAAAA"] = 30
pop["AAATAAAAAA"]
Explanation: Setup a population of sequences
Store this as a lightweight Dictionary that maps a string to a count. All the sequences together will have count N.
End of explanation
mutation_rate = 0.0001 # per gen per individual per site
Explanation: Add mutation
Mutations occur each generation in each individual in every basepair.
End of explanation
def get_mutation_count():
mean = mutation_rate * pop_size * seq_length
return np.random.poisson(mean)
Explanation: Walk through population and mutate basepairs. Use Poisson splitting to speed this up (you may be familiar with Poisson splitting from its use in the Gillespie algorithm).
In the naive scenario A: take each element and check for each one whether an event occurs. For example, with 100 elements, each with a 1% chance, this requires 100 random numbers.
In the Poisson splitting scenario B: draw a Poisson random number for the total number of events, then distribute those events randomly. In the above example, this will most likely involve one random draw to see how many events occur and then a few more draws to see which elements are hit.
First off, we need to get the random number of total mutations, which is what the function above does.
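As a concrete, standalone illustration of the difference between scenarios A and B (using the 100-element, 1%-chance example above; this is not part of the simulation code):
n_elements, p = 100, 0.01

# Scenario A: one Bernoulli check per element, i.e. 100 random numbers
hits_naive = [i for i in range(n_elements) if np.random.random() < p]

# Scenario B: one Poisson draw for how many events occur, then place them
n_events = np.random.poisson(n_elements * p)
hits_split = np.random.randint(0, n_elements, size=n_events)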
End of explanation
get_mutation_count()
Explanation: Here we use Numpy's Poisson random number.
End of explanation
pop.keys()
[x/float(pop_size) for x in pop.values()]
def get_random_haplotype():
haplotypes = pop.keys()
frequencies = [x/float(pop_size) for x in pop.values()]
total = sum(frequencies)
frequencies = [x / total for x in frequencies]
return np.random.choice(haplotypes, p=frequencies)
Explanation: We need to get random haplotype from the population.
End of explanation
get_random_haplotype()
Explanation: Here we use Numpy's weighted random choice.
End of explanation
def get_mutant(haplotype):
site = np.random.randint(seq_length)
possible_mutations = list(alphabet)
possible_mutations.remove(haplotype[site])
mutation = np.random.choice(possible_mutations)
new_haplotype = haplotype[:site] + mutation + haplotype[site+1:]
return new_haplotype
get_mutant("AAAAAAAAAA")
Explanation: Here, we take a supplied haplotype and mutate a site at random.
End of explanation
def mutation_event():
haplotype = get_random_haplotype()
if pop[haplotype] > 1:
pop[haplotype] -= 1
new_haplotype = get_mutant(haplotype)
if new_haplotype in pop:
pop[new_haplotype] += 1
else:
pop[new_haplotype] = 1
mutation_event()
pop
Explanation: Putting things together, in a single mutation event, we grab a random haplotype from the population, mutate it, decrement its count, and then check if the mutant already exists in the population. If it does, increment this mutant haplotype; if it doesn't create a new haplotype of count 1.
End of explanation
def mutation_step():
mutation_count = get_mutation_count()
for i in range(mutation_count):
mutation_event()
mutation_step()
pop
Explanation: To create all the mutations that occur in a single generation, we draw the total count of mutations and then iteratively add mutation events.
End of explanation
def get_offspring_counts():
haplotypes = pop.keys()
frequencies = [x/float(pop_size) for x in pop.values()]
return list(np.random.multinomial(pop_size, frequencies))
Explanation: Add genetic drift
Given a list of haplotype frequencies currently in the population, we can take a multinomial draw to get haplotype counts in the following generation.
End of explanation
get_offspring_counts()
Explanation: Here we use Numpy's multinomial random sample.
End of explanation
def offspring_step():
counts = get_offspring_counts()
for (haplotype, count) in zip(pop.keys(), counts):
if (count > 0):
pop[haplotype] = count
else:
del pop[haplotype]
offspring_step()
pop
Explanation: We then need to assign this new list of haplotype counts to the pop dictionary. To save memory and computation, if a haplotype goes to 0, we remove it entirely from the pop dictionary.
End of explanation
def time_step():
mutation_step()
offspring_step()
Explanation: Combine and iterate
Each generation is simply a mutation step where a random number of mutations are thrown down, and an offspring step where haplotype counts are updated.
End of explanation
generations = 500
def simulate():
for i in range(generations):
time_step()
simulate()
pop
Explanation: Can iterate this over a number of generations.
End of explanation
pop = {"AAAAAAAAAA": pop_size}
history = []
def simulate():
clone_pop = dict(pop)
history.append(clone_pop)
for i in range(generations):
time_step()
clone_pop = dict(pop)
history.append(clone_pop)
simulate()
pop
history[0]
history[1]
history[2]
Explanation: Record
We want to keep a record of past population frequencies to understand dynamics through time. At each step in the simulation, we append to a history object.
End of explanation
pop
Explanation: Analyze trajectories
Calculate diversity
Here, diversity in population genetics is usually shorthand for the statistic π, which measures pairwise differences between random individuals in the population. π is usually measured as substitutions per site.
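Written out, this is the frequency-weighted average that get_diversity computes below:
$$\pi = \sum_{i}\sum_{j} x_i \, x_j \, \pi_{ij}$$
where $x_i$ is the frequency of haplotype $i$ and $\pi_{ij}$ is the per-site distance between haplotypes $i$ and $j$.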
End of explanation
def get_distance(seq_a, seq_b):
diffs = 0
length = len(seq_a)
assert len(seq_a) == len(seq_b)
for chr_a, chr_b in zip(seq_a, seq_b):
if chr_a != chr_b:
diffs += 1
return diffs / float(length)
get_distance("AAAAAAAAAA", "AAAAAAAAAB")
Explanation: First, we need to calculate the number of differences per site between two arbitrary sequences.
End of explanation
def get_diversity(population):
haplotypes = population.keys()
haplotype_count = len(haplotypes)
diversity = 0
for i in range(haplotype_count):
for j in range(haplotype_count):
haplotype_a = haplotypes[i]
haplotype_b = haplotypes[j]
frequency_a = population[haplotype_a] / float(pop_size)
frequency_b = population[haplotype_b] / float(pop_size)
frequency_pair = frequency_a * frequency_b
diversity += frequency_pair * get_distance(haplotype_a, haplotype_b)
return diversity
get_diversity(pop)
def get_diversity_trajectory():
trajectory = [get_diversity(generation) for generation in history]
return trajectory
get_diversity_trajectory()
Explanation: We calculate diversity as a weighted average between all pairs of haplotypes, weighted by pairwise haplotype frequency.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
Explanation: Plot diversity
Here, we use matplotlib for all Python plotting.
End of explanation
plt.plot(get_diversity_trajectory())
Explanation: Here, we make a simple line plot using matplotlib's plot function.
End of explanation
def diversity_plot():
mpl.rcParams['font.size']=14
trajectory = get_diversity_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("diversity")
plt.xlabel("generation")
diversity_plot()
Explanation: Here, we style the plot a bit with x and y axes labels.
End of explanation
def get_divergence(population):
haplotypes = population.keys()
divergence = 0
for haplotype in haplotypes:
frequency = population[haplotype] / float(pop_size)
divergence += frequency * get_distance(base_haplotype, haplotype)
return divergence
def get_divergence_trajectory():
trajectory = [get_divergence(generation) for generation in history]
return trajectory
get_divergence_trajectory()
def divergence_plot():
mpl.rcParams['font.size']=14
trajectory = get_divergence_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("divergence")
plt.xlabel("generation")
divergence_plot()
Explanation: Analyze and plot divergence
In population genetics, divergence is generally the number of substitutions away from a reference sequence. In this case, we can measure the average distance of the population to the starting haplotype. Again, this will be measured in terms of substitutions per site.
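In the same notation, the quantity computed by get_divergence above is
$$d = \sum_{i} x_i \, \delta(h_0, h_i)$$
where $h_0$ is the starting haplotype and $\delta$ is the per-site distance returned by get_distance.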
End of explanation
def get_frequency(haplotype, generation):
pop_at_generation = history[generation]
if haplotype in pop_at_generation:
return pop_at_generation[haplotype]/float(pop_size)
else:
return 0
get_frequency("AAAAAAAAAA", 4)
def get_trajectory(haplotype):
trajectory = [get_frequency(haplotype, gen) for gen in range(generations)]
return trajectory
get_trajectory("AAAAAAAAAA")
Explanation: Plot haplotype trajectories
We also want to directly look at haplotype frequencies through time.
End of explanation
def get_all_haplotypes():
haplotypes = set()
for generation in history:
for haplotype in generation:
haplotypes.add(haplotype)
return haplotypes
get_all_haplotypes()
Explanation: We want to plot all haplotypes seen during the simulation.
End of explanation
haplotypes = get_all_haplotypes()
for haplotype in haplotypes:
plt.plot(get_trajectory(haplotype))
plt.show()
colors = ["#781C86", "#571EA2", "#462EB9", "#3F47C9", "#3F63CF", "#447CCD", "#4C90C0", "#56A0AE", "#63AC9A", "#72B485", "#83BA70", "#96BD60", "#AABD52", "#BDBB48", "#CEB541", "#DCAB3C", "#E49938", "#E68133", "#E4632E", "#DF4327", "#DB2122"]
colors_lighter = ["#A567AF", "#8F69C1", "#8474D1", "#7F85DB", "#7F97DF", "#82A8DD", "#88B5D5", "#8FC0C9", "#97C8BC", "#A1CDAD", "#ACD1A0", "#B9D395", "#C6D38C", "#D3D285", "#DECE81", "#E8C77D", "#EDBB7A", "#EEAB77", "#ED9773", "#EA816F", "#E76B6B"]
Explanation: Here is a simple plot of their overall frequencies.
End of explanation
def stacked_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
haplotypes = get_all_haplotypes()
trajectories = [get_trajectory(haplotype) for haplotype in haplotypes]
plt.stackplot(range(generations), trajectories, colors=colors_lighter)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
stacked_trajectory_plot()
Explanation: We can use stackplot to stack these trajectoies on top of each other to get a better picture of what's going on.
End of explanation
def get_snp_frequency(site, generation):
minor_allele_frequency = 0.0
pop_at_generation = history[generation]
for haplotype in pop_at_generation.keys():
allele = haplotype[site]
frequency = pop_at_generation[haplotype] / float(pop_size)
if allele != "A":
minor_allele_frequency += frequency
return minor_allele_frequency
get_snp_frequency(3, 5)
def get_snp_trajectory(site):
trajectory = [get_snp_frequency(site, gen) for gen in range(generations)]
return trajectory
get_snp_trajectory(3)
Explanation: Plot SNP trajectories
End of explanation
def get_all_snps():
snps = set()
for generation in history:
for haplotype in generation:
for site in range(seq_length):
if haplotype[site] != "A":
snps.add(site)
return snps
def snp_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
snps = get_all_snps()
trajectories = [get_snp_trajectory(snp) for snp in snps]
data = []
for trajectory, color in itertools.izip(trajectories, itertools.cycle(colors)):
data.append(range(generations))
data.append(trajectory)
data.append(color)
plt.plot(*data)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
snp_trajectory_plot()
Explanation: Find all variable sites.
End of explanation
pop_size = 50
seq_length = 100
generations = 500
mutation_rate = 0.0001 # per gen per individual per site
Explanation: Scale up
Here, we scale up to more interesting parameter values.
End of explanation
seq_length * mutation_rate
Explanation: In this case there are $\mu$ = 0.01 mutations entering the population every generation.
End of explanation
2 * pop_size * seq_length * mutation_rate
base_haplotype = ''.join(["A" for i in range(seq_length)])
pop.clear()
del history[:]
pop[base_haplotype] = pop_size
simulate()
plt.figure(num=None, figsize=(14, 14), dpi=80, facecolor='w', edgecolor='k')
plt.subplot2grid((3,2), (0,0), colspan=2)
stacked_trajectory_plot(xlabel="")
plt.subplot2grid((3,2), (1,0), colspan=2)
snp_trajectory_plot(xlabel="")
plt.subplot2grid((3,2), (2,0))
diversity_plot()
plt.subplot2grid((3,2), (2,1))
divergence_plot()
Explanation: And the population genetic parameter $\theta$, which equals $2N\mu$, is 1.
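As a quick sanity check with the values above:
$$\theta = 2N\mu = 2 \times 50 \times (100 \times 0.0001) = 1$$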
End of explanation |
9,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to ITK Segmentation in SimpleITK Notebooks <a href="https
Step1: Thresholding
Thresholding is the most basic form of segmentation. It simply labels the pixels of an image based on the intensity range without respect to geometry or connectivity.
Step2: ITK has a number of histogram based automatic thresholding filters including Huang, MaximumEntropy, Triangle, and the popular Otsu's method. These methods create a histogram then use a heuristic to determine a threshold value.
Step3: Region Growing Segmentation
The first step of improvement upon the naive thresholding is a class of algorithms called region growing. This includes
Step4: Improving upon this is the ConfidenceConnected filter, which uses the initial seed or current segmentation to estimate the threshold range.
Step5: Fast Marching Segmentation
The FastMarchingImageFilter implements a fast marching solution to a simple level set evolution problem (eikonal equation). In this example, the speed term used in the differential equation is provided in the form of an image. The speed image is based on the gradient magnitude and mapped with the bounded reciprocal $1/(1+x)$.
Step6: The output of the FastMarchingImageFilter is a <b>time-crossing map</b> that indicates, for each pixel, how much time it would take for the front to arrive at the pixel location.
Step7: Level-Set Segmentation
There are a variety of level-set based segmentation filter available in ITK
Step8: Use the seed to estimate a reasonable threshold range. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider
import SimpleITK as sitk
# Download data to work on
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
from myshow import myshow, myshow3d
img_T1 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd"))
img_T2 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT2.nrrd"))
# To visualize the labels image in RGB with needs a image with 0-255 range
img_T1_255 = sitk.Cast(sitk.RescaleIntensity(img_T1), sitk.sitkUInt8)
img_T2_255 = sitk.Cast(sitk.RescaleIntensity(img_T2), sitk.sitkUInt8)
myshow3d(img_T1)
Explanation: Introduction to ITK Segmentation in SimpleITK Notebooks <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F300_Segmentation_Overview.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
<b>Goal</b>: To become familiar with basic segmentation algorithms available in ITK, and interactively explore their parameter space.
Image segmentation filters process an image to partition it into (hopefully) meaningful regions. The output is commonly an image of integers where each integer can represent an object. The value 0 is commonly used for the background, and 1 (sometimes 255) for a foreground object.
End of explanation
seg = img_T1 > 200
myshow(sitk.LabelOverlay(img_T1_255, seg), "Basic Thresholding")
seg = sitk.BinaryThreshold(
img_T1, lowerThreshold=100, upperThreshold=400, insideValue=1, outsideValue=0
)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Binary Thresholding")
Explanation: Thresholding
Thresholding is the most basic form of segmentation. It simply labels the pixels of an image based on the intensity range without respect to geometry or connectivity.
End of explanation
otsu_filter = sitk.OtsuThresholdImageFilter()
otsu_filter.SetInsideValue(0)
otsu_filter.SetOutsideValue(1)
seg = otsu_filter.Execute(img_T1)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Otsu Thresholding")
print(otsu_filter.GetThreshold())
Explanation: ITK has a number of histogram based automatic thresholding filters including Huang, MaximumEntropy, Triangle, and the popular Otsu's method. These methods create a histogram then use a heuristic to determine a threshold value.
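The other automatic methods follow the same pattern as Otsu; for example, a sketch using the Triangle method (the class name follows SimpleITK's usual FooThresholdImageFilter naming, so check it against your SimpleITK version):
triangle_filter = sitk.TriangleThresholdImageFilter()
triangle_filter.SetInsideValue(0)
triangle_filter.SetOutsideValue(1)
seg_triangle = triangle_filter.Execute(img_T1)
myshow(sitk.LabelOverlay(img_T1_255, seg_triangle), "Triangle Thresholding")
print(triangle_filter.GetThreshold())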
End of explanation
seed = (132, 142, 96)
seg = sitk.Image(img_T1.GetSize(), sitk.sitkUInt8)
seg.CopyInformation(img_T1)
seg[seed] = 1
seg = sitk.BinaryDilate(seg, [3] * 3)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Initial Seed")
seg = sitk.ConnectedThreshold(img_T1, seedList=[seed], lower=100, upper=190)
myshow(sitk.LabelOverlay(img_T1_255, seg), "Connected Threshold")
Explanation: Region Growing Segmentation
The first step of improvement upon the naive thresholding is a class of algorithms called region growing. This includes:
<ul>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ConnectedThresholdImageFilter.html">ConnectedThreshold</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ConfidenceConnectedImageFilter.html">ConfidenceConnected</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1VectorConfidenceConnectedImageFilter.html">VectorConfidenceConnected</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1NeighborhoodConnectedImageFilter.html">NeighborhoodConnected</a></li>
</ul>
Earlier we used 3D Slicer to determine that index: (132,142,96) was a good seed for the left lateral ventricle.
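NeighborhoodConnected, listed above but not shown here, works like ConnectedThreshold except that every voxel in a small neighbourhood around a candidate must also fall inside the intensity range, which suppresses single-voxel leaks. A sketch with an illustrative radius:
seg_nc = sitk.NeighborhoodConnected(img_T1, seedList=[seed], lower=100, upper=190,
                                    radius=[1, 1, 1], replaceValue=1)
myshow(sitk.LabelOverlay(img_T1_255, seg_nc), "NeighborhoodConnected")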
End of explanation
seg = sitk.ConfidenceConnected(
img_T1,
seedList=[seed],
numberOfIterations=1,
multiplier=2.5,
initialNeighborhoodRadius=1,
replaceValue=1,
)
myshow(sitk.LabelOverlay(img_T1_255, seg), "ConfidenceConnected")
img_multi = sitk.Compose(img_T1, img_T2)
seg = sitk.VectorConfidenceConnected(
img_multi,
seedList=[seed],
numberOfIterations=1,
multiplier=2.5,
initialNeighborhoodRadius=1,
)
myshow(sitk.LabelOverlay(img_T2_255, seg))
Explanation: Improving upon this is the ConfidenceConnected filter, which uses the initial seed or current segmentation to estimate the threshold range.
End of explanation
seed = (132, 142, 96)
feature_img = sitk.GradientMagnitudeRecursiveGaussian(img_T1, sigma=0.5)
speed_img = sitk.BoundedReciprocal(
feature_img
) # This is parameter free unlike the Sigmoid
myshow(speed_img)
Explanation: Fast Marching Segmentation
The FastMarchingImageFilter implements a fast marching solution to a simple level set evolution problem (eikonal equation). In this example, the speed term used in the differential equation is provided in the form of an image. The speed image is based on the gradient magnitude and mapped with the bounded reciprocal $1/(1+x)$.
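The "parameter free" comment in the code refers to the more common alternative, a sigmoid mapping of the gradient magnitude; a sketch with illustrative parameter values that would need tuning for this image:
sigmoid = sitk.SigmoidImageFilter()
sigmoid.SetAlpha(-1.0)  # negative alpha: high gradient maps to low speed (illustrative value)
sigmoid.SetBeta(10.0)   # illustrative value
sigmoid.SetOutputMinimum(0.0)
sigmoid.SetOutputMaximum(1.0)
speed_img_sigmoid = sigmoid.Execute(feature_img)
myshow(speed_img_sigmoid)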
End of explanation
fm_filter = sitk.FastMarchingBaseImageFilter()
fm_filter.SetTrialPoints([seed])
fm_filter.SetStoppingValue(1000)
fm_img = fm_filter.Execute(speed_img)
myshow(
sitk.Threshold(
fm_img,
lower=0.0,
upper=fm_filter.GetStoppingValue(),
outsideValue=fm_filter.GetStoppingValue() + 1,
)
)
def fm_callback(img, time, z):
seg = img < time
myshow(sitk.LabelOverlay(img_T1_255[:, :, z], seg[:, :, z]))
interact(
lambda **kwargs: fm_callback(fm_img, **kwargs),
time=FloatSlider(min=0.05, max=1000.0, step=0.05, value=100.0),
z=(0, fm_img.GetSize()[2] - 1),
)
Explanation: The output of the FastMarchingImageFilter is a <b>time-crossing map</b> that indicates, for each pixel, how much time it would take for the front to arrive at the pixel location.
End of explanation
seed = (132, 142, 96)
seg = sitk.Image(img_T1.GetSize(), sitk.sitkUInt8)
seg.CopyInformation(img_T1)
seg[seed] = 1
seg = sitk.BinaryDilate(seg, [3] * 3)
Explanation: Level-Set Segmentation
There are a variety of level-set based segmentation filter available in ITK:
<ul>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1GeodesicActiveContourLevelSetImageFilter.html">GeodesicActiveContour</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ShapeDetectionLevelSetImageFilter.html">ShapeDetection</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ThresholdSegmentationLevelSetImageFilter.html">ThresholdSegmentation</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1LaplacianSegmentationLevelSetImageFilter.html">LaplacianSegmentation</a></li>
<li><a href="http://www.itk.org/Doxygen/html/classitk_1_1ScalarChanAndVeseDenseLevelSetImageFilter.html">ScalarChanAndVese</a></li>
</ul>
There is also a <a href="http://www.itk.org/Doxygen/html/group__ITKLevelSetsv4.html">modular Level-set framework</a> which allows composition of terms and easy extension in C++.
First we create a label image from our seed.
End of explanation
stats = sitk.LabelStatisticsImageFilter()
stats.Execute(img_T1, seg)
factor = 3.5
lower_threshold = stats.GetMean(1) - factor * stats.GetSigma(1)
upper_threshold = stats.GetMean(1) + factor * stats.GetSigma(1)
print(lower_threshold, upper_threshold)
init_ls = sitk.SignedMaurerDistanceMap(seg, insideIsPositive=True, useImageSpacing=True)
lsFilter = sitk.ThresholdSegmentationLevelSetImageFilter()
lsFilter.SetLowerThreshold(lower_threshold)
lsFilter.SetUpperThreshold(upper_threshold)
lsFilter.SetMaximumRMSError(0.02)
lsFilter.SetNumberOfIterations(1000)
lsFilter.SetCurvatureScaling(0.5)
lsFilter.SetPropagationScaling(1)
lsFilter.ReverseExpansionDirectionOn()
ls = lsFilter.Execute(init_ls, sitk.Cast(img_T1, sitk.sitkFloat32))
print(lsFilter)
myshow(sitk.LabelOverlay(img_T1_255, ls > 0))
Explanation: Use the seed to estimate a reasonable threshold range.
End of explanation |
9,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Below image is from nd101, yet results in negative values?
Normalisation, Lesson 1.19 Intro to Tensor Flow, Normalised inputs
Alternate source
https
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
import math
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
# Explore the dataset
batch_id = 2
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
# Explore the dataset
batch_id = 3
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
# Explore the dataset
batch_id = 4
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
# Explore the dataset
batch_id = 5
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
# Explore the dataset
# Note: CIFAR-10 only has five training batches (data_batch_1 to data_batch_5),
# so batch_id must stay in the range 1-5; batch 6 does not exist.
# batch_id = 6
# sample_id = 5
# helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
min_x = np.min(x)
max_x = np.max(x)
#print("min x =", min_x,"& max x =", max_x)
x_prime = list()
for i in x:
#print(i)
x_prime.append((i-min_x) / (max_x-min_x))
#print(np.array(x_prime))
# I guess x / 255 would always work as its highly likely min is always 0 and max is always 255
# I wonder if we need to normalise each point on its own with its own min/max or if we do this
# for the entire array?
return np.array(x_prime)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
The normalisation formula from the ND101 material (Normalisation, Lesson 1.19, Intro to TensorFlow, Normalised inputs) results in negative values, so min-max rescaling is used here instead.
Alternate sources:
https://en.wikipedia.org/wiki/Feature_scaling or https://stats.stackexchange.com/questions/70801/how-to-normalize-data-to-0-1-range
Rescaling
The simplest method is rescaling the range of features to scale the range in [0, 1] or [−1, 1]. Selecting the target range depends on the nature of the data. The general formula is given as:
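$$x' = \frac{x - \min(x)}{\max(x) - \min(x)}$$
which is what the normalize implementation above computes using the batch-wide minimum and maximum.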
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# this is lifted from the class. Doesn't quite work.
#from sklearn import preprocessing
#
#print(x)
#y = np.zeros((len(x), 10))
#lb = preprocessing.LabelBinarizer()
#lb.fit(x)
#return lb.transform(x)
#from sklearn.preprocessing import OneHotEncoder
#print(x)
#enc = OneHotEncoder(10)
#y = enc.fit_transform(np.array(x).reshape(-1, 1)).toarray()
#print(y)
#return y
y = np.zeros((len(x), 10))
#print(x)
#print(y)
for i in range(len(x)):
y[i,x[i]] = 1
# print(i, x[i], y[i,x[i]])
#print(y)
return y
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
One-Hot Encoding, Intro to TensorFlow Lesson 1.14
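For reference, the commented-out LabelBinarizer attempt above most likely misbehaves because it is re-fit on every batch of labels; fitting it once on all ten classes, or simply indexing an identity matrix, keeps the encoding consistent between calls. A small sketch of both options:
from sklearn import preprocessing

# fit once, outside the function, on the full set of possible labels
lb = preprocessing.LabelBinarizer()
lb.fit(list(range(10)))

def one_hot_encode_lb(x):
    return lb.transform(x)

def one_hot_encode_eye(x):
    # identity-matrix row lookup, no sklearn dependency
    return np.eye(10)[np.array(x)]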
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
import inspect
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
#print("neural_net_image_input =", image_shape)
return tf.placeholder(tf.float32, shape=[None, *image_shape], name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
#print("neural_net_label_input =", n_classes)
return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
#print("neural_net_keep_prob_input")
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
'''
# These are the two seprate examples from Lesson 4.31 and 4.33 from Convolutional Networks
def conv2d(input):
# Filter (weights and bias)
F_W = tf.Variable(tf.truncated_normal((2, 2, 1, 3)))
F_b = tf.Variable(tf.zeros(3))
strides = [1, 2, 2, 1]
padding = 'VALID'
return tf.nn.conv2d(input, F_W, strides, padding) + F_b
def maxpool(input):
# Set the ksize (filter size) for each dimension (batch_size, height, width, depth)
ksize = [1, 2, 2, 1]
# Set the stride for each dimension (batch_size, height, width, depth)
strides = [1, 2, 2, 1]
# set the padding, either 'VALID' or 'SAME'.
padding = 'VALID'
# https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#max_pool
return tf.nn.max_pool(input, ksize, strides, padding)
'''
#print(x_tensor, "conv num output", conv_num_outputs, "conv_ksize", conv_ksize, "conv_strides", conv_strides,
# "pool_ksize", pool_ksize, "pool_strides", pool_strides)
# Filter (weights and bias)
conv_filter_weights = tf.Variable(tf.truncated_normal([conv_ksize[0],
conv_ksize[1],
x_tensor.get_shape().as_list()[-1],
conv_num_outputs],
stddev=0.1))
conv_filter_bias = tf.Variable(tf.zeros(conv_num_outputs, dtype=tf.float32))
# Create Conv Layer
conv_layer = tf.nn.conv2d(x_tensor, conv_filter_weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding = 'SAME')
# Add Bias
conv_layer = tf.nn.bias_add(conv_layer, conv_filter_bias)
conv_layer = tf.nn.relu(conv_layer)
# Apply Max pooling
max_pooling_layer = tf.nn.max_pool(conv_layer,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
return max_pooling_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, tf.nn.relu)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
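# A manual sketch of the output layer: it differs from the fully connected sketch above only in
# dropping the activation, as the note requires (output_manual is an illustrative name).
import tensorflow as tf

def output_manual(x_tensor, num_outputs):
    n_inputs = x_tensor.get_shape().as_list()[1]
    weight = tf.Variable(tf.truncated_normal([n_inputs, num_outputs], stddev=0.05))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.add(tf.matmul(x_tensor, weight), bias)  # no activation: these are raw logits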
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
    # Should I attempt Siraj's VGG16 conv model? Given we have merged conv2d and max pooling into
    # a single function, the full VGG block structure below is not possible the way this project is set up. Another day.
# Conv block 1 with 064 output filters - Conv2d > Conv2d > MaxPooling2D
# Conv block 2 with 128 output filters - Conv2d > Conv2d > MaxPooling2D
# Conv block 3 with 256 output filters - Conv2d > Conv2d > Conv2d > MaxPooling2d
# Conv block 4 with 512 output filters - Conv2d > Conv2d > Conv2d > MaxPooling2d
# Fully-connected classifier - Flatten > Dense > Dense > Dense
'''
model_vgg = Sequential()
model_vgg.add(ZeroPadding2D((1, 1), input_shape=(img_width, img_height,3)))
model_vgg.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_2'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_2'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
'''
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize_1 = (5,5)
conv_strides_1 = (1,1)
pool_ksize_1 = (2,2)
pool_strides_1 = (2,2)
conv_ksize_2 = (3,3)
conv_strides_2 = (1,1)
pool_ksize_2 = (2,2)
pool_strides_2 = (2,2)
conv_ksize_3 = (2,2)
conv_strides_3 = (1,1)
pool_ksize_3 = (2,2)
pool_strides_3 = (2,2)
block_1 = conv2d_maxpool( x, 32, conv_ksize_1, conv_strides_1, pool_ksize_1, pool_strides_1)
block_2 = conv2d_maxpool(block_1, 64, conv_ksize_2, conv_strides_2, pool_ksize_2, pool_strides_2)
block_3 = conv2d_maxpool(block_2, 96, conv_ksize_3, conv_strides_3, pool_ksize_3, pool_strides_3)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat_world = flatten(block_3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc_1 = fully_conn(flat_world, 30)
fc_1 = tf.nn.dropout(fc_1, keep_prob)
#fc_2 = fully_conn(fc_1, 20)
#fc_2 = tf.nn.dropout(fc_2, keep_prob)
#fc_3 = fully_conn(fc_2, 512)
#fc_3 = tf.nn.dropout(fc_3, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
# TODO: return output
return output(fc_1, 10)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x:feature_batch, y:label_batch, keep_prob:keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x:feature_batch, y:label_batch, keep_prob:1.0})
    valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss,valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 20
batch_size = 128
keep_probability = .75
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
9,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic regression
In this example, I classify a popular database of seeds of 2 categories using a logistic regression algorithm.
This is a first simple example to show how to apply learning algorithms to a dataset.
Step7: Logistic regression model by Rasbt
This is a sample logistic regression model found on the internet.
Several implementations exist; I recommend using a more robust implementation for real problems, such as the one in sklearn.
Step8: Pandas
Use pandas to read a database.
In this example, I use the popular seeds database at the UCI site.
In the UCI web site (https
Step9: We take only two classes from the dataset and we standardize features.
Standardization is a common practice in machine learning algorithms to give the same weight to all features.
To standardize the values of a given feature, just use | Python Code:
import warnings # avoid a bunch of warnings that we'll ignore
warnings.filterwarnings("ignore")
Explanation: Logistic regression
In this example, I classify a popular database of seeds of 2 categories using a logistic regression algorithm.
This is a first simple example to show how to apply learning algorithms to a dataset.
End of explanation
class LogisticRegression(object):
    LogisticRegression classifier from the Rasbt machine learning book.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
cost_ : list
Cost in every epoch.
def __init__(self, eta=0.01, n_iter=50):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
y_val = self.activation(X)
errors = (y - y_val)
neg_grad = X.T.dot(errors)
self.w_[1:] += self.eta * neg_grad
self.w_[0] += self.eta * errors.sum()
self.cost_.append(self._logit_cost(y, self.activation(X)))
return self
def _logit_cost(self, y, y_val):
logit = -y.dot(np.log(y_val)) - ((1 - y).dot(np.log(1 - y_val)))
return logit
def _sigmoid(self, z):
return 1.0 / (1.0 + np.exp(-z))
def net_input(self, X):
Calculate net input
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
Activate the logistic neuron
z = self.net_input(X)
return self._sigmoid(z)
def predict_proba(self, X):
Predict class probabilities for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
Class 1 probability : float
        return self.activation(X)
def predict(self, X):
Predict class labels for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
class : int
Predicted class label.
# equivalent to np.where(self.activation(X) >= 0.5, 1, 0)
return np.where(self.net_input(X) >= 0.0, 1, 0)
Explanation: Logistic regression model by Rasbt
This is a sample logistic regression model found on the internet.
Several implementations exist; I recommend using a more robust implementation for real problems, such as the one in sklearn.
End of explanation
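# For comparison, a minimal sketch of the more robust scikit-learn estimator mentioned above
# (default solver and settings assumed; the alias avoids clashing with the class defined earlier).
from sklearn.linear_model import LogisticRegression as SkLogisticRegression

sk_lr = SkLogisticRegression()
# sk_lr.fit(X_std, y) and sk_lr.predict(X_std) would mirror the fit/predict API of the class above.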
#Import data from files
import pandas as pd
#I use this dataset because it has clearly separated categories.
#Read the database using pandas,
#Note that bad lines are omitted with error_bad_lines=False
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/00236/seeds_dataset.txt', header=None, sep="\t", error_bad_lines=False)
#The headers are not given in the dataset, so we give them afterwords:
#1. area A,
#2. perimeter P,
#3. compactness C = 4*pi*A/P^2,
#4. length of kernel,
#5. width of kernel,
#6. asymmetry coefficient
#7. length of kernel groove.
#8. Class: 1=Kama, 2=Rosa, 3=Canadian
df.columns = ["area","perimeter","compactness","kernel-length","kernel-width",
"asymetry","kernel-groove-length","class"]
#This shows the header of the database:
df.head()
Explanation: Pandas
Use pandas to read a database.
In this example, I use the popular seeds database at the UCI site.
In the UCI web site (https://archive.ics.uci.edu/ml/datasets.html), one can find many useful academic and real databases.
End of explanation
#In the database there are 3 classes of seeds:
#We will just focus on two classes: 2 and 3:
df=df.loc[(df["class"]==2) | (df["class"]==3)]
import numpy as np
#This sets class=2 to 0 and 3 to 1:
y = df.loc[:,'class']
y = np.where(y == 2 , 0, 1)
#Extract some categories:
X=df.loc[:,["area","perimeter"]]
#This is to convert the pandas DataFrame into a numpy matrix to later standardize:
X=X.as_matrix()
# standardize features
X_std = np.copy(X)
X_std[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X_std[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
%matplotlib inline
import matplotlib.pyplot as plt
lr = LogisticRegression(n_iter=5000, eta=0.1).fit(X_std, y)
plt.plot(range(1, len(lr.cost_) + 1), np.log10(lr.cost_))
plt.xlabel('Samples')
plt.ylabel('Cost')
plt.title('Logistic Regression - Learning rate 0.01')
plt.tight_layout()
plt.show()
#You can see that the logistic regression algorithm converges nicely:
#Finally we plot the decision boundary:
plot_decision_regions(X_std, y, classifier=lr)
plt.title('Logistic Regression - Gradient Descent')
plt.xlabel('Area [standardized]')
plt.ylabel('Perimeter [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
Explanation: We take only two classes from the dataset and we standardize features.
Standardization is a common practice in machine learning algorithms to give the same weight to all features.
To standardize the values of a given feature, just use:
X_i = (X_i - M) / D
Where X_i is a given entry, M is the statistical mean and D is the standard deviation (https://en.wikipedia.org/wiki/Standard_deviation).
These functions are provided in numpy: see mean() and std().
End of explanation |
9,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 12 - Bayesian Approaches to Testing a Point ("Null") Hypothesis
12.2.2 - Are different groups equal or not?
Step1: Data
Using R, I executed lines 18-63 from the script OneOddGroupModelComp2E.R to generate the exact same data used in the book. The script can be downloaded from the book's website. After executing the lines, the List object dataList in R contains five elements
Step2: 12.2.2 - Are different groups equal or not?
Given the data, how credible is it that the 4 types of background music influence the ability to recall words
differently?
Step3: Note
Step4: Figure 12.5 | Python Code:
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import theano.tensor as tt
from matplotlib import gridspec
%matplotlib inline
plt.style.use('seaborn-white')
color = '#87ceeb'
%load_ext watermark
%watermark -p pandas,numpy,pymc3,matplotlib,seaborn,theano
Explanation: Chapter 12 - Bayesian Approaches to Testing a Point ("Null") Hypothesis
12.2.2 - Are different groups equal or not?
End of explanation
df = pd.read_csv('data/background_music.csv', dtype={'CondOfSubj':'category'})
# Mapping the condition descriptions to the condition codes. Just for illustrative purposes.
bgmusic = {0:'Das Kruschke', 1:'Mozart', 2:'Bach', 3:'Beethoven'}
df['CondText'] = df.CondOfSubj.cat.codes.map(bgmusic)
cond_idx = df.CondOfSubj.cat.codes.values
cond_codes = df.CondOfSubj.cat.categories
nCond = cond_codes.size
nSubj = df.index.size
df.info()
df.groupby('CondOfSubj').head(3)
Explanation: Data
Using R, I executed lines 18-63 from the script OneOddGroupModelComp2E.R to generate the exact same data used in the book. The script can be downloaded from the book's website. After executing the lines, the List object dataList in R contains five elements:
1. nCond: A scalar value (4) representing the number of conditions (background music types).
2. nSubj: A scalar value (80) representing the number of subjects.
3. CondOfSubj: A vector representing the condition (1, 2, 3 or 4) of a subject during a test.
4. nTrlOfSubj: A vector with the number of trials/words per subject (20 for all subjects).
5. nCorrOfSubj: A vector with number of correct recalls per subject.
I exported the last three elements of dataList to a csv file using the following command in R:
write.csv(data.frame(dataList[c(3:5)]), file='background_music.csv', row.names=FALSE)
End of explanation
# The means as mentioned in section 12.2.2
df.groupby('CondText', sort=False)['nCorrOfSubj'].mean()
Explanation: 12.2.2 - Are different groups equal or not?
Given the data, how credible is it that the 4 types of background music influence the ability to recall words
differently?
End of explanation
with pm.Model() as model_1:
# constants
aP, bP = 1., 1.
# Pseudo- and true priors for model 1.
a0 = tt.as_tensor([.48*500, aP])
b0 = tt.as_tensor([(1-.48)*500, bP])
# True and pseudopriors for model 0
a = tt.as_tensor(np.c_[np.tile(aP, 4), [(.40*125), (.50*125), (.51*125), (.52*125)]])
b = tt.as_tensor(np.c_[np.tile(bP, 4), [(1-.40)*125, (1-.50)*125, (1-.51)*125, (1-.52)*125]])
# Prior on model index [0,1]
m_idx = pm.Categorical('m_idx', np.asarray([.5, .5]))
# Priors on concentration parameters
kappa_minus2 = pm.Gamma('kappa_minus2', 2.618, 0.0809, shape=nCond)
kappa = pm.Deterministic('kappa', kappa_minus2 +2)
# omega0
omega0 = pm.Beta('omega0', a0[m_idx], b0[m_idx])
# omega (condition specific)
omega = pm.Beta('omega', a[:,m_idx], b[:,m_idx], shape=nCond)
# Use condition specific omega when m_idx = 0, else omega0
aBeta = pm.math.switch(pm.math.eq(m_idx, 0), omega * (kappa-2)+1, omega0 * (kappa-2)+1)
bBeta = pm.math.switch(pm.math.eq(m_idx, 0), (1-omega) * (kappa-2)+1, (1-omega0) * (kappa-2)+1)
# Theta
theta = pm.Beta('theta', aBeta[cond_idx], bBeta[cond_idx], shape=nSubj)
# Likelihood
y = pm.Binomial('y', n=df.nTrlOfSubj.values, p=theta, observed=df.nCorrOfSubj)
pm.model_to_graphviz(model_1)
with model_1:
trace1 = pm.sample(5000, target_accept=.95)
pm.traceplot(trace1);
Explanation: Note: in contrast to the R output in the book, the parameters in PyMC3 (like $\omega$ and model index) are indexed starting with 0.
Model 0 = condition specific $\omega_c$
Model 1 = same $\omega$ for all conditions
End of explanation
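# A quick, illustrative check of which model the sampled index favors (not a figure from the book),
# using the trace drawn above.
import numpy as np

p_model1 = np.mean(trace1['m_idx'])  # fraction of samples with model index 1 (same omega for all conditions)
p_model0 = 1 - p_model1              # model index 0 (condition-specific omegas)
print('P(model 0 | data) = {:.3f}, P(model 1 | data) = {:.3f}'.format(p_model0, p_model1))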
fig = plt.figure(figsize=(12,8))
# Define gridspec
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0,0])
ax2 = plt.subplot(gs[0,1])
ax3 = plt.subplot(gs[0,2])
ax4 = plt.subplot(gs[1,0])
ax5 = plt.subplot(gs[1,1])
ax6 = plt.subplot(gs[1,2])
ax7 = plt.subplot(gs[2,:])
# Group the first six axes in a list for easier access in loop below
axes = [ax1, ax2, ax3, ax4, ax5, ax6]
# Differences of posteriors to be displayed: omega x - omega y
x = [0,0,0,1,1,2]
y = [1,2,3,2,3,3]
# Plot histograms
for ax, a, b in zip(axes, x, y):
diff = trace1['omega'][:,a]-trace1['omega'][:,b]
pm.plot_posterior(diff, ref_val=0, point_estimate='mode', color=color, ax=ax)
ax.set_xlabel('$\omega_{}$ - $\omega_{}$'.format(a,b), fontdict={'size':18})
ax.xaxis.set_ticks([-.2, -.1, 0.0, 0.1, 0.2])
# Plot trace values of model index (0, 1)
ax7.plot(np.arange(1, len(trace1['m_idx'])+1),trace1['m_idx'], color=color, linewidth=4)
ax7.set_xlabel('Step in Markov chain', fontdict={'size':14})
ax7.set_ylabel('Model Index (0, 1)', fontdict={'size':14})
ax7.set_ylim(-0.05,1.05)
fig.tight_layout()
Explanation: Figure 12.5
End of explanation |
9,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
troubleshooting optimization performance in wobble
Step1: viewing more optimization info
Step2: toggle on the save_history keyword (which is False by default) to generate a wobble.History object when optimizing the order -- this object will keep track of the best-fit parameters and goodness-of-fit metric at each step of the optimization
Step3: The wobble.History class includes several convenience functions for plotting - here are a couple of examples
Step4: tuneable knobs in wobble
Step5: Do the optimizer learning rates work well?
wobble uses the Tensorflow implementation of the Adam optimizer, a gradient-based method. The performance of Adam can be sensitive to the learning rate used. Basically, a small learning rate means that the optimizer takes small steps along the gradient descent with each iteration, which can be inefficient and take a long time. On the other hand, a large learning rate leads to large steps which can overshoot, leading to the "ringing" effect seen in some of the previous plots.
The learning rates can be tuned with keyword arguments at the stage of adding the components. This is because each component - and within the component, each set of variables (spectral template, RVs, basis vectors, basis weights) - have independent optimizer instances with their own learning rate. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import warnings
with warnings.catch_warnings(): # suppress annoying TensorFlow FutureWarnings
warnings.filterwarnings("ignore",category=FutureWarning)
import wobble
Explanation: troubleshooting optimization performance in wobble
End of explanation
data = wobble.Data('../data/51peg_e2ds.hdf5')
results = wobble.Results(data)
r = 67 # index into data.orders for the desired order
model = wobble.Model(data, results, r)
model.add_star('star')
model.add_telluric('tellurics')
Explanation: viewing more optimization info:
set up the basics (see demo notebook for more explanation):
End of explanation
history = wobble.optimize_order(model, niter=40, save_history=True, rv_uncertainties=False)
Explanation: toggle on the save_history keyword (which is False by default) to generate a wobble.History object when optimizing the order -- this object will keep track of the best-fit parameters and goodness-of-fit metric at each step of the optimization:
End of explanation
history.plot_nll();
template_ani_star = history.plot_template(0, nframes=50)
from IPython.display import HTML
HTML(template_ani_star.to_html5_video())
Explanation: The wobble.History class includes several convenience functions for plotting - here are a couple of examples:
End of explanation
model = wobble.Model(data, results, r)
model.add_star('star')
model.add_telluric('tellurics')
history = wobble.optimize_order(model, niter=60, save_history=True, rv_uncertainties=False)
Explanation: tuneable knobs in wobble:
What if the optimization doesn't appear to be performing well? A few things to check:
- Is the optimization being run long enough?
If the -log(likelihood) graph shows that the fit is still actively improving during the last optimizer iterations, it might need to be iterated further. This can be fixed by changing the niter keyword:
End of explanation
# print the current (default) settings:
for c in model.components:
if not c.template_fixed:
print('template learning rate for {0}: {1:.0e}'.format(c.name, c.learning_rate_template))
if not c.rvs_fixed:
        print('RVs learning rate for {0}: {1:.0e}'.format(c.name, c.learning_rate_rvs))  # assuming the component exposes learning_rate_rvs
model = wobble.Model(data, results, r)
model.add_star('star', learning_rate_template=0.001)
model.add_telluric('tellurics', learning_rate_template=0.001)
history = wobble.optimize_order(model, niter=60, save_history=True, rv_uncertainties=False)
Explanation: Do the optimizer learning rates work well?
wobble uses the Tensorflow implementation of the Adam optimizer, a gradient-based method. The performance of Adam can be sensitive to the learning rate used. Basically, a small learning rate means that the optimizer takes small steps along the gradient descent with each iteration, which can be inefficient and take a long time. On the other hand, a large learning rate leads to large steps which can overshoot, leading to the "ringing" effect seen in some of the previous plots.
The learning rates can be tuned with keyword arguments at the stage of adding the components. This is because each component - and within the component, each set of variables (spectral template, RVs, basis vectors, basis weights) - have independent optimizer instances with their own learning rate.
End of explanation |
9,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following code will print the prime numbers between 1 and 100. Modify the code so it prints every other prime number from 1 to 100
Original Code
Step1: Modified Code
Step2: Extra Credit | Python Code:
for num in range(1,101): # for-loop through the numbers
prime = True # boolean flag to check the number for being prime
for i in range(2,num): # for-loop to check for "primeness" by checking for divisors other than 1
if (num%i==0): # logical test for the number having a divisor other than 1 and itself
prime = False # if there's a divisor, the boolean value gets flipped to False
if prime: # if prime is still True after going through all numbers from 1 - 100, then it gets printed
print(num)
Explanation: The following code will print the prime numbers between 1 and 100. Modify the code so it prints every other prime number from 1 to 100
Original Code
End of explanation
count=0 # We take a count variable that counts the number of primes found in the loop.
for num in range(1,101):
prime = True
for i in range(2,num):
if (num%i==0):
prime = False #
if prime:
count=count+1 #If the count variable is not divisible by two, only then the number is printed.
if (count%2!=0): #Thereby printing only every other prime number.
print(num)
Explanation: Modified Code
End of explanation
for num in range(1,101):
if all(num%i!=0 for i in range(2,num)):
print(num)
Explanation: Extra Credit: Can you write a procedure that runs faster than the one above?
End of explanation |
9,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<a href="https
Step1: Running k-means
Setting up $k$-means works exactly the same as in the previous examples. We tell the
algorithm to perform at most 10 iterations and stop the process if our prediction of the
cluster centers does not improve within a distance of 1.0
Step2: Then we apply $k$-means to the data as we did before. Since there are 10 different digits (0-9),
we tell the algorithm to look for 10 distinct clusters
Step3: And done!
Similar to the $N \times 3$ matrix that represented different RGB colors, this time, the centers array
consists of $N \times 8 \times 8$ center images, where $N$ is the number of clusters. Therefore, if we want
to plot the centers, we have to reshape the centers matrix back into 8 x 8 images
Step4: Look familiar?
Remarkably, $k$-means was able to partition the digit images not just into any 10 random
clusters, but into the digits 0-9! In order to find out which images were grouped into which
clusters, we need to generate a labels vector as we know it from supervised learning
problems
Step5: Then we can calculate the performance of the algorithm using scikit-learn's
accuracy_score metric
Step6: Remarkably, $k$-means achieved 78.4% accuracy without knowing the first thing about the
labels of the original images!
We can gain more insights about what went wrong and how by looking at the confusion
matrix. The confusion matrix is a 2D matrix $C$, where every element $C_{i,j}$ is equal to the
number of observations known to be in group (or cluster) $i$, but predicted to be in group $j$.
Thus, all elements on the diagonal of the matrix represent data points that have been
correctly classified (that is, known to be in group $i$ and predicted to be in group $i$). Off-diagonal
elements show misclassifications.
In scikit-learn, creating a confusion matrix is essentially a one-liner | Python Code:
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Compressing Color Spaces Using k-Means | Contents | Implementing Agglomerative Hierarchical Clustering >
Classifying handwritten digits using k-means
Although the last application was a pretty creative use of $k$-means, we can do better still.
We have previously discussed k-means in the context of unsupervised learning, where we
tried to discover some hidden structure in the data.
However, doesn't the same concept apply to most classification tasks? Let's say our task was
to classify handwritten digits. Don't most zeros look similar, if not the same? And don't all
zeros look categorically different from all possible ones? Isn't this exactly the kind of
"hidden structure" we set out to discover with unsupervised learning? Doesn't this mean we
could use clustering for classification as well?
Let's find out together. In this section, we will attempt to use k-means to try and classify
handwritten digits. In other words, we will try to identify similar digits without using the
original label information.
Loading the dataset
From the earlier chapters, you might recall that scikit-learn provides a whole range of
handwritten digits via its load_digits utility function. The dataset consists of 1,797
samples with 64 features each, where each of the features has the brightness of one pixel in
an 8 x 8 image:
End of explanation
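# To get a feel for the data first, a quick illustrative preview of a few samples
# (digits.images holds the same data as 8 x 8 arrays).
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 5, figsize=(8, 2))
for i, axi in enumerate(ax.flat):
    axi.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
    axi.set(xticks=[], yticks=[], title=str(digits.target[i]))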
import cv2
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
flags = cv2.KMEANS_RANDOM_CENTERS
Explanation: Running k-means
Setting up $k$-means works exactly the same as in the previous examples. We tell the
algorithm to perform at most 10 iterations and stop the process if our prediction of the
cluster centers does not improve within a distance of 1.0:
End of explanation
import numpy as np
compactness, clusters, centers = cv2.kmeans(digits.data.astype(np.float32), 10, None, criteria, 10, flags)
Explanation: Then we apply $k$-means to the data as we did before. Since there are 10 different digits (0-9),
we tell the algorithm to look for 10 distinct clusters:
End of explanation
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
fig, ax = plt.subplots(2, 5, figsize=(10, 4))
centers = centers.reshape(10, 8, 8)
for axi, center in zip(ax.flat, centers):
axi.set(xticks=[], yticks=[])
axi.imshow(center, interpolation='nearest', cmap=plt.cm.binary)
plt.savefig('digits.png')
Explanation: And done!
Similar to the $N \times 3$ matrix that represented different RGB colors, this time, the centers array
consists of $N \times 8 \times 8$ center images, where $N$ is the number of clusters. Therefore, if we want
to plot the centers, we have to reshape the centers matrix back into 8 x 8 images:
End of explanation
from scipy.stats import mode
labels = np.zeros_like(clusters.ravel())
for i in range(10):
mask = (clusters.ravel() == i)
labels[mask] = mode(digits.target[mask])[0]
Explanation: Look familiar?
Remarkably, $k$-means was able to partition the digit images not just into any 10 random
clusters, but into the digits 0-9! In order to find out which images were grouped into which
clusters, we need to generate a labels vector as we know it from supervised learning
problems:
End of explanation
from sklearn.metrics import accuracy_score
accuracy_score(digits.target, labels)
Explanation: Then we can calculate the performance of the algorithm using scikit-learn's
accuracy_score metric:
End of explanation
from sklearn.metrics import confusion_matrix
confusion_matrix(digits.target, labels)
Explanation: Remarkably, $k$-means achieved 78.4% accuracy without knowing the first thing about the
labels of the original images!
We can gain more insights about what went wrong and how by looking at the confusion
matrix. The confusion matrix is a 2D matrix $C$, where every element $C_{i,j}$ is equal to the
number of observations known to be in group (or cluster) $i$, but predicted to be in group $j$.
Thus, all elements on the diagonal of the matrix represent data points that have been
correctly classified (that is, known to be in group $i$ and predicted to be in group $i$). Off-diagonal
elements show misclassifications.
In scikit-learn, creating a confusion matrix is essentially a one-liner:
End of explanation |
9,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to implement different image models on MNIST using Estimator.
Note the MODEL_TYPE; change it to try out different models
Step1: Run as a Python module
In the previous notebook (mnist_linear.ipynb) we ran our code directly from the notebook.
Now since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The model.py and task.py containing the model code are in <a href="mnistmodel/trainer">mnistmodel/trainer</a>
Complete the TODOs in model.py before proceeding!
Once you've completed the TODOs, set MODEL_TYPE and run it locally for a few steps to test the code.
Step2: Now, let's do it on Cloud ML Engine so we can train on GPU
Step3: Monitoring training with TensorBoard
Use this cell to launch tensorboard
Step4: Here are my results
Step5: To predict with the model, let's take one of the example images.
Step6: Send it to the prediction service | Python Code:
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "linear" # "linear", "dnn", "dnn_dropout", or "cnn"
# do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "1.13" # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: MNIST Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to implement different image models on MNIST using Estimator.
Note the MODEL_TYPE; change it to try out different models
End of explanation
%%bash
rm -rf mnistmodel.tar.gz mnist_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
-- \
--output_dir=${PWD}/mnist_trained \
--train_steps=100 \
--learning_rate=0.01 \
--model=$MODEL_TYPE
Explanation: Run as a Python module
In the previous notebook (mnist_linear.ipynb) we ran our code directly from the notebook.
Now since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The model.py and task.py containing the model code are in <a href="mnistmodel/trainer">mnistmodel/trainer</a>
Complete the TODOs in model.py before proceeding!
Once you've completed the TODOs, set MODEL_TYPE and run it locally for a few steps to test the code.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/mnist/trained_${MODEL_TYPE}
JOBNAME=mnist_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=10000 --learning_rate=0.01 --train_batch_size=512 \
--model=$MODEL_TYPE --batch_norm
Explanation: Now, let's do it on Cloud ML Engine so we can train on GPU: --scale-tier=BASIC_GPU
Note the GPU speed-up depends on the model type. You'll notice the more complex CNN model trains significantly faster on GPU; however, the speed-up on the simpler models is not as pronounced.
End of explanation
from google.datalab.ml import TensorBoard
TensorBoard().start("gs://{}/mnist/trained_{}".format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()["pid"]:
TensorBoard().stop(pid)
print("Stopped TensorBoard with pid {}".format(pid))
Explanation: Monitoring training with TensorBoard
Use this cell to launch tensorboard
End of explanation
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/mnist/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: Here are my results:
Model | Accuracy | Time taken | Model description | Run time parameters
--- | :---: | --- | --- | ---
linear | 91.53 | 3 min | linear | 100 steps, LR=0.01, Batch=512
linear | 92.73 | 8 min | linear | 1000 steps, LR=0.01, Batch=512
linear | 92.29 | 18 min | linear | 10000 steps, LR=0.01, Batch=512
dnn | 98.14 | 15 min | 300-100-30 nodes fully connected | 10000 steps, LR=0.01, Batch=512
dnn | 97.99 | 48 min | 300-100-30 nodes fully connected | 100000 steps, LR=0.01, Batch=512
dnn_dropout | 97.84 | 29 min | 300-100-30-DL(0.1)- nodes | 20000 steps, LR=0.01, Batch=512
cnn | 98.97 | 35 min | maxpool(10 5x5 cnn, 2)-maxpool(20 5x5 cnn, 2)-300-DL(0.25) | 20000 steps, LR=0.01, Batch=512
cnn | 98.93 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25) | 20000 steps, LR=0.01, Batch=512
cnn | 99.17 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25), batch_norm (logits only) | 20000 steps, LR=0.01, Batch=512
cnn | 99.27 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25), batch_norm (logits, deep) | 10000 steps, LR=0.01, Batch=512
cnn | 99.48 | 12 hr | as-above but nfil1=20, nfil2=27, dprob=0.1, lr=0.001, batchsize=233 | (hyperparameter optimization)
Create a table to keep track of your own results as you experiment with model type and hyperparameters!
Deploying and predicting with model
Deploy the model:
End of explanation
import json, codecs
import matplotlib.pyplot as plt
import tensorflow as tf
HEIGHT = 28
WIDTH = 28
# Get mnist data
mnist = tf.keras.datasets.mnist
(_, _), (x_test, _) = mnist.load_data()
# Scale our features between 0 and 1
x_test = x_test / 255.0
IMGNO = 5 # CHANGE THIS to get different images
jsondata = {"image": x_test[IMGNO].reshape(HEIGHT, WIDTH).tolist()}
json.dump(jsondata, codecs.open("test.json", 'w', encoding = "utf-8"))
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
Explanation: To predict with the model, let's take one of the example images.
End of explanation
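# An alternative to the gcloud CLI call below: a sketch of calling the online prediction REST API
# with the Python client. This assumes the 'ml' v1 discovery API and default credentials are
# available in this environment; PROJECT, MODEL_TYPE and jsondata are defined in earlier cells.
from googleapiclient import discovery

api = discovery.build("ml", "v1")
name = "projects/{}/models/{}/versions/{}".format(PROJECT, "mnist", MODEL_TYPE)
request = api.projects().predict(name=name, body={"instances": [jsondata]})
print(request.execute())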
%%bash
gcloud ml-engine predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
Explanation: Send it to the prediction service
End of explanation |
9,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sandstone Model
First we make a GeMpy instance with most of the parameters default (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on allowing compiled functions to be loaded, even though in our case it is not a big deal).
General note. So far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I have to look more closely at what this parameter exactly means
Step1: Loading data from geomodeller
So there are 3 series: 2 with one single layer and 1 with 2. Therefore we need 3 potential fields, so let's begin.
Step2: Defining Series
Step3: Early granite
Step4: BIF Series
Step5: Simple mafic
Step6: Optimizing the export of lithologies
Here I am going to try to return, from the Theano interpolate function, the internal type of the result (in this case DK, I guess) so I can make another function in Python to decide which potential field I calculate at every grid_pos
Step8: Export vtk
Step9: Performance Analysis
CPU
Step10: GPU | Python Code:
# Setting extent, grid and compile
# Setting the extent
sandstone = GeoMig.Interpolator(696000,747000,6863000,6950000,-20000, 2000,
range_var = np.float32(110000),
u_grade = 9) # Range used in geomodeller
# Setting resolution of the grid
sandstone.set_resolutions(40,40,80)
sandstone.create_regular_grid_3D()
# Compiling
sandstone.theano_compilation_3D()
Explanation: Sandstone Model
First we make a GeMpy instance with most of the parameters default (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on allowing compiled functions to be loaded, even though in our case it is not a big deal).
General note. So far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I have to look more closely at what this parameter exactly means
End of explanation
sandstone.load_data_csv("foliations", os.pardir+"/input_data/a_Foliations.csv")
sandstone.load_data_csv("interfaces", os.pardir+"/input_data/a_Points.csv")
pn.set_option('display.max_rows', 25)
sandstone.Foliations;
sandstone.Foliations
Explanation: Loading data from geomodeller
So there are 3 series: 2 with one single layer and 1 with 2. Therefore we need 3 potential fields, so let's begin.
End of explanation
sandstone.set_series({"EarlyGranite_Series":sandstone.formations[-1],
"BIF_Series":(sandstone.formations[0], sandstone.formations[1]),
"SimpleMafic_Series":sandstone.formations[2]},
order = ["EarlyGranite_Series",
"BIF_Series",
"SimpleMafic_Series"])
sandstone.series
Explanation: Defining Series
End of explanation
sandstone.compute_potential_field("EarlyGranite_Series", verbose = 1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 13, figsize=(7,6), contour_lines = 20,
potential_field = True)
sandstone.potential_interfaces;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 1
block = block.reshape(40,40,80)
#block = np.swapaxes(block, 0, 1)
plt.imshow(block[:,8,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax),
interpolation = "none")
Explanation: Early granite
End of explanation
sandstone.compute_potential_field("BIF_Series", verbose=1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 12, figsize=(7,6), contour_lines = 100,
potential_field = True)
sandstone.potential_interfaces, sandstone.layers[0].shape;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[(sandstone.Z_x<sandstone.potential_interfaces[0]) * (sandstone.Z_x>sandstone.potential_interfaces[-1])] = 1
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 2
block = block.reshape(40,40,80)
plt.imshow(block[:,13,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax),
interpolation = "none")
Explanation: BIF Series
End of explanation
sandstone.compute_potential_field("SimpleMafic_Series", verbose = 1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 15, figsize=(7,6), contour_lines = 20,
potential_field = True)
sandstone.potential_interfaces, sandstone.layers[0].shape;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 1
block = block.reshape(40,40,80)
#block = np.swapaxes(block, 0, 1)
plt.imshow(block[:,13,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax))
Explanation: Simple mafic
End of explanation
# Reset the block
sandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))
# Compute the block
sandstone.compute_block_model([0,1,2], verbose = 1)
%matplotlib qt4
plot_block = sandstone.block.get_value().reshape(40,40,80)
plt.imshow(plot_block[:,13,:].T, origin = "bottom", aspect = "equal",
extent = (sandstone.xmin, sandstone.xmax, sandstone.zmin, sandstone.zmax), interpolation = "none")
Explanation: Optimizing the export of lithologies
Here I am going to try to return, from the Theano interpolate function, the internal type of the result (in this case DK, I guess) so I can make another function in Python to decide which potential field I calculate at every grid_pos
End of explanation
Export model to VTK
Export the geology blocks to VTK for visualisation of the entire 3-D model in an
external VTK viewer, e.g. Paraview.
..Note:: Requires pyevtk, available for free on: https://github.com/firedrakeproject/firedrake/tree/master/python/evtk
**Optional keywords**:
- *vtk_filename* = string : filename of VTK file (default: output_name)
- *data* = np.array : data array to export to VTK (default: entire block model)
vtk_filename = "noddyFunct2"
extent_x = 10
extent_y = 10
extent_z = 10
delx = 0.2
dely = 0.2
delz = 0.2
from pyevtk.hl import gridToVTK
# Coordinates
x = np.arange(0, extent_x + 0.1*delx, delx, dtype='float64')
y = np.arange(0, extent_y + 0.1*dely, dely, dtype='float64')
z = np.arange(0, extent_z + 0.1*delz, delz, dtype='float64')
# self.block = np.swapaxes(self.block, 0, 2)
gridToVTK(vtk_filename, x, y, z, cellData = {"geology" : sol})
Explanation: Export vtk
End of explanation
%%timeit
sol = interpolator.geoMigueller(dips,dips_angles,azimuths,polarity, rest, ref)[0]
sandstone.block_export.profile.summary()
Explanation: Performance Analysis
CPU
End of explanation
%%timeit
# Reset the block
sandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))
# Compute the block
sandstone.compute_block_model([0,1,2], verbose = 0)
sandstone.block_export.profile.summary()
Explanation: GPU
End of explanation |
9,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Shelter Animal Outcomes 1
Data visualization
Step1: Overall it seems not many animals died of natural causes.
Doesn't seem like cats have nine lives unfortunately.
Probably because of their shitty attitude and general evilness they are likely to get transferred.
Dogs have tricked their masters with their sad puppy face to get returned more. Also they are told to be more loyal.
Step2: Overall, sex likely does not play a big role in outcome, but since the spayed/neutered population is bigger, they are more likely to get adopted
Step3: Cats and dogs have different probability distributions for outcome
Step4: As expected there are too many colors, which makes it difficult to properly visualize without discarding a majority of them. Thinking a bit, it makes more sense to have a combination of both color and breed to make a pet more appealing/attractive.
Step5: As expected there are animals over a wide spectrum of ages. Age should play a major role in deciding the outcome.
Step6: As is evident from the graph above, animals that didn't have names (or whose names were lost) have a very different outcome probability distribution. Named animals seem to be more popular for adoption. Named animals could mean that they had previous owners and possible stories.
Step7: We can see that, of the animals present in the training set, more than 2/3 had names and roughly half of them got adopted. | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('train.csv')
df.head()
df['AnimalType'].unique()
df.groupby(['AnimalType']).get_group('Cat').shape[0]
df.groupby(['AnimalType']).get_group('Dog').shape[0]
df['OutcomeType'].unique()
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
sns.countplot(x="OutcomeType", data=df, ax=ax1)
sns.countplot(x="AnimalType", hue="OutcomeType", data=df, ax=ax2)
Explanation: Shelter Animal Outcomes 1
Data visualization
End of explanation
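# A quick, purely illustrative data-quality check before the plots that follow:
# how many missing values does each column have?
df.isnull().sum().sort_values(ascending=False)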
sns.countplot(x="SexuponOutcome", hue="OutcomeType", data=df)
Explanation: Overall it seems not many animals died of natural causes.
Doesn't seem like cats have nine lives unfortunately.
Probably because of their shitty attitude and general evilness they are likely to get transferred.
Dogs have tricked their masters with their sad puppy face to get returned more. Also they are told to be more loyal.
End of explanation
dfCat = df.groupby(['AnimalType']).get_group('Cat')
dfDog = df.groupby(['AnimalType']).get_group('Dog')
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
sns.countplot(x="SexuponOutcome", hue="OutcomeType", data=dfCat, ax=ax1)
sns.countplot(x="SexuponOutcome", hue="OutcomeType", data=dfDog, ax=ax2)
Explanation: Overall, sex likely does not play a big role in outcome, but since the spayed/neutered population is bigger, they are more likely to get adopted
End of explanation
dfCat['Color'].describe()
dfDog['Color'].describe()
Explanation: Cats and dogs have different probability distributions for outcome
End of explanation
df['AgeuponOutcome'].unique()
Explanation: As expected there are too many colors, which makes it difficult to properly visualize without discarding a majority of them. Thinking a bit, it makes more sense to have a combination of both color and breed to make a pet more appealing/attractive.
End of explanation
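# Following the color/breed observation above, an illustrative look at the most common
# breed and color combinations among adopted animals ('Adoption' is one of the OutcomeType values).
adopted = df[df['OutcomeType'] == 'Adoption']
adopted.groupby(['Breed', 'Color']).size().sort_values(ascending=False).head(10)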
df['NameIsPresent'] = df['Name'].notnull()  # True when the animal actually has a name
sns.countplot(x="NameIsPresent", hue="OutcomeType", data=df)
Explanation: As expected there are animals over a wide spectrum of ages. Age should play a major role in deciding the outcome.
End of explanation
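# To use age numerically, an illustrative parser for the 'AgeuponOutcome' strings
# (values look like '2 years' or '3 weeks'; AgeInDays is a helper column introduced by this sketch).
def age_to_days(age):
    if pd.isnull(age):
        return None
    value, unit = age.split()
    factor = {'day': 1, 'week': 7, 'month': 30, 'year': 365}[unit.rstrip('s')]
    return int(value) * factor

df['AgeInDays'] = df['AgeuponOutcome'].apply(age_to_days)
df['AgeInDays'].describe()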
df[df['NameIsPresent'] == True].shape[0]
df[df['NameIsPresent'] == False].shape[0]
Explanation: As is evident from the graph above, animals that didn't have names (or whose names were lost) have a very different outcome probability distribution. Named animals seem to be more popular for adoption. Named animals could mean that they had previous owners and possible stories.
End of explanation
df['OutcomeSubtype'].unique()
sns.set_context("poster")
sns.countplot(x="OutcomeSubtype", hue="AnimalType", data=df)
df['DateTime']
Explanation: We can see that, of the animals present in the training set, more than 2/3 had names and roughly half of them got adopted.
End of explanation |
9,174 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using tensorflow 2.10.0. | Problem:
import tensorflow as tf
a = tf.constant(
[[0.3232, -0.2321, 0.2332, -0.1231, 0.2435, 0.6728],
[0.2323, -0.1231, -0.5321, -0.1452, 0.5435, 0.1722],
[0.9823, -0.1321, -0.6433, 0.1231, 0.023, 0.0711]]
)
def g(a):
return tf.argmax(a,axis=1)
result = g(a.__copy__()) |
9,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Data
Step1: Working with Datetime Objects
Step2: The Datetime Object
Step3: Making a datetime indexed dataframe
Step4: Time Resampling
Step5: Quicker (but less controlled) way
Step6: Resampling
Step7: <table border="1" class="docutils">
<colgroup>
<col width="13%" />
<col width="87%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Alias</th>
<th class="head">Description</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td>B</td>
<td>business day frequency</td>
</tr>
<tr class="row-odd"><td>C</td>
<td>custom business day frequency (experimental)</td>
</tr>
<tr class="row-even"><td>D</td>
<td>calendar day frequency</td>
</tr>
<tr class="row-odd"><td>W</td>
<td>weekly frequency</td>
</tr>
<tr class="row-even"><td>M</td>
<td>month end frequency</td>
</tr>
<tr class="row-odd"><td>SM</td>
<td>semi-month end frequency (15th and end of month)</td>
</tr>
<tr class="row-even"><td>BM</td>
<td>business month end frequency</td>
</tr>
<tr class="row-odd"><td>CBM</td>
<td>custom business month end frequency</td>
</tr>
<tr class="row-even"><td>MS</td>
<td>month start frequency</td>
</tr>
<tr class="row-odd"><td>SMS</td>
<td>semi-month start frequency (1st and 15th)</td>
</tr>
<tr class="row-even"><td>BMS</td>
<td>business month start frequency</td>
</tr>
<tr class="row-odd"><td>CBMS</td>
<td>custom business month start frequency</td>
</tr>
<tr class="row-even"><td>Q</td>
<td>quarter end frequency</td>
</tr>
<tr class="row-odd"><td>BQ</td>
<td>business quarter endfrequency</td>
</tr>
<tr class="row-even"><td>QS</td>
<td>quarter start frequency</td>
</tr>
<tr class="row-odd"><td>BQS</td>
<td>business quarter start frequency</td>
</tr>
<tr class="row-even"><td>A</td>
<td>year end frequency</td>
</tr>
<tr class="row-odd"><td>BA</td>
<td>business year end frequency</td>
</tr>
<tr class="row-even"><td>AS</td>
<td>year start frequency</td>
</tr>
<tr class="row-odd"><td>BAS</td>
<td>business year start frequency</td>
</tr>
<tr class="row-even"><td>BH</td>
<td>business hour frequency</td>
</tr>
<tr class="row-odd"><td>H</td>
<td>hourly frequency</td>
</tr>
<tr class="row-even"><td>T, min</td>
<td>minutely frequency</td>
</tr>
<tr class="row-odd"><td>S</td>
<td>secondly frequency</td>
</tr>
<tr class="row-even"><td>L, ms</td>
<td>milliseconds</td>
</tr>
<tr class="row-odd"><td>U, us</td>
<td>microseconds</td>
</tr>
<tr class="row-even"><td>N</td>
<td>nanoseconds</td>
</tr>
</tbody>
</table>
Step8: Time Shifts
Step9: Shift up by one
Step10: Rolling and Expanding
Step11: Bollinger Bands
used for determining whether the price is relatively high or low compared to recent volatility | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Time Series Data
End of explanation
from datetime import datetime
my_year = 2017
my_month = 10
my_day = 14
my_hour = 15
my_minute = 30
my_second = 15
Explanation: Working with Datetime Objects
End of explanation
my_date = datetime(my_year, my_month, my_day)
my_date
my_date_time = datetime(my_year, my_month, my_day, my_hour, my_minute, my_second)
my_date_time
type(my_date_time)
my_date_time.day
my_date_time.month
Explanation: The Datetime Object
End of explanation
first_two = [datetime(2016, 1, 1), datetime(2016, 1, 2)]
first_two
type(first_two)
type(first_two[0])
dt_ind = pd.DatetimeIndex(first_two)
dt_ind
data = np.random.randn(2, 2)
cols = ["a", "b"]
df = pd.DataFrame(data, dt_ind, cols)
df
df.index.argmax()
df.index.argmin()
df.index.min()
type(df.index.min())
Explanation: Making a datetime indexed dataframe
End of explanation
df = pd.read_csv("time_data/walmart_stock.csv")
df.head()
df.info()
df["Date"] = pd.to_datetime(df["Date"]) # Be aware of formatting!!
df.info()
df.set_index("Date", inplace=True)
df.head()
Explanation: Time Resampling
End of explanation
df2 = pd.read_csv("time_data/walmart_stock.csv", index_col="Date", parse_dates=True)
df2.head()
df2.index
type(df2.index[0])
Explanation: Quicker (but less controlled) way:
End of explanation
df.resample(rule="A")
Explanation: Resampling
End of explanation
df.resample(rule="A").mean()
df.resample(rule="BQ").mean()
df.resample(rule="A").max()
def first_day(entry):
return entry[0]
df.resample("A").apply(first_day)
df["Close"].resample("A").mean().plot(kind="bar")
df["Close"].resample("M").mean().plot(kind="bar", figsize=(16, 6))
Explanation: <table border="1" class="docutils">
<colgroup>
<col width="13%" />
<col width="87%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Alias</th>
<th class="head">Description</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td>B</td>
<td>business day frequency</td>
</tr>
<tr class="row-odd"><td>C</td>
<td>custom business day frequency (experimental)</td>
</tr>
<tr class="row-even"><td>D</td>
<td>calendar day frequency</td>
</tr>
<tr class="row-odd"><td>W</td>
<td>weekly frequency</td>
</tr>
<tr class="row-even"><td>M</td>
<td>month end frequency</td>
</tr>
<tr class="row-odd"><td>SM</td>
<td>semi-month end frequency (15th and end of month)</td>
</tr>
<tr class="row-even"><td>BM</td>
<td>business month end frequency</td>
</tr>
<tr class="row-odd"><td>CBM</td>
<td>custom business month end frequency</td>
</tr>
<tr class="row-even"><td>MS</td>
<td>month start frequency</td>
</tr>
<tr class="row-odd"><td>SMS</td>
<td>semi-month start frequency (1st and 15th)</td>
</tr>
<tr class="row-even"><td>BMS</td>
<td>business month start frequency</td>
</tr>
<tr class="row-odd"><td>CBMS</td>
<td>custom business month start frequency</td>
</tr>
<tr class="row-even"><td>Q</td>
<td>quarter end frequency</td>
</tr>
<tr class="row-odd"><td>BQ</td>
<td>business quarter endfrequency</td>
</tr>
<tr class="row-even"><td>QS</td>
<td>quarter start frequency</td>
</tr>
<tr class="row-odd"><td>BQS</td>
<td>business quarter start frequency</td>
</tr>
<tr class="row-even"><td>A</td>
<td>year end frequency</td>
</tr>
<tr class="row-odd"><td>BA</td>
<td>business year end frequency</td>
</tr>
<tr class="row-even"><td>AS</td>
<td>year start frequency</td>
</tr>
<tr class="row-odd"><td>BAS</td>
<td>business year start frequency</td>
</tr>
<tr class="row-even"><td>BH</td>
<td>business hour frequency</td>
</tr>
<tr class="row-odd"><td>H</td>
<td>hourly frequency</td>
</tr>
<tr class="row-even"><td>T, min</td>
<td>minutely frequency</td>
</tr>
<tr class="row-odd"><td>S</td>
<td>secondly frequency</td>
</tr>
<tr class="row-even"><td>L, ms</td>
<td>milliseconds</td>
</tr>
<tr class="row-odd"><td>U, us</td>
<td>microseconds</td>
</tr>
<tr class="row-even"><td>N</td>
<td>nanoseconds</td>
</tr>
</tbody>
</table>
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("time_data/walmart_stock.csv", index_col="Date", parse_dates=True)
df.head()
df.tail()
Explanation: Time Shifts
End of explanation
df.shift(periods=1).head() # first gets filled in by NaN
df.shift(periods=-1).tail() # last gets filled by NaN
df.head()
df.tshift(freq="M").head() # shifts to end of month
df.shift(freq="A").head() # shifts to end of year
Explanation: Shift up by one
End of explanation
df.head()
df["Open"].plot(figsize=(16, 6))
df.rolling(window=7).mean().head(20) # not inplace!
df["Open"].plot()
df.rolling(window=7).mean()["Close"].plot(figsize=(16, 6))
df.rolling(window=14).mean()["Close"].plot(figsize=(16, 6))
df.rolling(window=28).mean()["Close"].plot(figsize=(16, 6))
df["Close 30 Day MA"] = df["Close"].rolling(window=30).mean()
df[["Close 30 Day MA", "Close"]].plot(figsize=(16, 6))
df["Close"].expanding().mean().plot(figsize=(16, 6))
Explanation: Rolling and Expanding
End of explanation
# Close 20 MA
df["Close: 20 Day Mean"] = df["Close"].rolling(20).mean()
# Upper = 20MA + 2 * std(20)
df["Upper"] = df["Close: 20 Day Mean"] + 2 * (df["Close"].rolling(20).std())
# Lower = 20MA - 2 * std(20)
df["Lower"] = df["Close: 20 Day Mean"] - 2 * (df["Close"].rolling(20).std())
# Close
df[["Close", "Close: 20 Day Mean", "Upper", "Lower"]].plot(figsize=(16, 6))
Explanation: Bollinger Bands
used for determining whether the price is relatively high or low compared to recent volatility
End of explanation |
9,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploratory Data Analysis
In this tutorial we focus on two popular methods for exploring high dimensional datasets.
Principal Component Analysis
Latent Semantic Analysis
The first method is a general scheme for dimensionality reduction, but the second one is specifically used in the text domain.
Principal Component Analysis (PCA)
PCA is a popular method for summarizing datasets. Suppose, we have a dataset of different wine types. We describe each wine sample by its Alcohol content, color, and so on (see this very nice visualization of wine properties taken from here). Some of these features will measure related properties and so will be redundant. So, we can summarize each wine sample with fewer features! PCA is one such way to do this. It's also called as a method for dimensionality reduction.
Here we have a scatter plot of different wine samples (synthetic). It's based on two wine characteristics, color intensity and alcohol content.
<img src="http
Step1: Let's first look at two wine characteristics
Step2: PCA on a Subset of the Wine Data
Step3: Let's visualize the normalized data and its principal components.
Step4: Let's transform the normalized data to the principal component space
Step5: Homework $1$
Step6: We consider a toy document collection (corpus) and a query for this tutorial.
Step7: We now build a term frequency (TF) matrix from the corpus using the Python sklearn package.
Step8: Let's look at the corpus vocabulary terms.
Some of these terms are noninformative or stopwords, e.g., a, an, the, and, etc. One can use a standard or a custom stopword list to remove these terms.
The vocabulary also contains different forms of a single word, e.g., die, died. One can use methods such as stemming and lemmatization to get the root forms of words in a corpus.
There are several open source libraries available to perform all these for you, e.g., Python Natural Language Processing Toolkit (NLTK)
Step9: TF-IDF
Here, we compute the TF-IDF matrix for the normalized corpus and the sample query die dagger. We consider the query as a document in the corpus.
Step10: Information Retrieval via TF-IDF
Now, we solve the document ranking problem for the given query
Step11: Latent Semantic Analysis (LSA)
We perform LSA using the well-known matrix factorization technique Singular Value Decomposition (SVD).
We consider the TF matrix for SVD. In practice, one can also perform SVD on the TF-IDF matrix.
Note that
$A$ is a $V \times D$ data matrix
$U$ is the matrix of the eigenvectors of $C = AA'$ (the term-term matrix). It's a $V \times V$ matrix.
$V$ is the matrix of the eigenvectors of $B = A'A$ (the document-document matrix). It's a $D \times D$ matrix
$s$ is the vector singular values, obtained as square roots of the eigenvalues of $B$.
More info can be found in the python SVD documentation
Step12: Information Retrieval via LSA
Now we would like to represent the query in the LSA space. A natural choice is to compute a vector that is the centroid of the semantic vectors for its terms.
In our example, the keyword query is die dagger, so we compute the query vector as the centroid (mean) of the semantic vectors for die and dagger.
We now solve the document ranking problem given the query die dagger as follows. | Python Code:
# We will first read the wine data headers
f = open("wine.data")
header = f.readlines()[0]
Explanation: Exploratory Data Analysis
In this tutorial we focus on two popular methods for exploring high dimensional datasets.
Principal Component Analysis
Latent Semantic Analysis
The first method is a general scheme for dimensionality reduction, but the second one is specifically used in the text domain.
Principal Component Analysis (PCA)
PCA is a popular method for summarizing datasets. Suppose, we have a dataset of different wine types. We describe each wine sample by its Alcohol content, color, and so on (see this very nice visualization of wine properties taken from here). Some of these features will measure related properties and so will be redundant. So, we can summarize each wine sample with fewer features! PCA is one such way to do this. It's also called as a method for dimensionality reduction.
Here we have a scatter plot of different wine samples (synthetic). It's based on two wine characteristics, color intensity and alcohol content.
<img src="http://i.stack.imgur.com/jPw90.png">
We notice a correlation between these two features. We can construct a new property or feature (that summarizes the two features) by drawing a line through the center of the scatter plot and projecting all points onto this line. We construct these lines via linear combinations of $x$ and $y$ coordinates, i.e., $w_1 x + w_2 y$. Each configuration of $(w_1, w_2)$ will give us a new line.
Now we will look at the projections -- The below animation shows how the projections of data points look like for different lines (red dots are projections of the blue dots):
<img src="http://i.stack.imgur.com/Q7HIP.gif">
PCA aims to find the best line according to the following two criteria.
The variation of (projected) values along the line should be maximal. Have a look at how the "variance" of the red dots changes while the line rotates...
The line should give the lowest reconstruction error. By reconstruction, we mean constructing the original two characteristics (the position ($x$, $y$) of a blue dot) from the new one (the position of a red dot). This reconstruction error is proportional to the length of the connecting red line.
<img src="http://i.stack.imgur.com/XFngC.png">
We will notice that the maximum variance and the minimum error occur at the same time, when the line points to the magenta ticks. This line corresponds to the first principal component constructed by PCA.
PCA objective: Given the data covariance matrix $C$, we look for a vector $u$ having unit length ($\|u\| = 1$) such that $u^TCu$ is maximal. We will see that we can do this with the help of eigenvectors and eigenvalues of the covariance matrix.
We will look at the intuition behind this approach using the example above.
Let $C$ be an $n \times n$ matrix and $u$ is an $n \times 1$ vector. The operation $C u$ is well-defined. An eigenvector of $C$ is, by definition, any vector $u$ such that $C u = \lambda u$. For the dataset $A$ ($n \times 2$ matrix) above, the covariance matrix C ($2 \times 2$ matrix) is (we assume that the data is centered.)
\begin{equation}
\begin{vmatrix}
1.07 & 0.63 \
0.63 & 0.64
\end{vmatrix}
\end{equation}
It's a square symmetric matrix. Thus, one can diagonalize it by choosing a new orthogonal coordinate system, given by its eigenvectors (spectral theorem):
\begin{equation}
C = U \Lambda U^{T}
\end{equation}
where $U$ is a matrix of eigenvectors $u_i$'s (each column is an eigenvector) and $\Lambda$ is a diagonal matrix with eigenvalues $\lambda_i$'s on the diagonal.
In the new (eigen) space, the covariance matrix is diagonal, as follows:
\begin{equation}
\begin{vmatrix}
1.52 & 0 \
0 & 0.18
\end{vmatrix}
\end{equation}
It means that there is no correlation between points in this new system. The maximum possible variance is $1.52$, which is given by the first eigenvalue. We achieve this variance by taking the projection on the first principal axis. The direction of this axis is given by the first eigenvector of $C$.
This example/discussion is adapted from here.
PCA on a Real Dataset
For illustration, we will use the wine dataset. Each wine sample is described by 14 features as follows:
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg as la
# Read the data file (text format): wine.data, delimiter=',', use columns 0, 1, 10, skip the header
wine_class, wine_alc, wine_col = np.loadtxt("wine.data", delimiter=',', usecols=(0, 1, 10), unpack=True, skiprows=1)
# draw a scatter plot of wine color intensity and alcohol content
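# One possible version of the plot (a sketch; colors each point by its wine class):
plt.scatter(wine_col, wine_alc, c=wine_class, alpha=0.7)
plt.xlabel('Color intensity')
plt.ylabel('Alcohol content')
plt.show()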
Explanation: Let's first look at two wine characteristics: Alcohol Content and Color Intensity.
<!--img src="http://winefolly.com/wp-content/uploads/2013/02/wine-color-chart1.jpg"-->
We can draw a scatter plot:
End of explanation
# Perform PCA on two wine characteristics: **Alcohol Content** and **Color Intensity**
col_alc = np.matrix([wine_col, wine_alc]).T
m, n = col_alc.shape
# compute column means
# center the data with column means
# calculate the covariance matrix
# calculate eigenvectors & eigenvalues of the covariance matrix
# sort eigenvalues and eigenvectors in decreasing order
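# One way to fill in the steps above (a sketch using the numpy/scipy imports from this notebook):
col_means = col_alc.mean(axis=0)              # column means
centered = col_alc - col_means                # center the data
cov = np.cov(centered, rowvar=False)          # covariance matrix
evals, evecs = la.eigh(cov)                   # eigenvalues & eigenvectors of the symmetric matrix
order = np.argsort(evals)[::-1]               # decreasing order of eigenvalue
evals, evecs = evals[order], evecs[:, order]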
Explanation: PCA on a Subset of the Wine Data
End of explanation
# Create a scatter plot of the normalized data
# color intensity of the x-axis and alcohol content on the y-axis
# Plot the principal component line
Explanation: Let's visualize the normalized data and its principal components.
End of explanation
# the PCA transformation
# Plot the data points in the new space
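# A possible implementation (a sketch; reuses `centered` and `evecs` from the PCA sketch above):
scores = np.asarray(centered @ evecs)         # coordinates in the principal-component space
plt.scatter(scores[:, 0], scores[:, 1], c=wine_class, alpha=0.7)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.show()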
Explanation: Let's transform the normalized data to the principal component space
End of explanation
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from scipy.spatial.distance import cosine
Explanation: Homework $1$: Apply PCA on the whole set of features and analyze its principal components.
Exploratory Text Analysis
First, let's import numpy and a couple other modules we'll need.
End of explanation
corpus = [
"Romeo and Juliet.", # document 1
"Juliet: O happy dagger!", # document 2
"Romeo died by dagger.", # document 3
"'Live free or die', that's the New-Hampshire's motto.", # document 4
"Did you know, New-Hampshire is in New-England." # document 5
]
key_words = [
'die',
'dagger'
]
Explanation: We consider a toy document collection (corpus) and a query for this tutorial.
End of explanation
# initialize the CountVectorizer class
vectorizer = CountVectorizer(min_df=0, stop_words=None)
# transform the corpus based on the count vectorizer
# print the vocabulary
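# A minimal way to finish this cell (a sketch):
tf_matrix = vectorizer.fit_transform(corpus)  # documents x vocabulary term-frequency matrix
print(tf_matrix.toarray())
print(vectorizer.vocabulary_)                 # term -> column index mapping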
Explanation: We now build a term frequency (TF) matrix from the corpus using the Python sklearn package.
End of explanation
# A custom stopword list
stop_words = ["a", "an", "the", "and", "in", "by", "or", "did", "you", "is", "that"]
# Here, we assume that we preprocessed the corpus
preprocessed_corpus = [
"Romeo and Juliet",
"Juliet O happy dagger",
"Romeo die by dagger",
"Live free or die that the NewHampshire motto",
"Did you know NewHampshire is in NewEngland"
]
# Customize the vectorizer class
# transform the corpus based on the count vectorizer
# print the vocabulary
Explanation: Let's look at the corpus vocabulary terms.
Some of these terms are noninformative or stopwords, e.g., a, an, the, and, etc. One can use a standard or a custom stopword list to remove these terms.
The vocabulary also contains different forms of a single word, e.g., die, died. One can use methods such as stemming and lemmatization to get the root forms of words in a corpus.
There are several open source libraries available to perform all these for you, e.g., Python Natural Language Processing Toolkit (NLTK)
End of explanation
# query keywords
key_words = ['die', 'dagger']
# To keep the development simple, we build a composite model for both the corpus and the query
corpus = preprocessed_corpus + [' '.join(key_words)]
# transform the corpus based on the count vectorizer
# TF-IDF transform using TfidfTransformer
# transform the TF matrix to TF-IDF matrix
# D x V document-term matrix
# 1 x V query-term vector
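# A sketch of the TF-IDF computation (assumes the custom stop-word list defined earlier):
count_vec = CountVectorizer(min_df=1, stop_words=stop_words)
tf = count_vec.fit_transform(corpus)                    # corpus already contains the query as its last "document"
tfidf_matrix = TfidfTransformer().fit_transform(tf).toarray()
doc_vectors = tfidf_matrix[:-1]                         # D x V document-term matrix
query_vector = tfidf_matrix[-1]                         # 1 x V query-term vector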
Explanation: TF-IDF
Here, we compute the TF-IDF matrix for the normalized corpus and the sample query die dagger. We consider the query as a document in the corpus.
End of explanation
# Find cosine distance b/w the TF-IDF vectors of every document and the query
# Sort them and create the rank list
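# A sketch of the ranking (assumes doc_vectors / query_vector from the TF-IDF sketch above):
distances = [cosine(doc, query_vector) for doc in doc_vectors]
for doc_id, dist in sorted(enumerate(distances), key=lambda pair: pair[1]):
    print(doc_id, round(dist, 3), preprocessed_corpus[doc_id])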
Explanation: Information Retrieval via TF-IDF
Now, we solve the document ranking problem for the given query: die dagger. We use cosine distance to measure similarity between each document vector and the query vector in the TF-IDF vector space. Once we have the distance scores we can sort them to get a rank list as follows.
End of explanation
K = 2 # number of components
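# A sketch of the decomposition on the V x D term-document matrix (terms as rows, documents as columns):
count_vec = CountVectorizer(min_df=1, stop_words=stop_words)
A = count_vec.fit_transform(preprocessed_corpus).toarray().T
U, s, Vt = np.linalg.svd(A, full_matrices=False)
term_vectors = U[:, :K] * s[:K]        # vocabulary terms in the K-dimensional semantic space
doc_vectors_lsa = Vt[:K, :].T * s[:K]  # documents in the same K-dimensional space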
Explanation: Latent Semantic Analysis (LSA)
We perform LSA using the well-known matrix factorization technique Singular Value Decomposition (SVD).
We consider the TF matrix for SVD. In practice, one can also perform SVD on the TF-IDF matrix.
Note that
$A$ is a $V \times D$ data matrix
$U$ is the matrix of the eigenvectors of $C = AA'$ (the term-term matrix). It's a $V \times V$ matrix.
$V$ is the matrix of the eigenvectors of $B = A'A$ (the document-document matrix). It's a $D \times D$ matrix
$s$ is the vector singular values, obtained as square roots of the eigenvalues of $B$.
More info can be found in the python SVD documentation: https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html
We now perform data reduction, i.e., transform documents from a $V$-dimensional space to a lower-dimensional space. Let's take a small number of dimensions $K$, the number of semantic components to keep (here $K = 2$).
Using LSA, we can represent vocabulary terms in the semantic space.
End of explanation
# Find cosine distance b/w the LSA vectors of every document and the query vector
# Sort them and create the rank list
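# A sketch of LSA-based ranking (assumes term_vectors, doc_vectors_lsa and count_vec from the SVD sketch above):
vocab = count_vec.vocabulary_
query_lsa = np.mean([term_vectors[vocab[term]] for term in key_words], axis=0)  # centroid of the query-term vectors
lsa_distances = [cosine(doc, query_lsa) for doc in doc_vectors_lsa]
for doc_id, dist in sorted(enumerate(lsa_distances), key=lambda pair: pair[1]):
    print(doc_id, round(dist, 3), preprocessed_corpus[doc_id])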
Explanation: Information Retrieval via LSA
Now we would like to represent the query in the LSA space. A natural choice is to compute a vector that is the centroid of the semantic vectors for its terms.
In our example, the keyword query is die dagger, so we compute the query vector as the centroid (mean) of the semantic vectors for die and dagger.
We now solve the document ranking problem given the query die dagger as follows.
End of explanation |
9,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro
Step1: Step #1 - Exploring/Cleaning the data
Summary of Statistics
Step2: Summary of Statistics [separated by class]
Step3: Data Visualization & Exploratory Analysis
Libraries used
Step4: Categorical Feature Analysis - Parent Satisfation & Relation [Separated by Class
Step5: Once again, by using the pandas groupby function, we can get a table showing the exact count of the distribution of students in H, L, M --- looking at the first feature "Parent School Satisfaction" only.
Step6: Digging a little deeper
Step7: Here is a visualization of the above pivot table. The combination of two features do not vary too much within the 'H, M, L' classes, but there is a distict gap showing that students in 'H' class raised their hands at an average of around 3 times as often as 'L' students.
Step8: Categorical Feature Analysis Cont.
Another good feature that may add value to our analysis of factors that contributes to a student's class is their attendance record.
Again, looking at value counts for the separate classes, we see that the number of students who had under-7 absences and received a 'Low' class/grade is very low. (and vice versa for above-7 absences & 'H')
Step9: Moving on to numerical feature analysis
Step10: Numerical Feature Analysis
Step11: A more detailed view of correlation
Step12: Scatterplot Visualization
Step13: While there is a decent amount of noise, we can still see a linear relationship where students who raise their hands and visit class resources are in the group with the higher class.
There is an evident pattern where there's a cluster of 'H's at the top right hand corner of the graph and a cluster of 'L's at the bottom left hand corner of the graph.
Step14: Exploring correlations between multidimensional data & plotting all pairs of values against each other
We see the relationship between class is strongest between 'raisedhands' and other features as well as 'visitedresources' and other features
The 2 weaker features seem to be 'announcements view' and 'discussion' where the points for the different 'H, M, L' classes seem to be scattered all over with no particular pattern.
Now that we have this analysis, we have a better understanding of which features will help us predict the target classes we want and which will be added noise. This is an importance part of the process of feature analysis.
Step15: Step #2. - Data Analysis & Application of Machine Learning Classification Models with Performance Evaluation
1. Preprocessing Data
Removal of outliers
Encoding categorical data
2. Machine Learning Models Used
Logistic Regression Classifier
Support Vector Machines Classifer
Random Forest Classifier (ensemble)
Gradient Boosting Classifier (ensemble)
Bagging Classifier (ensemble)
3. Evaluation Methods Used
Prediction Accuracy Score (on test data)
K-fold cross validation (on training data)
Plotting learning curves to assess Bias vs. Variance
Precision, Recall & F1 Scores
4. Implementation Process
Once the independent variables (features) have been determined, we split the data set into testing and training sets using sklearn's cross validation
Step16: Preprocessing Data
Step17: Preprocessing Data
Step18: Logistic Regression Classifer
The logistic regression classifier is a method used to generalize logistic regression to multiclass problems (applicable here since we are trying to predict more than two possible discrete outcomes). It is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable.
Dependent variable (y values) = Class ('H, L, M')
Independent variables (or features / x values) = 'raisedhands', 'VisITedResources', 'ParentschoolSatisfaction_Bad', 'Relation_Father', Relation_Mum','ParentschoolSatisfaction_Good', 'StudentAbsenceDays_Above-7','StudentAbsenceDays_Under-7'
In Logistic Regression modeling, input values (X) are combined linearly using weights or coefficient values to predict an output value (y)--- or more specifically, the probability that an input (X) belongs to the default class.
Pros
Step19: Visualization of Learning Curves
Step20: Random Forest Classifier
A random forest is an ensemble method (combination of learning algorithms) that fits a number of decision tree classifiers on various sub-samples of the dataset and use averaging (majority of votes to make a prediction) to improve the predictive accuracy and control over-fitting.
At the root of random forest are decision trees, which are a type of flowchart which assist in the decision making process. Internal nodes represent tests on particular attributes, while branches exiting nodes represent a single test outcome, and leaf nodes represent class labels. The goal is to split on the attributes which create the purest child nodes possible.
Random forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance. This comes at the expense of a small increase in the bias and some loss of interpretability, but generally greatly boosts the performance in the final model.
Pros
Step21: Random Forest
Step22: Gradient Boosting Classifier
Gradient Boosting is based on the idea of boosting, which is a method of trying to modify a weak learner into a better one. It starts by filtering observations, leaving those that the weak learner can handle and focusing on developing new weak learners to handle the remaining difficult observations. For example, the model will build trees one at a time, where each new tree helps to correct errors made by the previously trained trees.
With Gradient Boosting, the objective is to minimize the loss of the model by adding weak learners using a gradient descent like procedure. This type of algorithm can be described as a stage-wise additive model. This is because one new weak learner is added at a time and existing weak learners in the model are frozen and left unchanged.
Pros | Python Code:
import numpy as np
import pandas as pd
data = pd.read_csv('xAPI-Edu-Data.csv') #columns = ['Gender','Nationality', 'PlaceofBirth','StageID','GradeID','SectionID'
#,'Topic','Semester','Relation','RaisedHands','VisitedResources'
#,'AnnoucementsView','Discussion','ParentAnsweringSurvey',
#'ParentSchoolSatisfaction','StudentAbsenceDays','Class/FinalGrade'])
print (data.shape)
data.head(15)
Explanation: Intro: A Student Grade Classification Project
There is a rising trend in using data to determine student performance and provide timely intervention for low-performing students/at-risk students. This project hopes to help highlight and contribute to these efforts in education innovation.
<img src="student intervention graphic.png" alt="Drawing" style="width: 600px;"/>
Having previously worked at a classroom learning/management platform edtech startup, I was inspired to do this project as a way to learn more about the process of building a data-driven student intervention system as well as to understand how data science can be used to transform education.
The dataset below was provided by Kaggle, and gives a snapshot of student engagement and student background as well as their corresponding final grades (classified into 3 categories: high-level grades, middle-level grades, & low-level grades).
This project aims to determine which factors are the greatest indicators in identifying a student as being at‐risk either behaviorally or academically and to make predictions, based on these factors, of which students belong to which grade class (high, middle or low).
<img src="stoplight assessment.jpg" alt="Drawing" style="width: 350px;"/>
Data Collection & Features
Source of Dataset: Kaggle
The following dataset includes many factors which may influence a student's final grades. Below is a description of each:
1 Gender - student's gender (nominal: 'Male' or 'Female’)
2 Nationality- student's nationality (nominal:’ Kuwait’,’ Lebanon’,’ Egypt’,’ SaudiArabia’,’ USA’,’ Jordan’,’ Venezuela’,’ Iran’,’ Tunis’,’ Morocco’,’ Syria’,’ Palestine’,’ Iraq’,’ Lybia’)
3 Place of birth- student's Place of birth (nominal:’ Kuwait’,’ Lebanon’,’ Egypt’,’ SaudiArabia’,’ USA’,’ Jordan’,’ Venezuela’,’ Iran’,’ Tunis’,’ Morocco’,’ Syria’,’ Palestine’,’ Iraq’,’ Lybia’)
4 Educational Stages- educational level student belongs (nominal: ‘lowerlevel’,’MiddleSchool’,’HighSchool’)
5 Grade Levels- grade student belongs (nominal: ‘G-01’, ‘G-02’, ‘G-03’, ‘G-04’, ‘G-05’, ‘G-06’, ‘G-07’, ‘G-08’, ‘G-09’, ‘G-10’, ‘G-11’, ‘G-12 ‘)
6 Section ID- classroom student belongs (nominal:’A’,’B’,’C’)
7 Topic- course topic (nominal:’ English’,’ Spanish’, ‘French’,’ Arabic’,’ IT’,’ Math’,’ Chemistry’, ‘Biology’, ‘Science’,’ History’,’ Quran’,’ Geology’)
8 Semester- school year semester (nominal:’ First’,’ Second’)
9 Parent responsible for student (nominal:’mom’,’father’)
10 Raised hand- how many times the student raises his/her hand on classroom (numeric:0-100)
11- Visited resources- how many times the student visits a course content(numeric:0-100)
12 Viewing announcements-how many times the student checks the new announcements(numeric:0-100)
13 Discussion groups- how many times the student participate on discussion groups (numeric:0-100)
14 Parent Answering Survey- parent answered the surveys which are provided from school or not (nominal:’Yes’,’No’)
15 Parent School Satisfaction- the Degree of parent satisfaction from school(nominal:’Yes’,’No’)
16 Student Absence Days-the number of absence days for each student (nominal: above-7, under-7)
The students are classified into three numerical intervals based on their total grade/mark:
Low-Level: interval includes values from 0 to 69,
Middle-Level: interval includes values from 70 to 89,
High-Level: interval includes values from 90-100.
Data Handling
Importing Data with Pandas: Read in a csv of our data
Identify the shape of the dataset: Tells us we have 480 observations/students to analyze
data --> Summary of our data contained in a Pandas DataFrame: Preview of features and values
End of explanation
data.info()
data.describe()
Explanation: Step #1 - Exploring/Cleaning the data
Summary of Statistics:
Use Pandas' describe function to generate descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution (for all numeric columns).
By looking at info: We have the total number of observations = 480, and we see there are no missing values (every column is non-null), i.e., we have a clean data set.
End of explanation
data.groupby('Class').aggregate(['min', np.median, np.mean, max])
Explanation: Summary of Statistics [separated by class]: 'High, Middle, Low'
Now that we have an overview for the entire dataset, let's dig deeper and focus on the features we're primarily interested in: the grades the student receive.
Using pandas grouby we can split the data into groups based on some criteria (in this case --- the class column 'H, M, L').
Using the aggregate function with the groupby, we are able to compute the summary statistics about each group (in this case --- the min, median, mean and max for each separate class 'H, M, L').
We then compare the summary statistics for total students (table above) vs. students for each separate classes 'H, M, L' (table below). The information presented below is more granular & outlines differences in class.
For example: We see that the average number of times students with 'H' raised their hands is around 70.23, whereas for students with 'L' that number drops to around 17. This separation of mean values will become important later when we preprocess the data.
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
data.Class.value_counts().plot(kind='bar')
data.Class.value_counts()
Explanation: Data Visualization & Exploratory Analysis
Libraries used: Matplotlib & Seaborn
First step in exploring the data to classify the grades of students is to look at a simple value count.
Next we visualize the number of students in each of the separate classes to get an idea of how evenly spread the 'H, L, and M's are among the students in the dataset.
As the bar graph shows, distribution is fairly even, no major skews toward any individual class.
End of explanation
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(18,4))
plt.subplot(141)
good_sat= data.Class[data.ParentschoolSatisfaction== 'Good'].value_counts()
good_sat.plot(kind='bar')
plt.title('Parent school satisfaction = Good')
plt.subplot(142)
bad_sat = data.Class[data.ParentschoolSatisfaction== 'Bad'].value_counts()
bad_sat.plot(kind='bar')
plt.title('Parent school satisfaction = Bad')
plt.subplot(143)
survey_no = data.Class[data.Relation == 'Father'].value_counts()
survey_no.plot(kind='bar')
plt.title('Father Responsible for Student')
plt.subplot(144)
survey_yes = data.Class[data.Relation == 'Mum'].value_counts()
survey_yes.plot(kind='bar')
plt.title('Mother Responsible for Student')
Explanation: Categorical Feature Analysis - Parent Satisfation & Relation [Separated by Class: H, M, L]
Now we want to pair those 'High, Low and Middle'grade value counts with other features that may indicate patterns in the distribution of grades and add value to our analysis.
Below are 4 bar graphs for which grades students received based on Parent satisfaction (good or bad) and Relation responsible for the student (Father or Mum)
There is a clear pattern where:
High grade = Parent schoool satisfaction (good) & Relation responsible (Mum)
Low grade = Parent schoool satisfaction (bad) & Relation responsible (Father)
End of explanation
grades_count = data.groupby(['ParentschoolSatisfaction','Class'])['Class'].aggregate('count').unstack()
grades_count
Explanation: Once again, by using the pandas groupby function, we can get a table showing the exact count of the distribution of students in H, L, M --- looking at the first feature "Parent School Satisfaction" only.
End of explanation
#shows that adding relation only adds noise, not a significant difference between father & mum
data.pivot_table('raisedhands', index = ['ParentschoolSatisfaction','Relation'],
columns = 'Class', aggfunc = 'mean')
Explanation: Digging a little deeper:
Using pandas pivot_table we can capture more complex insights from the data & further break down 'H, L, M' class structure by including two levels of analysis (both Parent school satisfaction & Relation).
It is common to start with simple analysis with one feature and add complexity with multiple features as we understand how they interact with our target values of interest.
End of explanation
parents_hands = data.pivot_table('raisedhands', index = ['ParentschoolSatisfaction','Relation'],
columns = 'Class', aggfunc = 'mean')
parents_hands.plot()
plt.ylabel('Average number of times student raised hand')
Explanation: Here is a visualization of the above pivot table. The combination of two features do not vary too much within the 'H, M, L' classes, but there is a distict gap showing that students in 'H' class raised their hands at an average of around 3 times as often as 'L' students.
End of explanation
import matplotlib.pyplot as plt
attendance = pd.crosstab(index=data['StudentAbsenceDays'], columns=[data['Class']], normalize='columns')
attendance.plot(kind='bar', figsize=(6,6), stacked=True)
Explanation: Categorical Feature Analysis Cont.
Another good feature that may add value to our analysis of factors that contributes to a student's class is their attendance record.
Again, looking at value counts for the separate classes, we see that the number of students who had under-7 absences and received a 'Low' class/grade is very low. (and vice versa for above-7 absences & 'H')
End of explanation
fig = plt.figure(figsize=(18,8))
plt.subplot(221)
data.raisedhands[data.Class == 'H'].plot(kind='kde')
data.raisedhands[data.Class == 'M'].plot(kind='kde')
data.raisedhands[data.Class == 'L'].plot(kind='kde')
plt.legend(('High', 'Middle','Low'),loc='best')
plt.title('Raised Hands')
plt.subplot(222)
data.VisITedResources[data.Class == 'H'].plot(kind='kde')
data.VisITedResources[data.Class == 'M'].plot(kind='kde')
data.VisITedResources[data.Class == 'L'].plot(kind='kde')
plt.legend(('High', 'Middle','Low'),loc='best')
plt.title('Visited Resources')
plt.subplot(223)
data.Discussion[data.Class == 'H'].plot(kind='kde')
data.Discussion[data.Class == 'M'].plot(kind='kde')
data.Discussion[data.Class == 'L'].plot(kind='kde')
plt.legend(('High', 'Middle','Low'),loc='best')
plt.title('Discussion')
plt.subplot(224)
data.AnnouncementsView[data.Class == 'H'].plot(kind='kde')
data.AnnouncementsView[data.Class == 'M'].plot(kind='kde')
data.AnnouncementsView[data.Class == 'L'].plot(kind='kde')
plt.legend(('High', 'Middle','Low'),loc='best')
plt.title('Viewed Announcements')
Explanation: Moving on to numerical feature analysis:
Look at the distribution and probability density function of the 4 numerical columns:
1) raised hands
2) Visited Resources
3) Announcements Views
4) Discussion
The graphs for 'raised hands' and 'visited resources' show the most promise in differentiating between 'H' and 'L' classes. The bimodal shape of their density curves show distinct peaks at the opposite ends of the graph.
End of explanation
raised_hands = data['raisedhands']
discussion = data['Discussion']
v_resources = data['VisITedResources']
v_announcements = data['AnnouncementsView']
def correlation(x,y):
std_x = (x-x.mean())/x.std(ddof=0)
std_y = (y-y.mean())/y.std(ddof=0)
return (std_x * std_y).mean()
print ('Raised hands & Discussion: ', correlation(raised_hands, discussion))
print ('Visted Resources & Discussion: ', correlation(v_resources, discussion))
print ('Raised hands & Visited Resources: ', correlation(raised_hands, v_resources))
Explanation: Numerical Feature Analysis: Relationship & Correlation
Correlations can tell us about the direction, and the degree (strength) of the relationship between two variables (or features)
Using Pearson's correlation coefficient (Pearson's r), where r ranges from -1 (perfect negative relationship) through 0 (no relationship) to 1 (perfect positive relationship).
We see that the only relatively strong relationship r = 0.69 is between the two strongest indications of class differences as noted above: 'raised hands' and 'visited resources'.
End of explanation
fig,ax= plt.subplots(figsize=(9,7))
sns.heatmap(data.corr(),annot=True)
Explanation: A more detailed view of correlation: Heatmap visualization
The stronger the correlation, the deeper the shade of purple, really just confirming our earlier calculations.
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
def new_grade(grade):
if grade == 'H':
return 100
elif grade == 'M':
return 50
elif grade == 'L':
return 10
def new_grades(grades):
return grades.apply(new_grade)
print (new_grades(data['Class']).head())
converted_grades = new_grades(data['Class'])
fig = plt.figure(figsize=(12, 9))
plt.scatter(data['VisITedResources'], data['raisedhands'],c= converted_grades, s = converted_grades, cmap = 'RdYlGn')
plt.ylabel("Number of times Student Raised their hand in class")
plt.xlabel("Number of times Student Visited Resources")
plt.title ('Interaction Correlation by Class Marks')
plt.colorbar()
Explanation: Scatterplot Visualization: Class ['H, M, L'] according to 'raisedhands' & 'visitedresources'
Zoom in on main features of importance:
To get a basic understanding of what we are trying to predict, we narrow down the analysis to the two main features that have thus far provided the most information in determining 'H, M, L' classes.
First we need a procedure to turn the class values (categorical) into numerical values in order to plot them and show the different points which are 'H'(large green points) or 'M' (medium yellow points) or 'L' (small red points).
End of explanation
sns.lmplot(x="VisITedResources", y="raisedhands", data=data)
sns.plt.show()
Explanation: While there is a decent amount of noise, we can still see a linear relationship where students who raise their hands and visit class resources are in the group with the higher class.
There is an evident pattern where there's a cluster of 'H's at the top right hand corner of the graph and a cluster of 'L's at the bottom left hand corner of the graph.
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.pairplot(data, hue='Class', size=2.5);
Explanation: Exploring correlations between multidimensional data & plotting all pairs of values against each other
We see the relationship between class is strongest between 'raisedhands' and other features as well as 'visitedresources' and other features
The 2 weaker features seem to be 'announcements view' and 'discussion' where the points for the different 'H, M, L' classes seem to be scattered all over with no particular pattern.
Now that we have this analysis, we have a better understanding of which features will help us predict the target classes we want and which will be added noise. This is an importance part of the process of feature analysis.
End of explanation
#Creating the criteria for an outlier based on multiple conditions. Only dealing with 'H' and 'L' class to be safe.
outliers_1 = data[(data['Class'] == 'H') & (data['raisedhands'] <= 17)]
outliers_2 = data[(data['Class'] == 'H') & (data['VisITedResources'] <= 18)]
outliers_3 = data[(data['Class'] == 'L') & (data['raisedhands'] >= 70)]
outliers_4 = data[(data['Class'] == 'L') & (data['VisITedResources'] >= 78)]
outliers_5 = data[(data['Class'] == 'H') & (data['StudentAbsenceDays'] == 'Above-7')]
outliers_6 = data[(data['Class'] == 'L') & (data['StudentAbsenceDays'] == 'Under-7')]
#dropping the rows which contained the outliers as indicated by the above criteria
new_data = data.drop([14,47,48,72,74,80,84,86,87,88,94,96,124,128,129,190,200,205,226,227,
228,248,250,255,344,345,444,445,450])
#Using shape to check the number of outliers dropped, usually no more than 10%.
#However, since the dataset is small, better to leave more data to work with.
#451 left out of the orginal 480 observations: We only dropped ~ 6% of data.
print(new_data.shape)
Explanation: Step #2. - Data Analysis & Application of Machine Learning Classification Models with Performance Evaluation
1. Preprocessing Data
Removal of outliers
Encoding categorical data
2. Machine Learning Models Used
Logistic Regression Classifier
Support Vector Machines Classifer
Random Forest Classifier (ensemble)
Gradient Boosting Classifier (ensemble)
Bagging Classifier (ensemble)
3. Evaluation Methods Used
Prediction Accuracy Score (on test data)
K-fold cross validation (on training data)
Plotting learning curves to assess Bias vs. Variance
Precision, Recall & F1 Scores
4. Implementation Process
Once the independent variables (features) have been determined, we split the data set into testing and training sets using sklearn's cross validation: train_test_split function
Next we import the models we need from sklearn
Scaling features (using standard scaler) & PCA (if applicable)
Tuning hyperparameters to optimize model performance (each model could require different constraints, weights or learning rates to generalize different data patterns)
Fit/train the data
Predict y-values using test data
Use sklearn's metrics to determine accuracy score
Use k-fold cross validation to avoid overfitting
Get additional performance metrics such as precision, recall and F1 score for comparison
Preprocessing data: remove outliers to reduce noise and improve accuracy
Here, the criteria used for determining outliers is based upon the summary of statistics table above (specifically the one separated by class). Using the mean for 'H' and 'L' students in the first two features, we identify a cutoff point for each. (For example, ff a student with an 'H' raised their hands less than the mean for which a student with 'L' raised their hands--they are removed from the dataset.)
Similarly, when looking at the bar graphs for attendance, we saw that very few 'H' students were absent more than 7 days, and very few 'L' students were absent less than 7 days-- also labeling them as outliers/candidates for removal.
End of explanation
#Preprocessing data to encode categorical values for the y-target column
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
target_value = le.fit_transform(new_data.Class)
print (target_value)
Explanation: Preprocessing Data: Label Encoding Categorical Values to transform the prediction target (y)
Libraries used: Scikit-Learn
In Sklearn, machine learning algorithms require the input variables from the data to be real values. Therefore we must format (or transform) the data into the structure that allows us to feed it into the model.
Here, we use a label encoder to prepare the data where numerical dimensions are used to represent membership in the categories ('H, L, M')
As we can see, once encoded:
|Categorical Class | Numerical value|
|-----------------:| -------------- |
|H (High grade) | 0 |
|L (Low grade) | 1 |
|M (Middle grade) | 2 |
End of explanation
data_dummies = pd.get_dummies(new_data, columns = ['ParentschoolSatisfaction', 'StudentAbsenceDays','Relation'])
print(data_dummies.shape)
data_dummies.head()
Explanation: Preprocessing Data: Dummy/One-hot Encoding to transform the features values (x)
A common alternative approach to encoding categorical values is called dummy (or one-hot) encoding, where the basic strategy is to convert each category value into a new column and assign a 1 or 0 (True/False) value to that column.
Pandas supports this feature using get_dummies to create dummy/indicator variables (aka 1 or 0).
End of explanation
feature_cols = feature_cols = ['raisedhands', 'VisITedResources', 'ParentschoolSatisfaction_Bad', 'Relation_Father',
'Relation_Mum','ParentschoolSatisfaction_Good', 'StudentAbsenceDays_Above-7',
'StudentAbsenceDays_Under-7']
X = data_dummies[feature_cols]
y = target_value
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.15)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
pipe_lr = Pipeline([('rs', StandardScaler()), ('pca', PCA(n_components = 4)),
('logreg', LogisticRegression(C=1e9))])
pipe_lr.fit(X_train, y_train)
y_pred = pipe_lr.predict(X_test)
from sklearn.metrics import accuracy_score
print ('Prediction Accuracy:', accuracy_score(y_test, y_pred))
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(estimator = pipe_lr, X= X_train, y = y_train, cv = 10, n_jobs =1)
print ('Cross-validated Scores: %s' %scores)
print("CV Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
from sklearn.metrics import classification_report
print(' ', classification_report(y_test, y_pred))
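# The "tuning hyperparameters" step from the implementation process can be sketched with a grid search
# over the pipeline defined in this cell (parameter names follow the 'pca'/'logreg' step names above):
from sklearn.model_selection import GridSearchCV
param_grid = {'logreg__C': [0.01, 0.1, 1, 10, 100], 'pca__n_components': [2, 4, 6]}
grid = GridSearchCV(pipe_lr, param_grid, cv=5)
grid.fit(X_train, y_train)
print('Best parameters:', grid.best_params_)
print('Best CV accuracy:', grid.best_score_)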
Explanation: Logistic Regression Classifier
The logistic regression classifier is a method used to generalize logistic regression to multiclass problems (applicable here since we are trying to predict more than two possible discrete outcomes). It is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable.
Dependent variable (y values) = Class ('H, L, M')
Independent variables (or features / x values) = 'raisedhands', 'VisITedResources', 'ParentschoolSatisfaction_Bad', 'Relation_Father', Relation_Mum','ParentschoolSatisfaction_Good', 'StudentAbsenceDays_Above-7','StudentAbsenceDays_Under-7'
In Logistic Regression modeling, input values (X) are combined linearly using weights or coefficient values to predict an output value (y)--- or more specifically, the probability that an input (X) belongs to the default class.
Pros:
Low variance
Provides probabilities for outcomes
works well with diagonal (feature) decision boundaries
Cons:
Doesn’t perform well when feature space is too large
High bias
Relies on entire data
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
def plot_learning_curve(estimator, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure()
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Testing score")
plt.legend(loc="best")
return plt
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
# 'svc' is assumed to be a fitted support-vector classifier from the SVM part of this project;
# any of the fitted estimators above (e.g. pipe_lr) can be passed here instead.
estimator = svc
plot_learning_curve(estimator, X, y, (0.7, 1.01), cv=cv, n_jobs=4)
Explanation: Visualization of Learning Curves: Test & Training Data
A learning curve shows the validation and training score of an estimator for varying numbers of training samples. It is a tool to find out how much we benefit from adding more training data and whether the estimator suffers more from a variance error or a bias error.
If both the validation score and the training score converge to a value that is too low with increasing size of the training set, we will not benefit much from more training data.
We can use the sklearn learning_curve function to generate the values that are required to plot such a learning curve (number of samples that have been used, the average scores on the training sets and the average scores on the validation sets)
Learning curve allows us to verify when a model has learning as much as it can about the data, indicated by: a) the performances on the training and testing sets reach a plateau and b) here is a consistent gap between the two error rates, as is consistent with our graph below.
Our Learning Curves show a decent model because a) the testing and training learning curves converge at similar values and b) the smaller the gap between curves, the better our model generalizes. The results of our graph represent moderate bias and low variance, which is an indication that we should increase model complexity--leading us to the ensemble methods we'll be using moving forward.
End of explanation
# RANDOM FOREST CLASSIFIER model -- ensemble method No.1
feature_cols = ['raisedhands', 'VisITedResources', 'ParentschoolSatisfaction_Bad',
'ParentschoolSatisfaction_Good', 'StudentAbsenceDays_Above-7', 'Relation_Father',
'Relation_Mum',
'StudentAbsenceDays_Under-7']
X = data_dummies[feature_cols]
y = target_value
# train/test split
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.15)
from sklearn.preprocessing import StandardScaler
scl = StandardScaler()
scl.fit_transform(X_train, y_train)
scl.transform(X_test)
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(max_depth=3, n_estimators=200, oob_score = True, n_jobs = -1,
random_state=50)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
from sklearn.metrics import accuracy_score
print ('Prediction Accuracy:', accuracy_score(y_test, y_pred))
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(estimator = pipe_lr, X= X_train, y = y_train, cv = 10, n_jobs =1)
print ('Cross-validated Scores: %s' %scores)
print("CV Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
from sklearn.metrics import classification_report
print('Scores', classification_report(y_test, y_pred))
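# A confusion matrix complements the precision/recall/F1 scores above
# (a sketch; assumes the fitted LabelEncoder `le` from the preprocessing step):
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True, fmt='d', xticklabels=le.classes_, yticklabels=le.classes_)
plt.xlabel('Predicted class')
plt.ylabel('True class')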
Explanation: Random Forest Classifier
A random forest is an ensemble method (combination of learning algorithms) that fits a number of decision tree classifiers on various sub-samples of the dataset and use averaging (majority of votes to make a prediction) to improve the predictive accuracy and control over-fitting.
At the root of random forest are decision trees, which are a type of flowchart which assist in the decision making process. Internal nodes represent tests on particular attributes, while branches exiting nodes represent a single test outcome, and leaf nodes represent class labels. The goal is to split on the attributes which create the purest child nodes possible.
Random forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance. This comes at the expense of a small increase in the bias and some loss of interpretability, but generally greatly boosts the performance in the final model.
Pros:
Accurate and does not tend to overfit
Robust against outliers in the predictive variables
It gives estimates of what variables are important in the classification--> feature selection
Cons:
Not as easy to visually interpret
Slower runtime
End of explanation
# Taking a look at feature importance via Random Forest
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
feat_imp = pd.Series(rfc.feature_importances_, index=X.columns)
feat_imp.sort_values(inplace=True, ascending=False)
feat_imp.head(20).plot(kind='barh', title='Feature importance')
Explanation: Random Forest: Feature Importance
One of the best use cases for random forest is that it's a great tool for feature selection. Since random forest is built on having multiple decision tress, one of the byproducts of trying lots of decision tree variations is that you can examine which variables are working best/worst in each tree.
Random forests measures feature importance through something called the Gini Importance or Mean Decrease in Impurity (MDI) calculates each feature importance as the sum over the number of splits (accross all tress) that include the feature, proportionaly to the number of samples it splits.
End of explanation
# Gradient Boosting Classifier: Ensemble method No.2
feature_cols = ['raisedhands', 'VisITedResources', 'ParentschoolSatisfaction_Bad',
'ParentschoolSatisfaction_Good', 'StudentAbsenceDays_Above-7', 'Relation_Father',
'Relation_Mum',
'StudentAbsenceDays_Under-7']
X = data_dummies[feature_cols]
y = target_value
# train/test split
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.15)
from sklearn.preprocessing import StandardScaler
scl = StandardScaler()
scl.fit_transform(X_train, y_train)
scl.transform(X_test)
from sklearn.ensemble import GradientBoostingClassifier
gbc = GradientBoostingClassifier(learning_rate = 0.15, n_estimators = 300, max_depth = 8, min_samples_leaf = 3,
max_features = 'log2')
gbc.fit(X_train,y_train)
y_pred = gbc.predict(X_test)
from sklearn.metrics import accuracy_score
print ('Prediction Accuracy:', accuracy_score(y_test, y_pred))
from sklearn.model_selection import cross_val_score
# cross-validate the gradient boosting model fitted above
scores = cross_val_score(estimator=gbc, X=X_train, y=y_train, cv=10, n_jobs=1)
print ('Cross-validated Scores: %s' %scores)
print("CV Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
from sklearn.metrics import classification_report
print('Scores', classification_report(y_test, y_pred))
Explanation: Gradient Boosting Classifier
Gradient Boosting is based on the idea of boosting, which is a method of trying to modify a weak learner into becoming a better one. It starts by filtering observations, leaving those observations that the weak learner can handle and focusing on developing new weak learners to handle the remaining difficult observations. For example, the model will build trees one at a time, where each new tree helps to correct errors made by the previously trained trees.
With Gradient Boosting, the objective is to minimize the loss of the model by adding weak learners using a gradient descent like procedure. This type of algorithm can be described as a stage-wise additive model. This is because one new weak learner is added at a time and existing weak learners in the model are frozen and left unchanged.
Pros:
Can easily handle qualitative (categorical) features
Very powerful and performs well in most cases
Cons:
Training generally takes longer because of the fact that trees are built sequentially
Harder to fit/tune parameters
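To make the stage-wise additive fitting described above concrete, a minimal sketch (reusing the fitted gbc and the X_test/y_test split from above) that tracks test accuracy as trees are added:
# staged_predict yields the ensemble's predictions after each boosting stage
from sklearn.metrics import accuracy_score
stage_acc = [accuracy_score(y_test, y_hat) for y_hat in gbc.staged_predict(X_test)]
print('accuracy after 1 tree:', round(stage_acc[0], 3), '| after all', len(stage_acc), 'trees:', round(stage_acc[-1], 3))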
End of explanation |
9,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
Quick demo on how FloPy handles external files for arrays
Step1: make an hk and vka array. We'll save hk to files - pretend that you spent months making this important model property. Then make an lpf
Step2: Let's also have some recharge with mixed args as well. Pretend the recharge in the second stress period is very important and precise
Step3: Let's look at the files that were created
Step4: We see that copies of the hk files, as well as the important recharge file, were made in the model_ws. Let's look at the lpf file
Step5: We see that the open/close approach was used - this is because ml.array_free_format is True. Notice that vka is written internally
Step6: Now change model_ws
Step7: Now when we call write_input(), copies of the external files are made in the current model_ws
Step8: Now we see that the external files were copied to the new model_ws
Using external_path
It is sometimes useful when first building a model to write the model arrays as external files for processing and parameter estimation. The model attribute external_path triggers this behavior
Step9: We can see that the model constructor created both model_ws and external_path which is relative to the model_ws
Step10: Now, when we call write_input(), any array properties that were specified as np.ndarray will be written externally. If a scalar was passed as the argument, the value remains internal to the model input files
Step11: Now, vka was also written externally, but not the storage properties. Let's verify the contents of the external path directory. We see our hard-fought hk and important_recharge arrays, as well as the vka arrays.
Step12: Fixed format
All of this behavior also works for fixed-format type models (really, really old models - I mean OLD!)
Step13: We see that now the external arrays are being handled through the name file. Let's look at the name file
Step14: "free" and "binary" format
Step15: The .how attribute
Util2d includes a .how attribute that gives finer grained control of how arrays will written
Step16: This will raise an error since our model does not support free format...
Step17: So let's reset hk layer 1 back to external... | Python Code:
import os
import shutil
import flopy
import numpy as np
# make a model
nlay,nrow,ncol = 10,20,5
model_ws = os.path.join("data","external_demo")
if os.path.exists(model_ws):
shutil.rmtree(model_ws)
# the place for all of your hand made and costly model inputs
array_dir = os.path.join("data","array_dir")
if os.path.exists(array_dir):
shutil.rmtree(array_dir)
os.mkdir(array_dir)
ml = flopy.modflow.Modflow(model_ws=model_ws)
dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2)
Explanation: FloPy
Quick demo on how FloPy handles external files for arrays
End of explanation
hk = np.zeros((nlay,nrow,ncol)) + 5.0
vka = np.zeros_like(hk)
fnames = []
for i,h in enumerate(hk):
fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1))
fnames.append(fname)
np.savetxt(fname,h)
vka[i] = i+1
lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka)
Explanation: make an hk and vka array. We'll save hk to files - pretend that you spent months making this important model property. Then make an lpf
End of explanation
warmup_recharge = np.ones((nrow,ncol))
important_recharge = np.random.random((nrow,ncol))
fname = os.path.join(array_dir,"important_recharge.ref")
np.savetxt(fname,important_recharge)
rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname})
ml.write_input()
Explanation: Let's also have some recharge with mixed args as well. Pretend the recharge in the second stress period is very important and precise
End of explanation
print("model_ws:",ml.model_ws)
print('\n'.join(os.listdir(ml.model_ws)))
Explanation: Let's look at the files that were created
End of explanation
open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20]
Explanation: We see that copies of the hk files, as well as the important recharge file, were made in the model_ws. Let's look at the lpf file
End of explanation
ml.array_free_format
Explanation: We see that the open/close approach was used - this is because ml.array_free_format is True. Notice that vka is written internally
End of explanation
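A quick way to confirm this, using the .how attribute that is covered in more detail further below (a sketch against the lpf built above):
# hk was passed as file names so it is written open/close; vka was passed as an array so it stays internal
print(ml.lpf.hk[0].how, ml.lpf.vka[0].how)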
print(ml.model_ws)
ml.model_ws = os.path.join("data","new_external_demo_dir")
Explanation: Now change model_ws
End of explanation
ml.write_input()
# list the files in model_ws that have 'hk' in the name
print('\n'.join([name for name in os.listdir(ml.model_ws) if "hk" in name or "impor" in name]))
Explanation: Now when we call write_input(), copies of the external files are made in the current model_ws
End of explanation
# make a model - same code as before except for the model constructor
nlay,nrow,ncol = 10,20,5
model_ws = os.path.join("data","external_demo")
if os.path.exists(model_ws):
shutil.rmtree(model_ws)
# the place for all of your hand made and costly model inputs
array_dir = os.path.join("data","array_dir")
if os.path.exists(array_dir):
shutil.rmtree(array_dir)
os.mkdir(array_dir)
# lets make an external path relative to the model_ws
ml = flopy.modflow.Modflow(model_ws=model_ws, external_path="ref")
dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2)
hk = np.zeros((nlay,nrow,ncol)) + 5.0
vka = np.zeros_like(hk)
fnames = []
for i,h in enumerate(hk):
fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1))
fnames.append(fname)
np.savetxt(fname,h)
vka[i] = i+1
lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka)
warmup_recharge = np.ones((nrow,ncol))
important_recharge = np.random.random((nrow,ncol))
fname = os.path.join(array_dir,"important_recharge.ref")
np.savetxt(fname,important_recharge)
rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname})
Explanation: Now we see that the external files were copied to the new model_ws
Using external_path
It is sometimes useful when first building a model to write the model arrays as external files for processing and parameter estimation. The model attribute external_path triggers this behavior
End of explanation
os.listdir(ml.model_ws)
Explanation: We can see that the model constructor created both model_ws and external_path which is relative to the model_ws
End of explanation
ml.write_input()
open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20]
Explanation: Now, when we call write_input(), any array properties that were specified as np.ndarray will be written externally. If a scalar was passed as the argument, the value remains internal to the model input files
End of explanation
ml.lpf.ss.how = "internal"
ml.write_input()
open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20]
print('\n'.join(os.listdir(os.path.join(ml.model_ws,ml.external_path))))
Explanation: Now, vka was also written externally, but not the storage properties. Let's verify the contents of the external path directory. We see our hard-fought hk and important_recharge arrays, as well as the vka arrays.
End of explanation
# make a model - same code as before except for the model constructor
nlay,nrow,ncol = 10,20,5
model_ws = os.path.join("data","external_demo")
if os.path.exists(model_ws):
shutil.rmtree(model_ws)
# the place for all of your hand made and costly model inputs
array_dir = os.path.join("data","array_dir")
if os.path.exists(array_dir):
shutil.rmtree(array_dir)
os.mkdir(array_dir)
# lets make an external path relative to the model_ws
ml = flopy.modflow.Modflow(model_ws=model_ws, external_path="ref")
# explicitly reset the free_format flag BEFORE ANY PACKAGES ARE MADE!!!
ml.array_free_format = False
dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2)
hk = np.zeros((nlay,nrow,ncol)) + 5.0
vka = np.zeros_like(hk)
fnames = []
for i,h in enumerate(hk):
fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1))
fnames.append(fname)
np.savetxt(fname,h)
vka[i] = i+1
lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka)
ml.lpf.ss.how = "internal"
warmup_recharge = np.ones((nrow,ncol))
important_recharge = np.random.random((nrow,ncol))
fname = os.path.join(array_dir,"important_recharge.ref")
np.savetxt(fname,important_recharge)
rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname})
ml.write_input()
Explanation: Fixed format
All of this behavior also works for fixed-format type models (really, really old models - I mean OLD!)
End of explanation
open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines()
Explanation: We see that now the external arrays are being handled through the name file. Let's look at the name file
End of explanation
ml.dis.botm[0].format.binary = True
ml.write_input()
open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines()
open(os.path.join(ml.model_ws,ml.name+".dis"),'r').readlines()
Explanation: "free" and "binary" format
End of explanation
ml.lpf.hk[0].how
Explanation: The .how attribute
Util2d includes a .how attribute that gives finer-grained control of how arrays will be written
End of explanation
ml.lpf.hk[0].how = "openclose"
ml.lpf.hk[0].how
ml.write_input()
Explanation: This will raise an error since our model does not support free format...
End of explanation
ml.lpf.hk[0].how = "external"
ml.lpf.hk[0].how
ml.dis.top.how = "external"
ml.write_input()
open(os.path.join(ml.model_ws,ml.name+".dis"),'r').readlines()
open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()
open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines()
Explanation: So let's reset hk layer 1 back to external...
End of explanation |
9,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Census income classification with scikit-learn
This example uses the standard <a href="https://archive.ics.uci.edu/ml/datasets/Adult">adult census income dataset</a> from the UCI machine learning data repository. We train a k-nearest neighbors classifier using scikit-learn and then explain the predictions.
Step1: Load the census data
Step2: Train a k-nearest neighbors classifier
Here we just train directly on the data, without any normalizations.
Step3: Explain predictions
Normally we would use a logit link function to allow the additive feature inputs to better map to the model's probabilistic output space, but knn's can produce infinite log odds ratios so we don't for this example.
It is important to note that Occupation is the dominant feature in the 1000 predictions we explain. This is because it has larger variations in value than the other features and so it impacts the k-nearest neighbors calculations more.
Step4: A summary beeswarm plot is an even better way to see the relative impact of all features over the entire dataset. Features are sorted by the sum of their SHAP value magnitudes across all samples.
Step5: A heatmap plot provides another global view of the model's behavior, this time with a focus on population subgroups.
Step6: Normalize the data before training the model
Here we retrain a KNN model on standardized data.
Step7: Explain predictions
When we explain predictions from the new KNN model we find that Occupation is no longer the dominant feature; instead, more predictive features, such as marital status, drive most predictions. This is a simple example of how explaining why your model is making its predictions can uncover problems in the training process.
Step8: With a summary plot with see marital status is the most important on average, but other features (such as captial gain) can have more impact on a particular individual.
Step9: A dependence scatter plot shows how the number of years of education increases the chance of making over 50K annually. | Python Code:
import sklearn
import shap
Explanation: Census income classification with scikit-learn
This example uses the standard <a href="https://archive.ics.uci.edu/ml/datasets/Adult">adult census income dataset</a> from the UCI machine learning data repository. We train a k-nearest neighbors classifier using sci-kit learn and then explain the predictions.
End of explanation
X,y = shap.datasets.adult()
X["Occupation"] *= 1000 # to show the impact of feature scale on KNN predictions
X_display,y_display = shap.datasets.adult(display=True)
X_train, X_valid, y_train, y_valid = sklearn.model_selection.train_test_split(X, y, test_size=0.2, random_state=7)
Explanation: Load the census data
End of explanation
knn = sklearn.neighbors.KNeighborsClassifier()
knn.fit(X_train, y_train)
Explanation: Train a k-nearest neighbors classifier
Here we just train directly on the data, without any normalizations.
End of explanation
f = lambda x: knn.predict_proba(x)[:,1]
med = X_train.median().values.reshape((1,X_train.shape[1]))
explainer = shap.Explainer(f, med)
shap_values = explainer(X_valid.iloc[0:1000,:])
shap.plots.waterfall(shap_values[0])
Explanation: Explain predictions
Normally we would use a logit link function to allow the additive feature inputs to better map to the model's probabilistic output space, but knn's can produce infinite log odds ratios so we don't for this example.
It is important to note that Occupation is the dominant feature in the 1000 predictions we explain. This is because it has larger variations in value than the other features and so it impacts the k-nearest neighbors calculations more.
End of explanation
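A quick, hedged way to see the scale imbalance behind this (reusing X_train from above):
# per-feature spread in the raw training data - Occupation dwarfs the rest after the *1000 rescaling above
print(X_train.std().sort_values(ascending=False).head())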
shap.plots.beeswarm(shap_values)
Explanation: A summary beeswarm plot is an even better way to see the relative impact of all features over the entire dataset. Features are sorted by the sum of their SHAP value magnitudes across all samples.
End of explanation
shap.plots.heatmap(shap_values)
Explanation: A heatmap plot provides another global view of the model's behavior, this time with a focus on population subgroups.
End of explanation
# normalize data
dtypes = list(zip(X.dtypes.index, map(str, X.dtypes)))
X_train_norm = X_train.copy()
X_valid_norm = X_valid.copy()
for k,dtype in dtypes:
m = X_train[k].mean()
s = X_train[k].std()
X_train_norm[k] -= m
X_train_norm[k] /= s
X_valid_norm[k] -= m
X_valid_norm[k] /= s
knn_norm = sklearn.neighbors.KNeighborsClassifier()
knn_norm.fit(X_train_norm, y_train)
Explanation: Normalize the data before training the model
Here we retrain a KNN model on standardized data.
End of explanation
f = lambda x: knn_norm.predict_proba(x)[:,1]
med = X_train_norm.median().values.reshape((1,X_train_norm.shape[1]))
explainer = shap.Explainer(f, med)
shap_values_norm = explainer(X_valid_norm.iloc[0:1000,:])
Explanation: Explain predictions
When we explain predictions from the new KNN model we find that Occupation is no longer the dominant feature; instead, more predictive features, such as marital status, drive most predictions. This is a simple example of how explaining why your model is making its predictions can uncover problems in the training process.
End of explanation
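As a rough sanity check of that shift, a sketch comparing the mean absolute SHAP value of Occupation for the two models (reusing shap_values and shap_values_norm from above):
import numpy as np
occ = list(X_valid.columns).index("Occupation")
# mean |SHAP| for Occupation: unscaled KNN vs. KNN trained on standardized data
print(np.abs(shap_values.values[:, occ]).mean(), np.abs(shap_values_norm.values[:, occ]).mean())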
shap.summary_plot(shap_values_norm, X_valid.iloc[0:1000,:])
Explanation: With a summary plot we see that marital status is the most important feature on average, but other features (such as capital gain) can have more impact on a particular individual.
End of explanation
shap.plots.scatter(shap_values_norm[:,"Education-Num"])
Explanation: A dependence scatter plot shows how the number of years of education increases the chance of making over 50K annually.
End of explanation |
9,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-1', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
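# Illustrative only - hypothetical value from the choices above: DOC.set_value("prognostic")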
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
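# Illustrative only - hypothetical value for this INTEGER property: DOC.set_value(1)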
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the processes by which routed river water can be re-evaporated (e.g. over flood plains or through irrigation)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (e.g. vertical, horizontal)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
9,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
检索,查询数据
这一节学习如何检索pandas数据。
Step1: Python和Numpy的索引操作符[]和属性操作符‘.’能够快速检索pandas数据。
然而,这两种方式的效率在pandas中可能不是最优的,我们推荐使用专门优化过的pandas数据检索方法。而这些方法则是本节要介绍的。
多种索引方式
pandas支持三种不同的索引方式:
* .loc 基于label进行索引,当然也可以和boolean数组一起使用。‘.loc’接受的输入:
* 一个单独的label,比如5、'a',注意,这里的5是index值,而不是整形下标
* label列表或label数组,比如['a', 'b', 'c']
* .iloc 是基本的基于整数位置(从0到axis的length-1)的,当然也可以和一个boolean数组一起使用。当提供检索的index越界时会有IndexError错误,注意切片索引(slice index)允许越界。
* .ix 支持基于label和整数位置混合的数据获取方式。默认是基本label的. .ix是最常用的方式,它支持所有.loc和.iloc的输入。如果提供的是纯label或纯整数索引,我们建议使用.loc或 .iloc。
以 .loc为例看一下使用方式:
对象类型 | Indexers
Series | s.loc[indexer]
DataFrame | df.loc[row_indexer, column_indexer]
Panel | p.loc[item_indexer, major_indexer, minor_indexer]
最基本的索引和选择
最基本的选择数据方式就是使用[]操作符进行索引,
对象类型 | Selection | 返回值类型
Series | series[label],这里的label是index名 | 常数
DataFrame| frame[colname],使用列名 | Series对象,相应的colname那一列
Panel | panel[itemname] | DataFrame对象,相应的itemname那一个
下面用示例展示一下
Step2: 我们使用最基本的[]操作符
Step3: Series使用index索引
Step4: 也可以给[]传递一个column name组成的的list,形如df[[col1,col2]], 如果给出的某个列名不存在,会报错
Step5: 通过属性访问 把column作为DataFrame对象的属性
可以直接把Series的index、DataFrame中的column、Panel中的item作为这些对象的属性使用,然后直接访问相应的index、column、item
Step6: 注意:使用属性和[] 有一点区别:
如果要新建一个column,只能使用[]
毕竟属性的含义就是现在存在的!不存在的列名当然不是属性了
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if you try to use attribute access to create a new column, it fails silently, creating a new attribute rather than a new column.
使用属性要注意的:
* 如果一个已经存在的函数和列名相同,则不存在相应的属性哦
* 总而言之,属性的适用范围要比[]小
切片范围 Slicing ranges
可以使用 [] 还有.iloc切片,这里先介绍使用[]
对于Series来说,使用[]进行切片就像ndarray一样,
Step7: []不但可以检索,也可以赋值
Step8: 对于DataFrame对象来说,[]操作符按照行进行切片,非常有用。
Step9: 使用Label进行检索
警告:
.loc要求检索时输入必须严格遵守index的类型,一旦输入类型不对,将会引起TypeError。
Step10: 输入string进行检索没问题
Step11: 细心地你一定发现了,index='20160104'那一行也被检索出来了,没错,loc检索时范围是闭集合[start,end].
整型可以作为label检索,这是没问题的,不过要记住此时整型表示的是label而不是index中的下标!
.loc操作是检索时的基本操作,以下输入格式都是合法的:
* 一个label,比如:5、'a'. 记住这里的5表示的是index中的一个label而不是index中的一个下标。
* label组成的列表或者数组比如['a','b','c']
* 切片,比如'a'
Step12: loc同样支持赋值操作
Step13: 再来看看DataFramed的例子
Step14: 使用切片检索
Step15: 使用布尔数组检索
Step16: 得到DataFrame中的某一个值, 等同于df1.get_value('a','A')
Step17: 根据下标进行检索 Selection By Position
pandas提供了一系列的方法实现基于整型的检索。语义和python、numpy切片几乎一样。下标同样都是从0开始,并且进行的是半闭半开的区间检索[start,end)。如果输入 非整型label当做下标进行检索会引起IndexError。
.iloc的合法输入包括:
* 一个整数,比如5
* 整数组成的列表或者数组,比如[4,3,0]
* 整型表示的切片,比如1
Step18: iloc同样也可以进行赋值
Step19: DataFrame的示例
Step20: 进行行和列的检索
Step21: 注意下面两个例子的区别:
Step22: 如果切片检索时输入的范围越界,没关系,只要pandas版本>=v0.14.0, 就能如同Python/Numpy那样正确处理。
注意:仅限于 切片检索
Step23: 上面说到,这种优雅处理越界的能力仅限于输入全是切片,如果输入是越界的 列表或者整数,则会引起IndexError
Step24: 输入有切片,有整数,如果越界同样不能处理
Step25: 选择随机样本 Selecting Random Samples
使用sample()方法能够从行或者列中进行随机选择,适用对象包括Series、DataFrame和Panel。sample()方法默认对行进行随机选择,输入可以是整数或者小数。
Step26: 也可以输入小数,则会随机选择N*frac个样本, 结果进行四舍五入
Step27: sample()默认进行的无放回抽样,可以利用replace=True参数进行可放回抽样。
Step28: 默认情况下,每一行/列都被等可能的采样,如果你想为每一行赋予一个被抽样选择的权重,可以利用weights参数实现。
注意:如果weights中各概率相加和不等于1,pandas会先对weights进行归一化,强制转为概率和为1!
Step29: 注意:由于sample默认进行的是无放回抽样,所以输入必须n<=行数,除非进行可放回抽样。
Step30: 如果是对DataFrame对象进行有权重采样,一个简单 的方法是新增一列用于表示每一行的权重
Step31: 对列进行采样, axis=1
Step32: 我们也可以使用random_state参数 为sample内部的随机数生成器提供种子数。
Step33: 注意下面两个示例,输出是相同的,因为使用了相同的种子数
Step34: 使用赋值的方式扩充对象 Setting With Enlargement
用.loc/.ix/[]对不存在的键值进行赋值时,将会导致在对象中添加新的元素,它的键即为赋值时不存在的键。
对于Series来说,这是一种有效的添加操作。
Step35: DataFrame可以在行或者列上扩充数据
Step36: 标量值的快速获取和赋值
如果仅仅想获取一个元素,使用[]未免太繁重了。pandas提供了快速获取一个元素的方法:at和iat. 适用于Series、DataFrame和Panel。
如果loc方法,at方法的合法输入是label,iat的合法输入是整型。
Step37: 也可以进行赋值操作
Step38: 布尔检索 Boolean indexing
另一种常用的操作是使用布尔向量过滤数据。运算符有三个
Step39: DataFrame示例:
Step40: 利用列表解析和map方法能够产生更加复杂的选择标准。
Step41: 结合loc、iloc等方法可以检索多个坐标下的数据.
Step42: 使用isin方法检索 Indexing with isin
isin(is in)
对于Series对象来说,使用isin方法时传入一个列表,isin方法会返回一个布尔向量。布尔向量元素为1的前提是列表元素在Series对象中存在。看起来比较拗口,还是看例子吧:
Step43: Index对象中也有isin方法.
Step44: DataFrame同样有isin方法,参数是数组或字典。二者的区别看例子吧:
Step45: 输入一个字典的情形:
Step46: 结合isin方法和any() all()可以对DataFrame进行快速查询。比如选择每一列都符合标准的行
Step47: where()方法 The where() Method and Masking
使用布尔向量对Series对象查询时通常返回的是对象的子集。如果想要返回的shape和原对象相同,可以使用where方法。
使用布尔向量对DataFrame对象查询返回的shape和原对象相同,这是因为底层用的where方法实现。
Step48: 使用where方法
Step49: where方法还有一个可选的other参数,作用是替换返回结果中是False的值,并不会改变原对象。
Step50: 你可能想基于某种判断条件来赋值。一种直观的方法是:
Step51: 默认情况下,where方法并不会修改原始对象,它返回的是一个修改过的原始对象副本,如果你想直接修改原始对象,方法是将inplace参数设置为True
Step52: 对齐
where方法会将输入的布尔条件对齐,因此允许部分检索时的赋值。
Step53: mask
Step54: query()方法 The query() Method (Experimental)
DataFrame对象拥有query方法,允许使用表达式检索。
比如,检索列'b'的值介于列‘a’和‘c’之间的行。
注意: 需要安装numexptr。
Step55: MultiIndex query() 语法
对于DataFrame对象,可以使用MultiIndex,如同操作列名一样。
Step56: 如果index没有名字,可以给他们命名
Step57: ilevl_0意思是 0级index。
query() 用例 query() Use Cases
一个使用query()的情景是面对DataFrame对象组成的集合,并且这些对象有共同的的列名,则可以利用query方法对这个集合进行统一检索。
Step58: Python中query和pandas中query语法比较 query() Python versus pandas Syntax Comparison
Step59: query()可以去掉圆括号, 也可以用and 代替&运算符
Step60: in 和not in 运算符 The in and not in operators
query()也支持Python中的in和not in运算符,实际上是底层调用isin
Step61: ==和列表对象一起使用 Special use of the == operator with list objects
可以使用==/!=将列表和列名直接进行比较,等价于使用in/not in.
三种方法功能等价: ==/!= VS in/not in VS isin()/~isin()
Step62: 布尔运算符 Boolean Operators
可以使用not或者~对布尔表达式进行取非。
Step63: 表达式任意复杂都没关系。
Step64: query()的性能
DataFrame.query()底层使用numexptr,所以速度要比Python快,特别时当DataFrame对象非常大时。
重复数据的确定和删除 Duplicate Data
如果你想确定和去掉DataFrame对象中重复的行,pandas提供了两个方法:duplicated和drop_duplicates. 两个方法的参数都是列名。
* duplicated 返回一个布尔向量,长度等于行数,表示每一行是否重复
* drop_duplicates 则删除重复的行
默认情况下,首次遇到的行被认为是唯一的,以后遇到内容相同的行都被认为是重复的。不过两个方法都有一个keep参数来确定目标行是否被保留。
* keep='first'(默认):标记/去掉重复行除了第一次出现的那一行
* keep='last'
Step65: 可以传递列名组成的列表
Step66: 也可以检查index值是否重复来去掉重复行,方法是Index.duplicated然后使用切片操作(因为调用Index.duplicated会返回布尔向量)。keep参数同上。
Step67: 形似字典的get()方法
Serires, DataFrame和Panel都有一个get方法来得到一个默认值。
Step68: select()方法 The select() Method
Series, DataFrame和Panel都有select()方法来检索数据,这个方法作为保留手段通常其他方法都不管用的时候才使用。select接受一个函数(在label上进行操作)作为输入返回一个布尔值。
Step69: lookup()方法 The lookup()方法
输入行label和列label,得到一个numpy数组,这就是lookup方法的功能。
Step70: Index对象 Index objects
pandas中的Index类和它的子类可以被当做一个序列可重复集合(ordered multiset),允许数据重复。然而,如果你想把一个有重复值Index对象转型为一个集合这是不可以的。创建Index最简单的方法就是通过传递一个列表或者其他序列创建。
Step71: 还可以个Index命名
Step72: 返回视图VS返回副本 Returning a view versus a copy
当对pandas对象赋值时,一定要注意避免链式索引(chained indexing)。看下面的例子:
Step73: 比较下面两种访问方式:
Step74: 上面两种方法返回的结果抖一下,那么应该使用哪种方法呢?答案是我们更推荐大家使用方法二。
dfmi['one']选择了第一级列然后返回一个DataFrame对象,然后另一个Python操作dfmi_with_one['second']根据'second'检索出了一个Series。对pandas来说,这两个操作是独立、有序执行的。而.loc方法传入一个元组(slice(None),('one','second')),pandas把这当作一个事件执行,所以执行速度更快。
为什么使用链式索引赋值为报错?
刚才谈到不推荐使用链式索引是出于性能的考虑。接下来从赋值角度谈一下不推荐使用链式索引。首先,思考Python怎么解释执行下面的代码?
Step75: 但下面的代码解释后结果却不一样:
Step76: 看到__getitem__了吗?除了最简单的情况,我们很难预测他到底返回的是视图还是副本(哲依赖于数组的内存布局,这是pandas没有硬性要求的),因此不推荐使用链式索引赋值!
而dfmi.loc.__setitem__直接对dfmi进行操作。
有时候明明没有使用链式索引,也会引起SettingWithCopy警告,这是Pandas设计的bug~
Step77: 链式索引中顺序也很重要
此外,在链式表达式中,不同的顺序也可能导致不同的结果。这里的顺序指的是检索时行和列的顺序。
Step78: 正确的方式是:老老实实使用.loc | Python Code:
import numpy as np
import pandas as pd
Explanation: 检索,查询数据
这一节学习如何检索pandas数据。
End of explanation
dates = pd.date_range('1/1/2000', periods=8)
dates
df = pd.DataFrame(np.random.randn(8,4), index=dates, columns=list('ABCD'))
df
panel = pd.Panel({'one':df, 'two':df-df.mean()})
panel
Explanation: Python和Numpy的索引操作符[]和属性操作符‘.’能够快速检索pandas数据。
然而,这两种方式的效率在pandas中可能不是最优的,我们推荐使用专门优化过的pandas数据检索方法。而这些方法则是本节要介绍的。
多种索引方式
pandas支持三种不同的索引方式:
* .loc 基于label进行索引,当然也可以和boolean数组一起使用。‘.loc’接受的输入:
* 一个单独的label,比如5、'a',注意,这里的5是index值,而不是整形下标
* label列表或label数组,比如['a', 'b', 'c']
* .iloc 是基本的基于整数位置(从0到axis的length-1)的,当然也可以和一个boolean数组一起使用。当提供检索的index越界时会有IndexError错误,注意切片索引(slice index)允许越界。
* .ix 支持基于label和整数位置混合的数据获取方式。默认是基本label的. .ix是最常用的方式,它支持所有.loc和.iloc的输入。如果提供的是纯label或纯整数索引,我们建议使用.loc或 .iloc。
以 .loc为例看一下使用方式:
对象类型 | Indexers
Series | s.loc[indexer]
DataFrame | df.loc[row_indexer, column_indexer]
Panel | p.loc[item_indexer, major_indexer, minor_indexer]
最基本的索引和选择
最基本的选择数据方式就是使用[]操作符进行索引,
对象类型 | Selection | 返回值类型
Series | series[label],这里的label是index名 | 常数
DataFrame| frame[colname],使用列名 | Series对象,相应的colname那一列
Panel | panel[itemname] | DataFrame对象,相应的itemname那一个
下面用示例展示一下
End of explanation
s = df['A'] #使用列名
s#返回的是 Series
Explanation: 我们使用最基本的[]操作符
End of explanation
s[dates[5]] #使用index名
panel['two']
Explanation: Series使用index索引
End of explanation
df
df[['B', 'A']] = df[['A', 'B']]
df
Explanation: 也可以给[]传递一个column name组成的的list,形如df[[col1,col2]], 如果给出的某个列名不存在,会报错
End of explanation
sa = pd.Series([1,2,3],index=list('abc'))
dfa = df.copy()
sa
sa.b #直接把index作为属性
dfa
dfa.A
panel.one
sa
sa.a = 5
sa
sa
dfa.A=list(range(len(dfa.index))) # ok if A already exists
dfa
dfa['A'] = list(range(len(dfa.index))) # use this form to create a new column
dfa
Explanation: 通过属性访问 把column作为DataFrame对象的属性
可以直接把Series的index、DataFrame中的column、Panel中的item作为这些对象的属性使用,然后直接访问相应的index、column、item
End of explanation
s
s[:5]
s[::2]
s[::-1]
Explanation: 注意:使用属性和[] 有一点区别:
如果要新建一个column,只能使用[]
毕竟属性的含义就是现在存在的!不存在的列名当然不是属性了
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if you try to use attribute access to create a new column, it fails silently, creating a new attribute rather than a new column.
使用属性要注意的:
* 如果一个已经存在的函数和列名相同,则不存在相应的属性哦
* 总而言之,属性的适用范围要比[]小
切片范围 Slicing ranges
可以使用 [] 还有.iloc切片,这里先介绍使用[]
对于Series来说,使用[]进行切片就像ndarray一样,
End of explanation
s2 = s.copy()
s2[:5]=0 #赋值
s2
Explanation: []不但可以检索,也可以赋值
End of explanation
df[:3]
df[::-1]
Explanation: 对于DataFrame对象来说,[]操作符按照行进行切片,非常有用。
End of explanation
df1 = pd.DataFrame(np.random.rand(5,4), columns=list('ABCD'), index=pd.date_range('20160101',periods=5))
df1
df1.loc[2:3]
Explanation: 使用Label进行检索
警告:
.loc要求检索时输入必须严格遵守index的类型,一旦输入类型不对,将会引起TypeError。
End of explanation
df1.loc['20160102':'20160104']
Explanation: 输入string进行检索没问题
End of explanation
s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
s1
s1.loc['c':]
s1.loc['b']
Explanation: 细心地你一定发现了,index='20160104'那一行也被检索出来了,没错,loc检索时范围是闭集合[start,end].
整型可以作为label检索,这是没问题的,不过要记住此时整型表示的是label而不是index中的下标!
.loc操作是检索时的基本操作,以下输入格式都是合法的:
* 一个label,比如:5、'a'. 记住这里的5表示的是index中的一个label而不是index中的一个下标。
* label组成的列表或者数组比如['a','b','c']
* 切片,比如'a':'f'.注意loc中切片范围是闭集合!
* 布尔数组
End of explanation
s1.loc['c':]=0
s1
Explanation: loc同样支持赋值操作
End of explanation
df1 = pd.DataFrame(np.random.randn(6,4), index=list('abcdef'),columns=list('ABCD'))
df1
df1.loc[['a','b','c','d'],:]
df1.loc[['a','b','c','d']] #可以省略 ':'
Explanation: 再来看看DataFramed的例子
End of explanation
df1.loc['d':,'A':'C'] #注意是闭集合
df1.loc['a']
Explanation: 使用切片检索
End of explanation
df1.loc['a']>0
df1.loc[:,df1.loc['a']>0]
Explanation: 使用布尔数组检索
End of explanation
df1.loc['a','A']
df1.get_value('a','A')
Explanation: 得到DataFrame中的某一个值, 等同于df1.get_value('a','A')
End of explanation
s1 = pd.Series(np.random.randn(5),index=list(range(0,10,2)))
s1
s1.iloc[:3] #注意检索是半闭半开区间
s1.iloc[3]
Explanation: 根据下标进行检索 Selection By Position
pandas提供了一系列的方法实现基于整型的检索。语义和python、numpy切片几乎一样。下标同样都是从0开始,并且进行的是半闭半开的区间检索[start,end)。如果输入 非整型label当做下标进行检索会引起IndexError。
.iloc的合法输入包括:
* 一个整数,比如5
* 整数组成的列表或者数组,比如[4,3,0]
* 整型表示的切片,比如1:7
* 布尔数组
看一下Series使用iloc检索的示例:
End of explanation
s1.iloc[:3]=0
s1
Explanation: iloc同样也可以进行赋值
End of explanation
df1 = pd.DataFrame(np.random.randn(6,4),index=list(range(0,12,2)), columns=list(range(0,8,2)))
df1
df1.iloc[:3]
Explanation: DataFrame的示例:
End of explanation
df1.iloc[1:5,2:4]
df1.iloc[[1,3,5],[1,2]]
df1.iloc[1:3,:]
df1.iloc[:,1:3]
df1.iloc[1,1]#只检索一个元素
Explanation: 进行行和列的检索
End of explanation
df1.iloc[1]
df1.iloc[1:2]
Explanation: 注意下面两个例子的区别:
End of explanation
x = list('abcdef')
x
x[4:10] #这里x的长度是6
x[8:10]
s = pd.Series(x)
s
s.iloc[4:10]
s.iloc[8:10]
df1 = pd.DataFrame(np.random.randn(5,2), columns=list('AB'))
df1
df1.iloc[:,2:3]
df1.iloc[:,1:3]
df1.iloc[4:6]
Explanation: 如果切片检索时输入的范围越界,没关系,只要pandas版本>=v0.14.0, 就能如同Python/Numpy那样正确处理。
注意:仅限于 切片检索
End of explanation
df1.iloc[[4,5,6]]
Explanation: 上面说到,这种优雅处理越界的能力仅限于输入全是切片,如果输入是越界的 列表或者整数,则会引起IndexError
End of explanation
df1.iloc[:,4]
Explanation: 输入有切片,有整数,如果越界同样不能处理
End of explanation
s = pd.Series([0,1,2,3,4,5])
s
s.sample()
s.sample(n=6)
s.sample(3) #直接输入整数即可
Explanation: 选择随机样本 Selecting Random Samples
使用sample()方法能够从行或者列中进行随机选择,适用对象包括Series、DataFrame和Panel。sample()方法默认对行进行随机选择,输入可以是整数或者小数。
End of explanation
s.sample(frac=0.5)
s.sample(0.5) #必须输入frac=0.5
s.sample(frac=0.8) #6*0.8=4.8
s.sample(frac=0.7)# 6*0.7=4.2
Explanation: 也可以输入小数,则会随机选择N*frac个样本, 结果进行四舍五入
End of explanation
s
s.sample(n=6,replace=False)
s.sample(6,replace=True)
Explanation: sample()默认进行的无放回抽样,可以利用replace=True参数进行可放回抽样。
End of explanation
s = pd.Series([0,1,2,3,4,5])
s
example_weights=[0,0,0.2,0.2,0.2,0.4]
s.sample(n=3,weights=example_weights)
example_weights2 = [0.5, 0, 0, 0, 0, 0]
s.sample(n=1, weights=example_weights2)
s.sample(n=2, weights=example_weights2) #n>1 会报错,
Explanation: 默认情况下,每一行/列都被等可能的采样,如果你想为每一行赋予一个被抽样选择的权重,可以利用weights参数实现。
注意:如果weights中各概率相加和不等于1,pandas会先对weights进行归一化,强制转为概率和为1!
End of explanation
s
s.sample(7) #7不行
s.sample(7,replace=True)
Explanation: 注意:由于sample默认进行的是无放回抽样,所以输入必须n<=行数,除非进行可放回抽样。
End of explanation
df2 = pd.DataFrame({'col1':[9,8,7,6], 'weight_column':[0.5, 0.4, 0.1, 0]})
df2
df2.sample(n=3,weights='weight_column')
Explanation: 如果是对DataFrame对象进行有权重采样,一个简单 的方法是新增一列用于表示每一行的权重
End of explanation
df3 = pd.DataFrame({'col1':[1,2,3], 'clo2':[2,3,4]})
df3
df3.sample(1,axis=1)
Explanation: 对列进行采样, axis=1
End of explanation
df4 = pd.DataFrame({'col1':[1,2,3], 'clo2':[2,3,4]})
df4
Explanation: 我们也可以使用random_state参数 为sample内部的随机数生成器提供种子数。
End of explanation
df4.sample(n=2, random_state=2)
df4.sample(n=2,random_state=2)
df4.sample(n=2,random_state=3)
Explanation: 注意下面两个示例,输出是相同的,因为使用了相同的种子数
End of explanation
se = pd.Series([1,2,3])
se
se[5]=5
se
Explanation: 使用赋值的方式扩充对象 Setting With Enlargement
用.loc/.ix/[]对不存在的键值进行赋值时,将会导致在对象中添加新的元素,它的键即为赋值时不存在的键。
对于Series来说,这是一种有效的添加操作。
End of explanation
dfi = pd.DataFrame(np.arange(6).reshape(3,2),columns=['A','B'])
dfi
dfi.loc[:,'C']=dfi.loc[:,'A'] #对列进行扩充
dfi
dfi.loc[3]=5 #对行进行扩充
dfi
Explanation: DataFrame可以在行或者列上扩充数据
End of explanation
s.iat[5]
df.at[dates[5],'A']
df.iat[3,0]
Explanation: 标量值的快速获取和赋值
如果仅仅想获取一个元素,使用[]未免太繁重了。pandas提供了快速获取一个元素的方法:at和iat. 适用于Series、DataFrame和Panel。
如果loc方法,at方法的合法输入是label,iat的合法输入是整型。
End of explanation
df.at[dates[-1]+1,0]=7
df
Explanation: 也可以进行赋值操作
End of explanation
s = pd.Series(range(-3, 4))
s
s[s>0]
s[(s<-1) | (s>0.5)]
s[~(s<0)]
Explanation: 布尔检索 Boolean indexing
另一种常用的操作是使用布尔向量过滤数据。运算符有三个:|(or), &(and), ~(not)。
注意:运算符的操作数要在圆括号内。
使用布尔向量检索Series的操作方式和numpy ndarray一样。
End of explanation
df[df['A'] > 0]
Explanation: DataFrame示例:
End of explanation
df2 = pd.DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
'c' : np.random.randn(7)})
df2
criterion = df2['a'].map(lambda x:x.startswith('t'))
df2[criterion]
df2[[x.startswith('t') for x in df2['a']]]
df2[criterion & (df2['b'] == 'x')]
Explanation: 利用列表解析和map方法能够产生更加复杂的选择标准。
End of explanation
df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']
Explanation: 结合loc、iloc等方法可以检索多个坐标下的数据.
End of explanation
s = pd.Series(np.arange(5), index=np.arange(5)[::-1],dtype='int64')
s
s.isin([2,4,6])
s[s.isin([2,4,6])]
Explanation: 使用isin方法检索 Indexing with isin
isin(is in)
对于Series对象来说,使用isin方法时传入一个列表,isin方法会返回一个布尔向量。布尔向量元素为1的前提是列表元素在Series对象中存在。看起来比较拗口,还是看例子吧:
End of explanation
s[s.index.isin([2,4,6])]
s[[2,4,6]]
Explanation: Index对象中也有isin方法.
End of explanation
df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
'ids2':['a', 'n', 'c', 'n']})
df
values=['a', 'b', 1, 3]
df.isin(values)
Explanation: DataFrame同样有isin方法,参数是数组或字典。二者的区别看例子吧:
End of explanation
values = {'ids': ['a', 'b'], 'vals': [1, 3]}
df.isin(values)
Explanation: 输入一个字典的情形:
End of explanation
values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
row_mark = df.isin(values).all(1)
df[row_mark]
row_mark = df.isin(values).any(1)
df[row_mark]
Explanation: 结合isin方法和any() all()可以对DataFrame进行快速查询。比如选择每一列都符合标准的行:
End of explanation
s[s>0]
Explanation: where()方法 The where() Method and Masking
使用布尔向量对Series对象查询时通常返回的是对象的子集。如果想要返回的shape和原对象相同,可以使用where方法。
使用布尔向量对DataFrame对象查询返回的shape和原对象相同,这是因为底层用的where方法实现。
End of explanation
s.where(s>0)
df[df<0]
df.where(df<0)
Explanation: 使用where方法
End of explanation
df.where(df<0, 2)
df
df.where(df<0, df) #将df作为other的参数值
Explanation: where方法还有一个可选的other参数,作用是替换返回结果中是False的值,并不会改变原对象。
End of explanation
s2 = s.copy()
s2
s2[s2<0]=0
s2
Explanation: 你可能想基于某种判断条件来赋值。一种直观的方法是:
End of explanation
df = pd.DataFrame(np.random.randn(6,5), index=list('abcdef'), columns=list('ABCDE'))
df_orig = df.copy()
df_orig.where(df < 0, -df, inplace=True);
df_orig
Explanation: 默认情况下,where方法并不会修改原始对象,它返回的是一个修改过的原始对象副本,如果你想直接修改原始对象,方法是将inplace参数设置为True
End of explanation
df2 = df.copy()
df2[df2[1:4] >0]=3
df2
df2 = df.copy()
df2.where(df2>0, df2['A'], axis='index')
Explanation: 对齐
where方法会将输入的布尔条件对齐,因此允许部分检索时的赋值。
End of explanation
s.mask(s>=0)
df.mask(df >= 0)
Explanation: mask
End of explanation
n = 10
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df[(df.a<df.b) & (df.b<df.c)]
df.query('(a < b) & (b < c)') #
Explanation: query()方法 The query() Method (Experimental)
DataFrame对象拥有query方法,允许使用表达式检索。
比如,检索列'b'的值介于列‘a’和‘c’之间的行。
注意: 需要安装numexptr。
End of explanation
n = 10
colors = np.random.choice(['red', 'green'], size=n)
foods = np.random.choice(['eggs', 'ham'], size=n)
colors
foods
index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
df = pd.DataFrame(np.random.randn(n,2), index=index)
df
df.query('color == "red"')
Explanation: MultiIndex query() 语法
对于DataFrame对象,可以使用MultiIndex,如同操作列名一样。
End of explanation
df.index.names = [None, None]
df
df.query('ilevel_0 == "red"')
Explanation: 如果index没有名字,可以给他们命名
End of explanation
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df2 = pd.DataFrame(np.random.randn(n+2, 3), columns=df.columns)
df2
expr = '0.0 <= a <= c <= 0.5'
map(lambda frame: frame.query(expr), [df, df2])
Explanation: ilevl_0意思是 0级index。
query() 用例 query() Use Cases
一个使用query()的情景是面对DataFrame对象组成的集合,并且这些对象有共同的的列名,则可以利用query方法对这个集合进行统一检索。
End of explanation
df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
df
df.query('(a<b) &(b<c)')
df[(df.a < df.b) & (df.b < df.c)]
Explanation: Python中query和pandas中query语法比较 query() Python versus pandas Syntax Comparison
End of explanation
df.query('a < b & b < c')
df.query('a<b and b<c')
Explanation: query()可以去掉圆括号, 也可以用and 代替&运算符
End of explanation
df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
'c': np.random.randint(5, size=12),
'd': np.random.randint(9, size=12)})
df
df.query('a in b')
df[df.a.isin(df.b)]
df[~df.a.isin(df.b)]
df.query('a in b and c < d') #更复杂的例子
df[df.b.isin(df.a) & (df.c < df.d)] #Python语法
Explanation: in 和not in 运算符 The in and not in operators
query()也支持Python中的in和not in运算符,实际上是底层调用isin
End of explanation
df.query('b==["a", "b", "c"]')
df[df.b.isin(["a", "b", "c"])] #Python语法
df.query('c == [1, 2]')
df.query('c != [1, 2]')
df.query('[1, 2] in c') #使用in
df.query('[1, 2] not in c')
df[df.c.isin([1, 2])] #Python语法
Explanation: ==和列表对象一起使用 Special use of the == operator with list objects
可以使用==/!=将列表和列名直接进行比较,等价于使用in/not in.
三种方法功能等价: ==/!= VS in/not in VS isin()/~isin()
End of explanation
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df['bools']=np.random.randn(len(df))>0.5
df
df.query('bools')
df.query('not bools')
df.query('not bools') == df[~df.bools]
Explanation: 布尔运算符 Boolean Operators
可以使用not或者~对布尔表达式进行取非。
End of explanation
shorter = df.query('a<b<c and (not bools) or bools>2')
shorter
longer = df[(df.a < df.b) & (df.b < df.c) & (~df.bools) | (df.bools > 2)]
longer
shorter == longer
Explanation: 表达式任意复杂都没关系。
End of explanation
df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
'c': np.random.randn(7)})
df2
df2.duplicated('a') #只观察列a的值是否重复
df2.duplicated('a', keep='last')
df2.drop_duplicates('a')
df2.drop_duplicates('a', keep='last')
df2.drop_duplicates('a', keep=False)
Explanation: query()的性能
DataFrame.query()底层使用numexptr,所以速度要比Python快,特别时当DataFrame对象非常大时。
重复数据的确定和删除 Duplicate Data
如果你想确定和去掉DataFrame对象中重复的行,pandas提供了两个方法:duplicated和drop_duplicates. 两个方法的参数都是列名。
* duplicated 返回一个布尔向量,长度等于行数,表示每一行是否重复
* drop_duplicates 则删除重复的行
默认情况下,首次遇到的行被认为是唯一的,以后遇到内容相同的行都被认为是重复的。不过两个方法都有一个keep参数来确定目标行是否被保留。
* keep='first'(默认):标记/去掉重复行除了第一次出现的那一行
* keep='last': 标记/去掉重复行除了最后一次出现的那一行
* keep=False: 标记/去掉所有重复的行
End of explanation
df2.duplicated(['a', 'b']) #此时列a和b两个元素构成每一个检索的基本单位,
df2
Explanation: 可以传递列名组成的列表
End of explanation
df3 = pd.DataFrame({'a': np.arange(6),
'b': np.random.randn(6)},
index=['a', 'a', 'b', 'c', 'b', 'a'])
df3
df3.index.duplicated() #布尔表达式
df3[~df3.index.duplicated()]
df3[~df3.index.duplicated(keep='last')]
df3[~df3.index.duplicated(keep=False)]
Explanation: 也可以检查index值是否重复来去掉重复行,方法是Index.duplicated然后使用切片操作(因为调用Index.duplicated会返回布尔向量)。keep参数同上。
End of explanation
s = pd.Series([1,2,3], index=['a', 'b', 'c'])
s
s.get('a')
s.get('x', default=-1)
s.get('b')
Explanation: 形似字典的get()方法
Serires, DataFrame和Panel都有一个get方法来得到一个默认值。
End of explanation
df = pd.DataFrame(np.random.randn(10, 3), columns=list('ABC'))
df.select(lambda x: x=='A', axis=1)
Explanation: select()方法 The select() Method
Series, DataFrame和Panel都有select()方法来检索数据,这个方法作为保留手段通常其他方法都不管用的时候才使用。select接受一个函数(在label上进行操作)作为输入返回一个布尔值。
End of explanation
dflookup = pd.DataFrame(np.random.randn(20, 4), columns=list('ABCD'))
dflookup
dflookup.lookup(list(range(0,10,2)), ['B','C','A','B','D'])
Explanation: lookup()方法 The lookup()方法
输入行label和列label,得到一个numpy数组,这就是lookup方法的功能。
End of explanation
index = pd.Index(['e', 'd', 'a', 'b'])
index
'd' in index
Explanation: Index对象 Index objects
pandas中的Index类和它的子类可以被当做一个序列可重复集合(ordered multiset),允许数据重复。然而,如果你想把一个有重复值Index对象转型为一个集合这是不可以的。创建Index最简单的方法就是通过传递一个列表或者其他序列创建。
End of explanation
index = pd.Index(['e', 'd', 'a', 'b'], name='something')
index.name
index = pd.Index(list(range(5)), name='rows')
columns = pd.Index(['A', 'B', 'C'], name='cols')
df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
df
df['A']
Explanation: 还可以个Index命名
End of explanation
dfmi = pd.DataFrame([list('abcd'),
list('efgh'),
list('ijkl'),
list('mnop')],
columns=pd.MultiIndex.from_product([['one','two'],
['first','second']]))
dfmi
Explanation: 返回视图VS返回副本 Returning a view versus a copy
当对pandas对象赋值时,一定要注意避免链式索引(chained indexing)。看下面的例子:
End of explanation
dfmi['one']['second']
dfmi.loc[:,('one','second')]
Explanation: 比较下面两种访问方式:
End of explanation
dfmi.loc[:,('one','second')]=value
#实际是
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
Explanation: 上面两种方法返回的结果抖一下,那么应该使用哪种方法呢?答案是我们更推荐大家使用方法二。
dfmi['one']选择了第一级列然后返回一个DataFrame对象,然后另一个Python操作dfmi_with_one['second']根据'second'检索出了一个Series。对pandas来说,这两个操作是独立、有序执行的。而.loc方法传入一个元组(slice(None),('one','second')),pandas把这当作一个事件执行,所以执行速度更快。
为什么使用链式索引赋值为报错?
刚才谈到不推荐使用链式索引是出于性能的考虑。接下来从赋值角度谈一下不推荐使用链式索引。首先,思考Python怎么解释执行下面的代码?
End of explanation
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
Explanation: 但下面的代码解释后结果却不一样:
End of explanation
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
foo['quux'] = value # We don't know whether this will modify df or not!
return foo
Explanation: 看到__getitem__了吗?除了最简单的情况,我们很难预测他到底返回的是视图还是副本(哲依赖于数组的内存布局,这是pandas没有硬性要求的),因此不推荐使用链式索引赋值!
而dfmi.loc.__setitem__直接对dfmi进行操作。
有时候明明没有使用链式索引,也会引起SettingWithCopy警告,这是Pandas设计的bug~
End of explanation
dfb = pd.DataFrame({'a' : ['one', 'one', 'two',
'three', 'two', 'one', 'six'],
'c' : np.arange(7)})
dfb
dfb['c'][dfb.a.str.startswith('o')] = 42 #虽然会引起SettingWithCopyWarning 但也能得到正确结果
pd.set_option('mode.chained_assignment','warn')
dfb[dfb.a.str.startswith('o')]['c'] = 42 #这实际上是对副本赋值!
Explanation: 链式索引中顺序也很重要
此外,在链式表达式中,不同的顺序也可能导致不同的结果。这里的顺序指的是检索时行和列的顺序。
End of explanation
dfc = pd.DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]})
dfc
dfc.loc[0,'A'] = 11
dfc
Explanation: 正确的方式是:老老实实使用.loc
End of explanation |
9,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python MinBLEP Generator
An iPython port of the MinBLEP generator from experimentalscene
This notebook takes a bottom-up approach to reconstructing the algorithms described there, and uses numpy where possible (most notably for sinc, fft/ifft, and automagically handling complex numbers).
First, some basic imports.
Step1: Blackman Window
It's possible that numpy actually handles this, but I didn't find anything quickly, and it was simple enough to implement.
Step2: Generating a sinc buffer
From the algorithm, we use the buffer size and number of zero crossings as parameters to the sinc.
In the ExperimentalScene algorithm, they use a strange b - a construct as a replacement for 2.0 * zeroCrossings. My guess is that this is an attempt to pre-optimize the code for the compiler, which makes it a moot point in Python.
Step3: The yielding approach helps keep the code brief, and means that we don't generate more lists than we need to. But, we do need a list to pass to plot.
Step4: Once more with the same size, and fewer crossings
Step5: Windowing the Sinc
The first conceptual stage of the ExperimentalScene code is to generate a windowed sinc, so here we have a generic way to apply a windowing function to the sinc. In our case, we'll never pass anything other than blackman_window, but you could apply a different window for a different purpose if you wanted out.
Step6: And again we're yielding, so we have to make sure that the result is turned into a list for plot.
Step7: And again, this time with more zero crossings!
Step8: Cepstrum and Minimum Phase calculation
Next, the algorithm calculates the Cepstrum of the windowed sinc to get a Minimum Phase version. Note that log and abs both come from numpy here (iPython Notebook's pylab inline function effectively calls from numpy import *, so if you use this in proper code, you should update this to read numpy.log, numpy.abs, etc. For the love of all that is holy, please do NOT simply from numpy import * to get this to work. You will thank yourself later.)
Step9: The Cepstrum requires a bit more setup to illustrate, so let's make a test signal of some example size and plot it
Step10: And now computing the minimum phase
Step11: Integrate and Normalize
The last steps are to integrate the minimum phase signal, and normalize it to be between 0 and 1.
Step12: There's a bit of magic happening in the normalization
Step13: Hey, that looks like a MinBLEP!
Bringing it all together
Finally, we take all of these pieces and put them into the core algorithm shown in the ExperimentalScene code | Python Code:
pylab inline
from itertools import izip
Explanation: Python MinBLEP Generator
An iPython port of the MinBLEP generator from experimentalscene
This notebook takes a bottom-up approach to reconstructing the algorithms described there, and uses numpy where possible (most notably for sinc, fft/ifft, and automagically handling complex numbers).
First, some basic imports.
End of explanation
def blackman_window(length):
for i in range(0, length):
f1 = 2.0 * pi * i / length
f2 = 2.0 * f1
val = 0.42 - (0.5 * cos(f1)) + (0.08 * cos(f2))
yield val
plot(list(blackman_window(200)))
Explanation: Blackman Window
It's possible that numpy actually handles this, but I didn't find anything quickly, and it was simple enough to implement.
End of explanation
def sinc_buffer(size, zero_crossings=10):
for i in range(0, size):
r = float(i) / (float(size) - 1.0)
yield sinc(float(-zero_crossings) + (r * 2.0 * zero_crossings))
Explanation: Generating a sinc buffer
From the algorithm, we use the buffer size and number of zero crossings as parameters to the sinc.
In the ExperimentalScene algorithm, they use a strange b - a construct as a replacement for 2.0 * zeroCrossings. My guess is that this is an attempt to pre-optimize the code for the compiler, which makes it a moot point in Python.
End of explanation
plot(list(sinc_buffer(101, 100)))
plot(sinc(linspace(-100, 100, 101)))
plot(list(sinc_buffer(31, 15)))
plot(list(windowed_sinc(blackman_window, 106, 100)))
Explanation: The yielding approach helps keep the code brief, and means that we don't generate more lists than we need to. But, we do need a list to pass to plot.
End of explanation
plot(list(sinc_buffer(200, 5)))
Explanation: Once more with the same size, and fewer crossings:
End of explanation
def windowed_sinc(windowing_func, num_samples, zero_crossings=10):
sinc_gen = sinc_buffer(num_samples, zero_crossings)
for sinc, window in izip(sinc_gen, windowing_func(num_samples)):
yield sinc * window
Explanation: Windowing the Sinc
The first conceptual stage of the ExperimentalScene code is to generate a windowed sinc, so here we have a generic way to apply a windowing function to the sinc. In our case, we'll never pass anything other than blackman_window, but you could apply a different window for a different purpose if you wanted to.
End of explanation
plot(list(windowed_sinc(blackman_window, 200)))
Explanation: And again we're yielding, so we have to make sure that the result is turned into a list for plot.
End of explanation
plot(list(windowed_sinc(blackman_window, 200, 20)))
Explanation: And again, this time with more zero crossings!
End of explanation
def cepstrum(signal, size):
fft_res = fft.fft(signal, size)
log_abs = log(abs(fft_res))
cep = fft.ifft(log_abs)
return cep
Explanation: Cepstrum and Minimum Phase calculation
Next, the algorithm calculates the Cepstrum of the windowed sinc to get a Minimum Phase version. Note that log and abs both come from numpy here (iPython Notebook's pylab inline function effectively calls from numpy import *, so if you use this in proper code, you should update this to read numpy.log, numpy.abs, etc. For the love of all that is holy, please do NOT simply from numpy import * to get this to work. You will thank yourself later.)
End of explanation
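As a side note, a minimal sketch of the same cepstrum step written with an explicit numpy import (rather than relying on pylab's wildcard namespace) might look like the following; the numpy fft module is assumed here and the function name is just for illustration.
import numpy as np

def cepstrum_np(signal, size):
    spectrum = np.fft.fft(signal, size)         # complex spectrum
    log_magnitude = np.log(np.abs(spectrum))    # log of the magnitude spectrum
    return np.fft.ifft(log_magnitude)           # back to the "quefrency" domain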
ex_size = 4000
sig = list(windowed_sinc(blackman_window, ex_size))
cep = cepstrum(sig, ex_size)
plot(cep.real)
Explanation: The Cepstrum requires a bit more setup to illustrate, so let's make a test signal of some example size and plot it:
End of explanation
def min_phase(signal, size):
half_size = size / 2
real_time_domain = []
for i in range(0, half_size):
real_time_domain.append(2.0 * signal[i])
if size % 2 == 1:
real_time_domain.append(2.0 * signal[half_size])
zero_start = half_size + 1
else:
zero_start = half_size
for i in range(zero_start, size):
real_time_domain.append(0.0)
fft_res = fft.fft(real_time_domain, size)
exp_freq = exp(fft_res)
time_res = fft.ifft(exp_freq)
return time_res.real
mp = min_phase(cep, ex_size)
plot(mp)
Explanation: And now computing the minimum phase:
End of explanation
def integrate(signal):
running_val = 0.0
for val in signal:
running_val += val
yield running_val
plot(list(integrate(mp)))
Explanation: Integrate and Normalize
The last steps are to integrate the minimum phase signal, and normalize it to be between 0 and 1.
End of explanation
def normalize(signal):
sig = list(signal)
scale = 1.0 / sig[-1]
normalized = [x * scale for x in sig]
return normalized
normalized_ex = normalize(integrate(mp))
plot(normalized_ex)
Explanation: There's a bit of magic happening in the normalization: Since the MinBLEP signal is supposed to be a band-limited step function, we know it should converge on 1.0. And since we know that the Gibbs ripples slowly converge to a fixed value, we just grab the last value of the signal array to get the scaling factor that will turn value that the Gibbs ripple converges onto into 1.
End of explanation
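A quick sanity check of that normalization trick: after scaling, the last sample should sit exactly at 1.0.
# the Gibbs ripple converges onto the last sample, which we scaled to 1.0
print(normalized_ex[-1])
assert abs(normalized_ex[-1] - 1.0) < 1e-12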
def generate_min_blep(zero_crossings, oversample_factor):
size = int(zero_crossings * 2 * oversample_factor) + 1
signal = list(windowed_sinc(blackman_window, size, zero_crossings))
cep = cepstrum(signal, size)
mp = min_phase(cep, size)
minblep = normalize(integrate(mp))
return minblep
plot(generate_min_blep(15, 1))
plot(generate_min_blep(15, 400))
plot(generate_min_blep(3, 10))
plot(generate_min_blep(10, 10))
Explanation: Hey, that looks like a MinBLEP!
Bringing it all together
Finally, we take all of these pieces and put them into the core algorithm shown in the ExperimentalScene code: a function that accepts the number of zero crossings, and, in their terms, "oversampling", but what seems to me to be the sample rate of the generated signal.
I still don't quite understand the correct parameters to pass to this function to use in my signal processing, but at least this will make it somewhat easy to experiment with generating look-up tables with different parameters.
End of explanation |
9,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exact explainer
This notebook demonstrates how to use the Exact explainer on some simple datasets. The Exact explainer is model-agnostic, so it can compute Shapley values and Owen values exactly (without approximation) for any model. However, since it completely enumerates the space of masking patterns it has $O(2^M)$ complexity for Shapley values and $O(M^2)$ complexity for Owen values on a balanced clustering tree for M input features.
Because the exact explainer knows that it is fully enumerating the masking space it can use optimizations that are not possible with random sampling based approaches, such as using a grey code ordering to minimize the number of inputs that change between successive masking patterns, and so potentially reduce the number of times the model needs to be called.
Step1: Tabular data with independent (Shapley value) masking
Step2: Plot a global summary
Step3: Plot a single instance
Step4: Tabular data with partition (Owen value) masking
While Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structure game (i.e. a game with rules about valid input feature coalitions), and when that structure is a nested set of feature groupings we get the Owen values as a recursive application of Shapley values to the groups. In SHAP, we take the partitioning to the limit and build a binary hierarchical clustering tree to represent the structure of the data. This structure could be chosen in many ways, but for tabular data it is often helpful to build the structure from the redundancy of information between the input features about the output label. This is what we do below
Step5: Plot a global summary
Note that only the Relationship and Marital status features share more than 50% of their explanation power (as measured by R2) with each other, so all the other parts of the clustering tree are removed by the default clustering_cutoff=0.5 setting
Step6: Plot a single instance
Note that there is a strong similarity between the explanation from the Independent masker above and the Partition masker here. In general the distinctions between these methods for tabular data are not large, though the Partition masker allows for much faster runtime and potentially more realistic manipulations of the model inputs (since groups of clustered features are masked/unmasked together). | Python Code:
import shap
import xgboost
# get a dataset on income prediction
X,y = shap.datasets.adult()
# train an XGBoost model (but any other model type would also work)
model = xgboost.XGBClassifier()
model.fit(X, y);
Explanation: Exact explainer
This notebook demonstrates how to use the Exact explainer on some simple datasets. The Exact explainer is model-agnostic, so it can compute Shapley values and Owen values exactly (without approximation) for any model. However, since it completely enumerates the space of masking patterns it has $O(2^M)$ complexity for Shapley values and $O(M^2)$ complexity for Owen values on a balanced clustering tree for M input features.
Because the exact explainer knows that it is fully enumerating the masking space it can use optimizations that are not possible with random sampling based approaches, such as using a grey code ordering to minimize the number of inputs that change between successive masking patterns, and so potentially reduce the number of times the model needs to be called.
End of explanation
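As a small aside (plain Python, not part of the SHAP API), the grey-code idea mentioned above can be illustrated directly: successive masking patterns differ in exactly one feature, so only one input has to change between model evaluations.
# illustrative only: binary-reflected grey code over M features
def grey_code_masks(M):
    for i in range(2 ** M):
        g = i ^ (i >> 1)  # grey code of i
        yield [(g >> j) & 1 for j in range(M)]

masks = list(grey_code_masks(3))
# consecutive masks differ in exactly one position
assert all(sum(a != b for a, b in zip(m1, m2)) == 1
           for m1, m2 in zip(masks, masks[1:]))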
# build an Exact explainer and explain the model predictions on the given dataset
explainer = shap.explainers.Exact(model.predict_proba, X)
shap_values = explainer(X[:100])
# get just the explanations for the positive class
shap_values = shap_values[...,1]
Explanation: Tabular data with independent (Shapley value) masking
End of explanation
shap.plots.bar(shap_values)
Explanation: Plot a global summary
End of explanation
shap.plots.waterfall(shap_values[0])
Explanation: Plot a single instance
End of explanation
# build a clustering of the features based on shared information about y
clustering = shap.utils.hclust(X, y)
# above we implicitly used shap.maskers.Independent by passing a raw dataframe as the masker
# now we explicitly use a Partition masker that uses the clustering we just computed
masker = shap.maskers.Partition(X, clustering=clustering)
# build an Exact explainer and explain the model predictions on the given dataset
explainer = shap.explainers.Exact(model.predict_proba, masker)
shap_values2 = explainer(X[:100])
# get just the explanations for the positive class
shap_values2 = shap_values2[...,1]
Explanation: Tabular data with partition (Owen value) masking
While Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structure game (i.e. a game with rules about valid input feature coalitions), and when that structure is a nested set of feature groupings we get the Owen values as a recursive application of Shapley values to the groups. In SHAP, we take the partitioning to the limit and build a binary hierarchical clustering tree to represent the structure of the data. This structure could be chosen in many ways, but for tabular data it is often helpful to build the structure from the redundancy of information between the input features about the output label. This is what we do below:
End of explanation
shap.plots.bar(shap_values2)
Explanation: Plot a global summary
Note that only the Relationship and Marital status features share more than 50% of their explanation power (as measured by R2) with each other, so all the other parts of the clustering tree are removed by the default clustering_cutoff=0.5 setting:
End of explanation
shap.plots.waterfall(shap_values2[0])
Explanation: Plot a single instance
Note that there is a strong similarity between the explanation from the Independent masker above and the Partition masker here. In general the distinctions between these methods for tabular data are not large, though the Partition masker allows for much faster runtime and potentially more realistic manipulations of the model inputs (since groups of clustered features are masked/unmasked together).
End of explanation |
9,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Siphon to query the NetCDF Subset Service
First we construct a TDSCatalog instance pointing to our dataset of interest, in
this case TDS' "Best" virtual dataset for the GFS global 0.5 degree collection of
GRIB files. We see this catalog contains a single dataset.
Step1: We pull out this dataset and look at the access urls.
Step2: Note the NetcdfSubset entry, which we will use with our NCSS class.
Step3: We can then use the ncss object to create a new query object, which
facilitates asking for data from the server.
Step4: We construct a query asking for data corresponding to latitude 40N and longitude 105W, for the current time.
We also ask for NetCDF version 4 data, for the variables 'Temperature_isobaric' and 'Relative_humidity_isobaric'. This request will return all vertical levels for a single point and single time. Note the string representation of the query is a properly encoded query string.
Step5: We now request data from the server using this query. The NCSS class handles parsing this NetCDF data (using the netCDF4 module). If we print out the variable names, we see our requested variables, as well as a few others (more metadata information)
Step6: We'll pull out the variables we want to use, as well as the pressure values (from the isobaric3 variable).
Step7: Now we can plot these up using matplotlib. | Python Code:
from siphon.catalog import TDSCatalog
best_gfs = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/grib/NCEP/GFS/Global_0p5deg/catalog.xml?dataset=grib/NCEP/GFS/Global_0p5deg/Best')
best_gfs.datasets
Explanation: Using Siphon to query the NetCDF Subset Service
First we construct a TDSCatalog instance pointing to our dataset of interest, in
this case TDS' "Best" virtual dataset for the GFS global 0.5 degree collection of
GRIB files. We see this catalog contains a single dataset.
End of explanation
best_ds = list(best_gfs.datasets.values())[0]
best_ds.access_urls
Explanation: We pull out this dataset and look at the access urls.
End of explanation
from siphon.ncss import NCSS
ncss = NCSS(best_ds.access_urls['NetcdfSubset'])
Explanation: Note the NetcdfSubset entry, which we will use with our NCSS class.
End of explanation
query = ncss.query()
Explanation: We can then use the ncss object to create a new query object, which
facilitates asking for data from the server.
End of explanation
from datetime import datetime
query.lonlat_point(-105, 40).time(datetime.utcnow())
query.accept('netcdf4')
query.variables('Temperature_isobaric', 'Relative_humidity_isobaric')
Explanation: We construct a query asking for data corresponding to latitude 40N and longitude 105W, for the current time.
We also ask for NetCDF version 4 data, for the variables 'Temperature_isobaric' and 'Relative_humidity_isobaric'. This request will return all vertical levels for a single point and single time. Note the string representation of the query is a properly encoded query string.
End of explanation
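To see the encoded query string mentioned above, it can simply be printed before the request is sent; this is just a quick check and does not change the query.
# sanity check: the query renders as a URL-encoded string
print(query)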
data = ncss.get_data(query)
list(data.variables.keys())
Explanation: We now request data from the server using this query. The NCSS class handles parsing this NetCDF data (using the netCDF4 module). If we print out the variable names, we see our requested variables, as well as a few others (more metadata information)
End of explanation
temp = data.variables['Temperature_isobaric']
relh = data.variables['Relative_humidity_isobaric']
press = data.variables['isobaric3']
press_vals = press[:].squeeze()
Explanation: We'll pull out the variables we want to use, as well as the pressure values (from the isobaric3 variable).
End of explanation
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(9, 8))
ax.plot(temp[:].squeeze(), press_vals, 'r', linewidth=2)
ax.set_xlabel(temp.standard_name + ' (%s)' % temp.units)
ax.set_ylabel(press.standard_name + ' (%s)' % press.units)
# Create second plot with shared y-axis
ax2 = plt.twiny(ax)
ax2.plot(relh[:].squeeze(), press_vals, 'g', linewidth=2)
ax2.set_xlabel(relh.standard_name + ' (%s)' % relh.units)
ax.set_ylim(press_vals.max(), press_vals.min())
ax.grid(True)
Explanation: Now we can plot these up using matplotlib.
End of explanation |
9,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VHM python implemented model structure
Import libraries and set image properties
Step1: Load observations
Step2: Model simulation
Parameter values, initial conditions and constant values
Step3: Structural options
fracthand 'relative' or 'sequentialx' with x [1-4]
storhand 'linear' or 'nonlinear'
interflowhand True or False
infexcesshand True or False
nres_g/nres_i/nres_o string of 3 options, each [1-2], eg 211, 121,...
Step4: Run the model
Step5: Focus on a subperiod to plot
Step6: Plot the model output and observations to evaluate the fit
Step7: Plot modelled and filtered subflows as a function of time
Step8: Plot fractions in time overview
Step9: Soil moisture plot | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
from matplotlib.ticker import LinearLocator
sns.set_style('whitegrid')
mpl.rcParams['font.size'] = 16
mpl.rcParams['axes.labelsize'] = 16
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
from VHM import VHM_flexible
import brewer2mpl
setblue = brewer2mpl.get_map('Greys', 'Sequential', 6,
reverse = True).mpl_colors
Explanation: VHM python implemented model structure
Import libraries and set image properties
End of explanation
data = pd.read_csv("/media/DATA/Githubs/project_breach_pdm_application/data/data_brach_case_nete.csv",
parse_dates=True, index_col=0)
data.head()
Explanation: Load observations
End of explanation
# Parameters
umax =280.0
uevap = 150.0
c1s = 1.8
c2s = 0.4
c3s = 1.0
c1o = -3.9
c2o = 1.59
c3o = 0.0
c4o = 0.0
c1i = -2.7
c2i = 1.
c3i = 0.0
c4i = 0.0
nso = 50
nsi = 50
Kg = 2400.0
Ki =120.0
Ko =10.0
# Define the constants
area = 361.
timestep = 1.
# Define the initial conditions
u = 170.0
qg =1.0
cg =0.0
qo =0.0
co =0.0
qi =1.0
ci =0.0
pars = [umax,uevap,c1s,c2s,c3s,c1o,c2o,c3o,c4o,c1i,c2i,c3i,c4i,nso,nsi,Kg,Ki,Ko]
constants = [area,timestep]
init_conditions = [u, qg, cg, qo, co, qi, ci]
Explanation: Model simulation
Parameter values, initial conditions and constant values
End of explanation
structure_options=['relative', 'nonlinear', True, True, '211']
Explanation: Structural options
fracthand 'relative' or 'sequentialx' with x [1-4]
storhand 'linear' or 'nonlinear'
interflowhand True or False
infexcesshand True or False
nres_g/nres_i/nres_o string of 3 options, each [1-2], eg 211, 121,...
End of explanation
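For comparison, a hypothetical alternative configuration can be built by recombining the options listed above (only the option strings change; it would be passed to the same VHM_flexible call later on).
# hypothetical alternative: sequential fractions, linear storage,
# no interflow, with infiltration excess, reservoir configuration '121'
structure_options_alt = ['sequential2', 'linear', False, True, '121']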
rain = data['rain'].values
pet = data['evapotranspiration'].values
vhm_output = VHM_flexible(pars, constants, init_conditions,
structure_options, rain, pet)
outflows, fractions, moisture = vhm_output
# create dataframe with
data['modtot'] = outflows[:, 0]
data['modover'] = outflows[:, 1]
data['modinter'] = outflows[:, 2]
data['modbase'] = outflows[:, 3]
data['fracover'] = fractions[:, 0]
data['fracinter'] = fractions[:, 1]
data['fracbase'] = fractions[:, 2]
data['fracsoil'] = fractions[:, 3]
data['fractotal'] = data['fracover'] + data['fracinter'] + data['fracbase'] + data['fracsoil']
data['soil'] = moisture
Explanation: Run the model
End of explanation
data2plot = data['2003':'2005']
Explanation: Focus on a subperiod to plot
End of explanation
fig, axs = plt.subplots(1, 1, figsize=(14, 6), sharex=True)
axs.plot(data2plot.index, data2plot['modtot'], label='modelled')
axs.plot(data2plot.index, data2plot['meas'], label='observed')
axs.set_ylabel("flow ($m^3s^{-1}$)")
axs.yaxis.labelpad = 15
axs.xaxis.set_major_locator(
mpl.dates.MonthLocator(interval = 12))
axs.xaxis.set_major_formatter(
mpl.dates.DateFormatter('%d %b \n %Y'))
axs.tick_params(axis = 'x', pad = 15, direction='out')
# y-axis
axs.tick_params(axis = 'y', pad = 5, direction='out')
#remove spines
axs.spines['bottom'].set_visible(False)
axs.spines['top'].set_visible(False)
# set grid
axs.grid(which='both', axis='both', color='0.7',
linestyle='--', linewidth=0.8)
# line colors of the plots
axs.lines[0].set_color(setblue[0])
axs.lines[1].set_color(setblue[2])
# line widths
for line in axs.lines:
line.set_linewidth(1.2)
axs.legend(loc='upper right', fontsize=16, ncol=2, bbox_to_anchor=(1., 1.1))
#plt.savefig('vhm_flow_example.pdf', dpi=300)
#plt.savefig('vhm_flow_example.png', dpi=300)
Explanation: Plot the model output and observations to evaluate the fit
End of explanation
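Besides the visual check, a simple goodness-of-fit number such as the Nash-Sutcliffe efficiency can be computed from the same columns; this is only a sketch using the modelled and measured series already in the dataframe.
# Nash-Sutcliffe efficiency on the plotted subperiod (1.0 = perfect fit)
obs = data2plot['meas'].values
sim = data2plot['modtot'].values
nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
print('NSE on 2003-2005: {:.3f}'.format(nse))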
overf = pd.read_csv("Filter_Overlandflow3.txt", index_col=0, sep='\t', parse_dates=True, dayfirst=True)
overf.columns = ['overland flow']
interf = pd.read_csv("Filter_Interflow3.txt", index_col=0, sep='\t', parse_dates=True, dayfirst=True)
interf.columns = ['interflow']
basef = pd.read_csv("Filter_Baseflow3.txt", index_col=0, sep='\t', parse_dates=True, dayfirst=True)
basef.columns = ['baseflow']
subflow_data = overf.join(interf).join(basef)
subflow2plot = subflow_data['2003':'2005']
fig, axs = plt.subplots(3, 1, figsize=(14, 6), sharex=True)
fig.subplots_adjust(hspace = 0.2)
#first plot
axs[0].plot(data2plot.index, data2plot['modover'], label='subflow modelled')
axs[0].plot(subflow2plot.index, subflow2plot['overland flow'].values, label='subflow seperation')
axs[0].set_ylabel("overland flow \n ($m^3s^{-1}$)")
axs[0].yaxis.labelpad = 15
#second plot
axs[1].plot(data2plot.index, data2plot['modinter'])
axs[1].plot(subflow2plot.index, subflow2plot['interflow'].values)
axs[1].yaxis.tick_right()
axs[1].yaxis.set_label_position("right")
axs[1].set_ylabel("interflow \n ($m^3s^{-1}$)")
axs[1].yaxis.labelpad = 15
# third plot
axs[2].plot(data2plot.index, data2plot['modbase'])
axs[2].plot(subflow2plot.index, subflow2plot['baseflow'].values)
axs[2].xaxis.set_major_locator(
mpl.dates.MonthLocator(interval = 12))
axs[2].xaxis.set_major_formatter(
mpl.dates.DateFormatter('%d %b \n %Y'))
axs[2].tick_params(axis = 'x', pad = 15, direction='out')
axs[2].set_ylabel("baseflow \n($m^3s^{-1}$)")
axs[2].yaxis.labelpad = 10
#editing of the style:
for ax in axs:
# y-axis
ax.tick_params(axis = 'y', pad = 5, direction='out')
ax.yaxis.set_major_locator(LinearLocator(3))
#remove spines
ax.spines['bottom'].set_visible(False)
ax.spines['top'].set_visible(False)
# set grid
ax.grid(which='both', axis='both', color='0.7',
linestyle='--', linewidth=0.8)
# line colors of the plots
ax.lines[0].set_color(setblue[0])
ax.lines[1].set_color(setblue[2])
# line widths
for line in ax.lines:
line.set_linewidth(1.2)
# remove ticklabels if redundant
if not ax.is_last_row():
ax.set_xlabel('')
plt.setp(axs[1].get_xminorticklabels(), visible=False)
plt.setp(axs[1].get_xmajorticklabels(), visible=False)
plt.setp(axs[1].get_xminorticklabels(), visible=False)
temp = axs[0]
temp.legend(loc='upper right', fontsize=16, ncol=2, bbox_to_anchor=(1., 1.4))
fig.savefig('vhm_subflow_example.pdf')
fig.savefig('vhm_subflow_example.png')
Explanation: Plot modelled and filtered subflows in function of time
End of explanation
fig, axs = plt.subplots(1, 1, figsize=(14, 6), sharex=True)
axs.plot(data2plot.index, data2plot['fracover'],'-', label='fraction overland flow')
axs.plot(data2plot.index, data2plot['fracinter'],'-.', label='fraction interflow')
axs.plot(data2plot.index, data2plot['fracbase'],':', label='fraction base flow')
axs.plot(data2plot.index, data2plot['fracsoil'],'-', label='fraction infiltration')
axs.plot(data2plot.index, data2plot['fractotal'],'-', label='total fractions')
axs.set_ylabel("fractions")
axs.yaxis.labelpad = 15
axs.xaxis.set_major_locator(
mpl.dates.MonthLocator(interval = 12))
axs.xaxis.set_major_formatter(
mpl.dates.DateFormatter('%d %b \n %Y'))
axs.tick_params(axis = 'x', pad = 15, direction='out')
# y-axis
axs.tick_params(axis = 'y', pad = 5, direction='out')
axs.yaxis.set_ticks([0,0.5,1.])
axs.set_ylim([0., 1.05])
#remove spines
axs.spines['bottom'].set_visible(False)
axs.spines['top'].set_visible(False)
# set grid
axs.grid(which='both', axis='both', color='0.7',
linestyle='--', linewidth=0.8)
# line colors of the plots
axs.lines[0].set_color(setblue[0])
axs.lines[1].set_color(setblue[0])
axs.lines[2].set_color(setblue[1])
axs.lines[3].set_color(setblue[1])
axs.lines[4].set_color(setblue[3])
# line widths
for line in axs.lines:
line.set_linewidth(1.2)
axs.legend(loc='upper right', fontsize=16, ncol=3, bbox_to_anchor=(1., 0.95))
#plt.savefig('vhm_fractions_example_noante.pdf', dpi=300)
#plt.savefig('vhm_fractions_example_noante.png', dpi=300)
Explanation: Plot fractions in time overview
End of explanation
fig, axs = plt.subplots(1, 1, figsize=(14, 6), sharex=True)
axs.plot(data2plot.index, data2plot['soil'],'-')
axs.set_ylabel(r"soil moisture ($mm$)")
axs.yaxis.labelpad = 15
axs.xaxis.set_major_locator(
mpl.dates.MonthLocator(interval = 12))
axs.xaxis.set_major_formatter(
mpl.dates.DateFormatter('%d %b \n %Y'))
axs.tick_params(axis = 'x', pad = 15, direction='out')
# y-axis
axs.tick_params(axis = 'y', pad = 5, direction='out')
#remove spines
axs.spines['bottom'].set_visible(False)
axs.spines['top'].set_visible(False)
# set grid
axs.grid(which='both', axis='both', color='0.7',
linestyle='--', linewidth=0.8)
# line colors of the plots
axs.lines[0].set_color(setblue[0])
# line widths
for line in axs.lines:
line.set_linewidth(1.2)
#plt.savefig('vhm_moisture_example.pdf', dpi=300)
#plt.savefig('vhm_moisture_example.png', dpi=300)
Explanation: Soil moisture plot
End of explanation |
9,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the first SimpleITK Notebook demo
Step1: Image Construction
There are a variety of ways to create an image. All images' initial value is well defined as zero.
Step2: Pixel Types
The pixel type is represented as an enumerated type. The following is a table of the enumerated list.
<table>
<tr><td>sitkUInt8</td><td>Unsigned 8 bit integer</td></tr>
<tr><td>sitkInt8</td><td>Signed 8 bit integer</td></tr>
<tr><td>sitkUInt16</td><td>Unsigned 16 bit integer</td></tr>
<tr><td>sitkInt16</td><td>Signed 16 bit integer</td></tr>
<tr><td>sitkUInt32</td><td>Unsigned 32 bit integer</td></tr>
<tr><td>sitkInt32</td><td>Signed 32 bit integer</td></tr>
<tr><td>sitkUInt64</td><td>Unsigned 64 bit integer</td></tr>
<tr><td>sitkInt64</td><td>Signed 64 bit integer</td></tr>
<tr><td>sitkFloat32</td><td>32 bit float</td></tr>
<tr><td>sitkFloat64</td><td>64 bit float</td></tr>
<tr><td>sitkComplexFloat32</td><td>complex number of 32 bit float</td></tr>
<tr><td>sitkComplexFloat64</td><td>complex number of 64 bit float</td></tr>
<tr><td>sitkVectorUInt8</td><td>Multi-component of unsigned 8 bit integer</td></tr>
<tr><td>sitkVectorInt8</td><td>Multi-component of signed 8 bit integer</td></tr>
<tr><td>sitkVectorUInt16</td><td>Multi-component of unsigned 16 bit integer</td></tr>
<tr><td>sitkVectorInt16</td><td>Multi-component of signed 16 bit integer</td></tr>
<tr><td>sitkVectorUInt32</td><td>Multi-component of unsigned 32 bit integer</td></tr>
<tr><td>sitkVectorInt32</td><td>Multi-component of signed 32 bit integer</td></tr>
<tr><td>sitkVectorUInt64</td><td>Multi-component of unsigned 64 bit integer</td></tr>
<tr><td>sitkVectorInt64</td><td>Multi-component of signed 64 bit integer</td></tr>
<tr><td>sitkVectorFloat32</td><td>Multi-component of 32 bit float</td></tr>
<tr><td>sitkVectorFloat64</td><td>Multi-component of 64 bit float</td></tr>
<tr><td>sitkLabelUInt8</td><td>RLE label of unsigned 8 bit integers</td></tr>
<tr><td>sitkLabelUInt16</td><td>RLE label of unsigned 16 bit integers</td></tr>
<tr><td>sitkLabelUInt32</td><td>RLE label of unsigned 32 bit integers</td></tr>
<tr><td>sitkLabelUInt64</td><td>RLE label of unsigned 64 bit integers</td></tr>
</table>
There is also sitkUnknown, which is used for undefined or erroneous pixel ID's. It has a value of -1.
The 64-bit integer types are not available on all distributions. When not available the value is sitkUnknown.
More Information about the Image class can be obtained in the Docstring
SimpleITK classes and functions have the Docstrings derived from the C++ definitions and the Doxygen documentation.
Step3: Accessing Attributes
If you are familiar with ITK, then these methods will follow your expectations
Step4: Note
Step5: Since the dimension and pixel type of a SimpleITK image is determined at run-time accessors are needed.
Step6: What is the depth of a 2D image?
Step7: What is the dimension and size of a Vector image?
Step8: For certain file types such as DICOM, additional information about the image is contained in the meta-data dictionary.
Step9: Accessing Pixels
There are the member functions GetPixel and SetPixel which provide an ITK-like interface for pixel access.
Step10: Conversion between numpy and SimpleITK
Step11: The order of index and dimensions need careful attention during conversion
ITK's Image class does not have a bracket operator. It has a GetPixel which takes an ITK Index object as an argument, which is an array ordered as (x,y,z). This is the convention that SimpleITK's Image class uses for the GetPixel method as well.
While in numpy, an array is indexed in the opposite order (z,y,x).
Step12: Are we still dealing with Image, because I haven't seen one yet...
While SimpleITK does not do visualization, it does contain a built in Show method. This function writes the image out to disk and then launches a program for visualization. By default it is configured to use ImageJ, because it readily supports all the image types which SimpleITK has and loads very quickly. However, it's easily customizable by setting environment variables.
Step13: By converting into a numpy array, matplotlob can be used for visualization for integration into the scientifc python enviroment. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import SimpleITK as sitk
Explanation: Welcome to the first SimpleITK Notebook demo:
SimpleITK Image Basics
This document will give a brief orientation to the SimpleITK Image class.
First we import the SimpleITK Python module. By convention our module is imported into the shorter and more pythonic "sitk" local name.
End of explanation
image = sitk.Image(256, 128, 64, sitk.sitkInt16)
image_2D = sitk.Image(64, 64, sitk.sitkFloat32)
image_2D = sitk.Image([32,32], sitk.sitkUInt32)
image_RGB = sitk.Image([128,128], sitk.sitkVectorUInt8, 3)
Explanation: Image Construction
There are a variety of ways to create an image. All images' initial value is well defined as zero.
End of explanation
help(image)
Explanation: Pixel Types
The pixel type is represented as an enumerated type. The following is a table of the enumerated list.
<table>
<tr><td>sitkUInt8</td><td>Unsigned 8 bit integer</td></tr>
<tr><td>sitkInt8</td><td>Signed 8 bit integer</td></tr>
<tr><td>sitkUInt16</td><td>Unsigned 16 bit integer</td></tr>
<tr><td>sitkInt16</td><td>Signed 16 bit integer</td></tr>
<tr><td>sitkUInt32</td><td>Unsigned 32 bit integer</td></tr>
<tr><td>sitkInt32</td><td>Signed 32 bit integer</td></tr>
<tr><td>sitkUInt64</td><td>Unsigned 64 bit integer</td></tr>
<tr><td>sitkInt64</td><td>Signed 64 bit integer</td></tr>
<tr><td>sitkFloat32</td><td>32 bit float</td></tr>
<tr><td>sitkFloat64</td><td>64 bit float</td></tr>
<tr><td>sitkComplexFloat32</td><td>complex number of 32 bit float</td></tr>
<tr><td>sitkComplexFloat64</td><td>complex number of 64 bit float</td></tr>
<tr><td>sitkVectorUInt8</td><td>Multi-component of unsigned 8 bit integer</td></tr>
<tr><td>sitkVectorInt8</td><td>Multi-component of signed 8 bit integer</td></tr>
<tr><td>sitkVectorUInt16</td><td>Multi-component of unsigned 16 bit integer</td></tr>
<tr><td>sitkVectorInt16</td><td>Multi-component of signed 16 bit integer</td></tr>
<tr><td>sitkVectorUInt32</td><td>Multi-component of unsigned 32 bit integer</td></tr>
<tr><td>sitkVectorInt32</td><td>Multi-component of signed 32 bit integer</td></tr>
<tr><td>sitkVectorUInt64</td><td>Multi-component of unsigned 64 bit integer</td></tr>
<tr><td>sitkVectorInt64</td><td>Multi-component of signed 64 bit integer</td></tr>
<tr><td>sitkVectorFloat32</td><td>Multi-component of 32 bit float</td></tr>
<tr><td>sitkVectorFloat64</td><td>Multi-component of 64 bit float</td></tr>
<tr><td>sitkLabelUInt8</td><td>RLE label of unsigned 8 bit integers</td></tr>
<tr><td>sitkLabelUInt16</td><td>RLE label of unsigned 16 bit integers</td></tr>
<tr><td>sitkLabelUInt32</td><td>RLE label of unsigned 32 bit integers</td></tr>
<tr><td>sitkLabelUInt64</td><td>RLE label of unsigned 64 bit integers</td></tr>
</table>
There is also sitkUnknown, which is used for undefined or erroneous pixel ID's. It has a value of -1.
The 64-bit integer types are not available on all distributions. When not available the value is sitkUnknown.
More Information about the Image class can be obtained in the Docstring
SimpleITK classes and functions have the Docstrings derived from the C++ definitions and the Doxygen documentation.
End of explanation
print image.GetSize()
print image.GetOrigin()
print image.GetSpacing()
print image.GetDirection()
print image.GetNumberOfComponentsPerPixel()
Explanation: Accessing Attributes
If you are familiar with ITK, then these methods will follow your expectations:
End of explanation
print image.GetWidth()
print image.GetHeight()
print image.GetDepth()
Explanation: Note: The starting index of a SimpleITK Image is always 0. If the output of an ITK filter has non-zero starting index, then the index will be set to 0, and the origin adjusted accordingly.
The size of the image's dimensions have explicit accessors:
End of explanation
print image.GetDimension()
print image.GetPixelIDValue()
print image.GetPixelIDTypeAsString()
Explanation: Since the dimension and pixel type of a SimpleITK image is determined at run-time accessors are needed.
End of explanation
print image_2D.GetSize()
print image_2D.GetDepth()
Explanation: What is the depth of a 2D image?
End of explanation
print image_RGB.GetDimension()
print image_RGB.GetSize()
print image_RGB.GetNumberOfComponentsPerPixel()
Explanation: What is the dimension and size of a Vector image?
End of explanation
for key in image.GetMetaDataKeys():
print "\"{0}\":\"{1}\"".format(key, image.GetMetaData(key))
Explanation: For certain file types such as DICOM, additional information about the image is contained in the meta-data dictionary.
End of explanation
help(image.GetPixel)
print image.GetPixel(0, 0, 0)
image.SetPixel(0, 0, 0, 1)
print image.GetPixel(0, 0, 0)
print image[0,0,0]
image[0,0,0] = 10
print image[0,0,0]
Explanation: Accessing Pixels
There are the member functions GetPixel and SetPixel which provide an ITK-like interface for pixel access.
End of explanation
nda = sitk.GetArrayFromImage(image)
print nda
help(sitk.GetArrayFromImage)
nda = sitk.GetArrayFromImage(image_RGB)
img = sitk.GetImageFromArray(nda)
img.GetSize()
help(sitk.GetImageFromArray)
img = sitk.GetImageFromArray(nda, isVector=True)
print img
Explanation: Conversion between numpy and SimpleITK
End of explanation
print img.GetSize()
print nda.shape
print nda.shape[::-1]
Explanation: The order of index and dimensions need careful attention during conversion
ITK's Image class does not have a bracket operator. It has a GetPixel which takes an ITK Index object as an argument, which is an array ordered as (x,y,z). This is the convention that SimpleITK's Image class uses for the GetPixel method as well.
While in numpy, an array is indexed in the opposite order (z,y,x).
End of explanation
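A quick way to convince yourself of the index-order difference is to read the same voxel through both interfaces; the indices below are arbitrary in-range values for the 3D image created earlier.
# SimpleITK indexes as (x, y, z); the converted numpy array is indexed as [z, y, x]
arr = sitk.GetArrayFromImage(image)
x, y, z = 5, 4, 3
print(image.GetPixel(x, y, z))
print(arr[z, y, x])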
sitk.Show(image)
sitk.Show?
Explanation: Are we still dealing with Image, because I haven't seen one yet...
While SimpleITK does not do visualization, it does contain a built in Show method. This function writes the image out to disk and then launches a program for visualization. By default it is configured to use ImageJ, because it readily supports all the image types which SimpleITK has and loads very quickly. However, it's easily customizable by setting environment variables.
End of explanation
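As a rough sketch of that customization, the viewer can typically be overridden through an environment variable; the variable name (assumed here to be SITK_SHOW_COMMAND) and the viewer path are examples and should be checked against your SimpleITK version.
import os
# assumed override of the default viewer; the path below is only an example
os.environ['SITK_SHOW_COMMAND'] = '/Applications/Fiji.app/Contents/MacOS/ImageJ-macosx'
sitk.Show(image, 'sample image')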
%matplotlib inline
import matplotlib.pyplot as plt
z = 0
slice = sitk.GetArrayFromImage(image)[z,:,:]
plt.imshow(slice)
Explanation: By converting into a numpy array, matplotlib can be used for visualization and integration into the scientific Python environment.
End of explanation |
9,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Mechanics 101
Code: tensorflow/examples/tutorials/mnist/
The goal of this tutorial is to show how to use TensorFlow to train and evaluate a simple feed-forward neural network for handwritten digit classification using the (classic) MNIST data set. The intended audience is experienced machine learning users interested in using TensorFlow.
These tutorials are therefore not intended to teach the basics of machine learning.
Please make sure you have followed the instructions in the TensorFlow installation guide before starting this tutorial.
Tutorial Files
This tutorial references the following files:
File|Purpose
-----|-----
mnist.py | The code to build a fully connected MNIST model.
fully_connected_feed.py|The main code to train the built MNIST model against the downloaded data set, using a feed dictionary as the model input.
Simply run fully_connected_feed.py directly to start training.
Prepare the Data
MNIST is a classic problem in machine learning: look at 28x28 pixel grayscale images of handwritten digits and determine which digit (0-9) each image represents.
For more information, refer to Yann LeCun's MNIST page or Chris Olah's visualizations of MNIST.
Download
At the top of the run_training() method, the input_data.read_data_sets() function ensures that the correct data has been downloaded to your local training folder, then unpacks that data and returns a dictionary of DataSet instances.
Step1: Note: the fake_data flag is used for unit testing purposes and may be safely ignored by the reader.
Data set|Purpose
---|---
data_sets.train|55000 images and labels, for primary training.
data_sets.validation|5000 images and labels, for iterative validation of training accuracy.
data_sets.test|10000 images and labels, for final testing of trained accuracy.
Inputs and Placeholders
The placeholder_inputs() function creates two tf.placeholder ops that define the shape of the inputs, including the batch_size, and into which the actual training examples will be fed later. images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
mnist.IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
Step2: Further down, in the training loop, the full image and label data sets are sliced to fit the batch_size for each step, matched with these placeholder ops, and then passed into the sess.run() function using the feed_dict parameter.
Build the Graph
After creating placeholders for the data, the graph is built from mnist.py according to a three-stage pattern: inference(), loss(), and training().
inference() - Builds the graph as far as required for running the network forward to make predictions.
loss() - Adds to the inference graph the ops required to generate the loss.
training() - Adds to the loss graph the ops required to compute and apply gradients.
Inference
The inference() function builds the graph as far as needed to return the tensor that would contain the output prediction.
It takes the images placeholder as input and builds on top of it a pair of fully connected layers with ReLU (Rectified Linear Units) activation, followed by a ten-node linear layer specifying the output logits.
Each layer is created beneath a unique tf.name_scope that acts as a prefix to the items created within that scope.
Step3: Within the defined scope, the weights and biases to be used by each of these layers are generated into tf.Variable instances, with their desired shapes:
Step4: For example, when these layers are created under the hidden1 scope, the unique name given to the weights variable would be "hidden1/weights".
Each variable is given initializer ops as part of its construction.
In this most common case, the weights are initialized with tf.truncated_normal and given their 2-D shape, where the first dimension represents the number of units in the layer from which the weights connect and the second dimension represents the number of units in the layer to which the weights connect. For the first layer, named hidden1, the dimensions are [IMAGE_PIXELS, hidden1_units] because the weights connect the image inputs to the hidden1 layer. The tf.truncated_normal initializer generates a random distribution with the given mean and standard deviation.
Then the biases are initialized with tf.zeros to ensure they start with all zero values, and their shape is simply the number of units in the layer to which they connect.
The graph's three primary ops - two tf.nn.relu ops wrapping tf.matmul for the hidden layers and one extra tf.matmul for the logits - are then created, each in turn, with their tf.Variable instances connected to the input placeholder or the output tensor of the previous layer.
Step5: Finally, the logits tensor that will contain the output is returned.
Loss
The loss() function further builds the graph by adding the required loss ops.
First, the values from labels_placeholder are converted to 64-bit integers. Then, a tf.nn.sparse_softmax_cross_entropy_with_logits op is used to
automatically produce one-hot labels from labels_placeholder and compare them with the output logits from inference().
Step6: It then uses tf.reduce_mean to average the cross entropy values across the batch dimension (the first dimension) and uses that as the total loss.
Step7: Finally, the tensor containing the loss value is returned.
Note: Cross-entropy is an idea from information theory that lets us describe how bad it is to believe the predictions of the neural network, given what is actually true. For more details, see the blog post Visual Information Theory (http://colah.github.io/posts/2015-09-Visual-Information/)
Step8: Next, we instantiate a tf.train.GradientDescentOptimizer responsible for applying gradients with the requested learning rate.
Step9: We then generate a single variable to hold the global training step counter, and use the tf.train.Optimizer.minimize op both to update the trainable weights in the system and to increment the global step. By convention this op is known as train_op, and it is what a TensorFlow session must run in order to induce one full step of training (see below).
Step10: Train the Model
Once the graph is built, it can be iteratively trained and evaluated in a loop controlled by the user code in fully_connected_feed.py.
The Graph
At the top of the run_training() function is a Python with command that indicates all of the built ops are to be associated with the default global tf.Graph instance.
Step11: A tf.Graph is a collection of ops that may be executed together as a group. Most TensorFlow uses will only need to rely on the single default graph.
More complicated uses with multiple graphs are possible, but beyond the scope of this simple tutorial.
The Session
Once all of the build preparation has been completed and all of the necessary ops generated, a tf.Session is created for running the graph.
Step12: Alternately, a Session may be generated inside a with block to limit its scope:
Step13: The empty parameter list to Session indicates that this code will attach to (or create if not yet created) the default local session.
Immediately after creating the session, all of the tf.Variable instances are initialized by calling tf.Session.run on their initialization op.
Step14: The tf.Session.run method runs the complete subset of the graph that corresponds to the ops passed as parameters. In this first call, the init op contains only the variable initializers, tf.group. None of the rest of the graph is run here; that happens in the training loop below.
The Training Loop
After initializing the variables with the session, training may begin.
Each training step is controlled by the user code, and the simplest loop that can do useful training is:
Step15: However, this tutorial is slightly more complicated in that it must also slice up the input data for each step to match the previously generated placeholders.
Feed the Graph
For each step, the code generates a feed dictionary that contains the set of examples on which to train for that step, keyed by the placeholder ops they represent.
In the fill_feed_dict function, the given DataSet is queried for its next batch_size set of images and labels, and tensors matching the placeholders are filled with the next batch of images and labels.
Step16: A Python dictionary object is then generated with the placeholders as keys and the corresponding feed tensors as values.
Step17: This dictionary is then passed into the sess.run() function's feed_dict parameter to provide the input examples for this step of training.
Check the Status
The code specifies two values to fetch in its run call: [train_op, loss].
Step18: Because there are two values to fetch, sess.run() returns a tuple with two items. Each Tensor in the list of values to fetch corresponds to a numpy array in the returned tuple, filled with the value of that tensor during this step of training. Since train_op is an Operation with no output value, the corresponding element in the returned tuple is None and is therefore discarded. However, the value of the loss tensor may become NaN if the model diverges during training, so we capture this value for logging.
Assuming the training runs fine without NaNs, the training loop prints a simple status line every 100 steps to let the user know the state of training.
Step19: Visualize the Status
In order to emit the events file used by TensorBoard, all of the summaries (in this case, only one) are collected into a single op during the graph building phase.
Step20: Once the session is created, a tf.summary.FileWriter may be instantiated to write the events file, which contains both the graph itself and the values of the summaries.
Step21: Finally, each time the summary is evaluated, new events are written to the events file by passing the output to the event writer's add_summary() function.
Step22: When the events files are written, TensorBoard may be run against the training folder to display the values from the summaries.
Note: For more information about how to build and run TensorBoard, please see the accompanying tutorial TensorBoard: Visualizing Learning.
Save a Checkpoint
In order to emit a checkpoint file that may later be used to restore the model for further training or evaluation, we instantiate a tf.train.Saver.
Step23: In the training loop, the tf.train.Saver.save method is periodically called to write a checkpoint file to the training directory with the current values of all the trainable variables.
Step24: At some later point, training may be resumed by using the tf.train.Saver.restore method to reload the model parameters.
Step25: Evaluate the Model
Every thousand steps, the code attempts to evaluate the model against both the training and test data sets. The do_eval function is called three times, for the training, validation, and test data sets.
Step26: Note that more complicated usage would usually sequester data_sets.test to only be checked after a significant amount of hyperparameter tuning. For the sake of the simple MNIST problem, however, we evaluate against all of the data here.
Build the Eval Graph
Before entering the training loop, the evaluation() function from mnist.py should be called with the same logits and labels parameters as the loss() function. This is done in order to build the Eval op first.
Step27: The evaluation() function generates a tf.nn.in_top_k op that marks a model output as correct if the true label can be found among the K most likely predictions. Here we set the value of K to 1, so a prediction is only considered correct if it matches the true label.
Step28: Eval Output
A loop can then be created that fills a feed_dict and calls sess.run() against the eval_correct op to evaluate the model on the given data set.
Step29: true_count变量会累加所有in_top_k操作判定为正确的预测之和。接下来,只需要将正确测试的总数,除以例子总数,就可以得出准确率了。 | Python Code:
data_sets = input_data.read_data_sets(FLAGS.train_dir, FLAGS.fake_data)
Explanation: TensorFlow运作方式入门
代码:tensorflow/examples/tutorials/mnist/
本篇教程的目的,是向大家展示如何利用TensorFlow使用(经典)MNIST数据集训练并评估一个用于识别手写数字的简易前馈神经网络(feed-forward neural network)。我们的目标读者,是有兴趣使用TensorFlow的资深机器学习人士。
因此,撰写该系列教程并不是为了教大家机器学习领域的基础知识。
在学习本教程之前,请确保您已按照安装TensorFlow教程中的要求,完成了安装。
教程使用的文件
本教程引用如下文件:
文件|目的
-----|-----
mnist.py | 构建一个完全连接(fully connected)的MINST模型所需的代码。
fully_connected_feed.py|利用下载的数据集训练构建好的MNIST模型的主要代码,以数据反馈字典(feed dictionary)的形式作为输入模型。
只需要直接运行fully_connected_feed.py文件,就可以开始训练
准备数据
MNIST是机器学习领域的一个经典问题,指的是让机器查看一系列大小为28x28像素的手写数字灰度图像,并判断这些图像代表0-9中的哪一个数字。
更多相关信息,请查阅Yann LeCun网站中关于MNIST的介绍或者Chris Olah对MNIST的可视化探索。
下载
在run_training()方法的一开始,input_data.read_data_sets()函数会确保你的本地训练文件夹中,已经下载了正确的数据,然后将这些数据解压并返回一个含有DataSet实例的字典。
End of explanation
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
mnist.IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
Explanation: 注意:fake_data标记是用于单元测试的,读者可以不必理会。
数据集|目的
---|---
data_sets.train|55000个图像和标签(labels),作为主要训练集。
data_sets.validation|5000个图像和标签,用于迭代验证训练准确度。
data_sets.test|10000个图像和标签,用于最终测试训练准确度(trained accuracy)。
输入与占位符
placeholder_inputs()函数将生成两个tf.placeholder操作,定义传入参数的维度,包括batch_size值,后续还会将实际的训练用例传入图。images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
mnist.IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
End of explanation
with tf.name_scope('hidden1'):
Explanation: 在训练循环(training loop)的后续步骤中,传入的整个图像和标签数据集会被切片,以符合每一个操作所设置的batch_size值,占位符操作将会填补以符合这个batch_size值。然后使用feed_dict参数,将数据传入sess.run()函数。
构建图
在为数据创建占位符之后,就可以运行mnist.py文件,经过三阶段的模式函数操作:inference(), loss(),和training()。图就构建完成了。
inference() —— 尽可能地构建好图,满足促使神经网络向前反馈并做出预测的要求。
loss() —— 往inference图中添加生成损失(loss)所需要的操作(ops)。
training() —— 往损失图中添加计算并应用梯度(gradients)所需的操作。
推理
inference()函数会尽可能地构建图,做到返回包含了预测结果(output prediction)的Tensor。
它接受图像占位符为输入,在此基础上借助ReLu(Rectified Linear Units)激活函数,构建一对完全连接层(layers),以及一个有着十个节点(node)、指明了输出logits模型的线性层。
每一层都创建于一个唯一的tf.name_scope之下,创建于该作用域之下的所有元素都将带有其前缀。
End of explanation
weights = tf.Variable(
tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
name='weights')
biases = tf.Variable(tf.zeros([hidden1_units]),
name='biases')
Explanation: 在定义的作用域中,每一层所使用的权重和偏差都在tf.Variable实例中生成,并且包含了各自期望的维度:
End of explanation
hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
logits = tf.matmul(hidden2, weights) + biases
Explanation: 例如,当这些层是在hidden1作用域下生成时,赋予权重变量的独特名称将会是"hidden1/weights"。
每个变量在构建时,都会获得初始化操作(initializer ops)。
在这种最常见的情况下,通过tf.truncated_normal函数初始化权重变量,给赋予的shape则是一个二维tensor,其中第一个维度代表该层中权重变量所连接(connect from)的单元数量,第二个维度代表该层中权重变量所连接到的(connect to)单元数量。对于名叫hidden1的第一层,相应的维度则是[IMAGE_PIXELS, hidden1_units],因为权重变量将图像输入连接到了hidden1层。tf.truncated_normal初始函数将根据所得到的均值和标准差,生成一个随机分布。
然后,通过tf.zeros函数初始化偏差变量(biases),确保所有偏差的起始值都是0,而它们的shape则是其在该层中所接到的(connect to)单元数量。
图的三个主要操作,分别是两个tf.nn.relu操作,它们中嵌入了隐藏层所需的tf.matmul;以及logits模型所需的另外一个tf.matmul。三者依次生成,各自的tf.Variable实例则与输入占位符或下一层的输出tensor所连接。
End of explanation
labels = tf.to_int64(labels)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits, name='xentropy')
Explanation: 最后,程序会返回包含了输出结果的logits张量。
损失
loss()函数通过添加所需的损失操作,进一步构建图。
首先,labels_placeholer中的值将被转化为64位整型,然后,自动使用tf.nn.sparse_softmax_cross_entropy_with_logits操作,
将labels_placeholer编码为一个独热码,并与inference()的输出logits比较。
End of explanation
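For comparison, a rough sketch of what the sparse op does implicitly (explicit one-hot encoding followed by the dense cross-entropy op) is shown below; the depth of 10 is assumed from the number of MNIST classes and this fragment is not part of mnist.py.
# roughly equivalent explicit form (sketch only)
onehot_labels = tf.one_hot(labels, depth=10)  # 10 = NUM_CLASSES for MNIST
cross_entropy_dense = tf.nn.softmax_cross_entropy_with_logits(
    labels=onehot_labels, logits=logits, name='xentropy_dense')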
loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
Explanation: 然后,使用tf.reduce_mean函数,计算batch维度(第一维度)下交叉熵(cross entropy)的平均值,将将该值作为总损失。
End of explanation
tf.summary.scalar('loss', loss)
Explanation: 最后,程序会返回包含了损失值的张量。
注意:交叉熵是信息理论中的概念,可以让我们描述如果基于已有事实,相信神经网络所做的推测最坏会导致什么结果。更多详情,请查阅博文《可视化信息理论》(http://colah.github.io/posts/2015-09-Visual-Information/)
训练
training()函数添加了通过梯度下降(gradient descent)将损失最小化所需的操作。
首先,该函数从loss()函数中获取损失张量,将其交给tf.summary.scalar,后者在与tf.summary.FileWriter(见下文)配合使用时,可以向事件文件(events file)中生成汇总值(summary values)。在本篇教程中,每次写入汇总值时,它都会释放损失Tensor的当前值(snapshot value)。
End of explanation
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
Explanation: 接下来,我们实例化一个tf.train.GradientDescentOptimizer,负责按照所要求的学习效率(learning rate)应用梯度下降法(gradients)。
End of explanation
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)
Explanation: 之后,我们生成一个变量用于保存全局训练步骤(global training step)的数值,并使用tf.train.Optimizer.minimize操作更新系统中的训练权重(trainable weights)、增加全局步骤。根据惯例,这个操作被称为train_op,是TensorFlow会话(session)诱发一个完整训练步骤所必须运行的操作(见下文)。
End of explanation
with tf.Graph().as_default():
Explanation: 训练模型
一旦图构建完毕,就通过fully_connected_feed.py文件中的用户代码进行循环地迭代式训练和评估。
图
在run_training()这个函数的一开始,是一个Python语言中的with命令,这个命令表明所有已经构建的操作都要与默认的tf.Graph全局实例关联起来。
End of explanation
sess = tf.Session()
Explanation: tf.Graph实例是一系列可以作为整体执行的操作。TensorFlow的大部分场景只需要依赖默认图一个实例即可。
利用多个图的更加复杂的使用场景也是可能的,但是超出了本教程的范围。
会话
完成全部的构建准备、生成全部所需的操作之后,我们就可以创建一个tf.Session,用于运行图。
End of explanation
with tf.Session() as sess:
Explanation: 另外,也可以利用with代码块生成Session,限制作用域:
End of explanation
init = tf.global_variables_initializer()
sess.run(init)
Explanation: Session函数中没有传入参数,表明该代码将会依附于(如果还没有创建会话,则会创建新的会话)默认的本地会话。
生成会话之后,所有tf.Variable实例都会立即通过调用各自初始化操作中的tf.Session.run函数进行初始化。
End of explanation
for step in xrange(FLAGS.max_steps):
sess.run(train_op)
Explanation: tf.Session.run方法将会运行图中与作为参数传入的操作相对应的完整子集。在初次调用时,init操作只包含了变量初始化程序tf.group。图的其他部分不会在这里,而是在下面的训练循环运行。
训练循环
完成会话中变量的初始化之后,就可以开始训练了。
训练的每一步都是通过用户代码控制,而能实现有效训练的最简单循环就是:
End of explanation
images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
FLAGS.fake_data)
Explanation: 但是,本教程中的例子要更为复杂一点,原因是我们必须把输入的数据根据每一步的情况进行切分,以匹配之前生成的占位符。
向图输入参数
执行每一步时,我们的代码会生成一个输入字典(feed dictionary),其中包含对应步骤中训练所要使用的例子,这些例子的键就是其所代表的占位符操作。
fill_feed_dict函数中,会查询给定的DataSet,给下一batch_size批次的图像和标签,与占位符相匹配的张量则会载入下一批次的图像和标签。
End of explanation
feed_dict = {
images_placeholder: images_feed,
labels_placeholder: labels_feed,
}
Explanation: 然后,以占位符为键,创建一个Python字典对象,键值则是其代表的输入张量。
End of explanation
for step in xrange(FLAGS.max_steps):
feed_dict = fill_feed_dict(data_sets.train,
images_placeholder,
labels_placeholder)
_, loss_value = sess.run([train_op, loss],
feed_dict=feed_dict)
Explanation: 这个字典随后作为feed_dict参数,传入sess.run()函数中,为这一步的训练提供输入样例。
检查状态
代码中明确其需要获取的两个值:[train_op, loss]。
End of explanation
if step % 100 == 0:
print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration))
Explanation: 因为要获取这两个值,sess.run()会返回一个有两个元素的元组。其中每一个张量,对应了返回的元组中的numpy数组,而这些数组中包含了当前这步训练中对应张量的值。由于train_op并不会产生输出,其在返回的元祖中的对应元素就是None,所以会被抛弃。但是,如果模型在训练中出现偏差,loss张量的值可能会变成NaN,所以我们要获取它的值,并记录下来。
假设训练一切正常,没有出现NaN,训练循环会每隔100个训练步骤,就打印一行简单的状态文本,告知用户当前的训练状态。
End of explanation
summary = tf.summary.merge_all()
Explanation: 状态可视化
为了释放TensorBoard所使用的事件文件(events file),所有的即时数据(在这里只有一个)都要在图构建阶段合并至一个操作(op)中。
End of explanation
summary_writer = tf.summary.FileWriter(FLAGS.train_dir, sess.graph)
Explanation: 在创建好会话(session)之后,可以实例化一个tf.summary.FileWriter,用于写入包含了图表本身和即时数据具体值的事件文件。
End of explanation
summary_str = sess.run(summary, feed_dict=feed_dict)
summary_writer.add_summary(summary_str, step)
Explanation: 最后,每次运行summary时,都会往事件文件中写入最新的即时数据,函数的输出会传入事件文件读写器(writer)的add_summary()函数。
End of explanation
saver = tf.train.Saver()
Explanation: 事件文件写入完毕之后,可以就训练文件夹打开一个TensorBoard,查看即时数据的情况。
注意:了解更多如何构建并运行TensorBoard的信息,请查看相关教程Tensorboard:训练过程可视化。
保存检查点
为了得到可以用来后续恢复模型以进一步训练或评估的检查点文件(checkpoint file),我们实例化一个tf.train.Saver。
End of explanation
saver.save(sess, FLAGS.train_dir, global_step=step)
Explanation: 在训练循环中,将定期调用tf.train.Saver.save方法,向训练文件夹中写入包含了当前所有可训练变量值得检查点文件。
End of explanation
saver.restore(sess, FLAGS.train_dir)
Explanation: 这样,我们以后就可以使用tf.train.Saver.restore方法,重载模型的参数,继续训练。
End of explanation
print('Training Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.train)
print('Validation Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.validation)
print('Test Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.test)
Explanation: 评估模型
每隔一千个训练步骤,我们的代码会尝试使用训练数据集与测试数据集,对模型进行评估。do_eval函数会被调用三次,分别使用训练数据集、验证数据集合测试数据集。
End of explanation
eval_correct = mnist.evaluation(logits, labels_placeholder)
Explanation: 注意,更复杂的使用场景通常是,先隔绝data_sets.test测试数据集,只有在大量的超参数优化调整(hyperparameter tuning)之后才进行检查。但是,由于MNIST问题比较简单,我们在这里一次性评估所有的数据。
构建评估图
在进入训练循环之前,我们应该先调用mnist.py文件中的evaluation()函数,传入的logits和标签参数要与loss()函数的一致。这样做事为了先构建Eval操作。
End of explanation
eval_correct = tf.nn.in_top_k(logits, labels, 1)
Explanation: evaluation()函数会生成tf.nn.in_top_k操作,如果在K个最有可能的预测中可以发现真的标签,那么这个操作就会将模型输出标记为正确。在本文中,我们把K的值设置为1,也就是只有在预测是真的标签时,才判定它是正确的。
End of explanation
for step in xrange(steps_per_epoch):
feed_dict = fill_feed_dict(data_set,
images_placeholder,
labels_placeholder)
true_count += sess.run(eval_correct, feed_dict=feed_dict)
Explanation: 评估输出
之后,我们可以创建一个循环,往其中添加feed_dict,并在调用sess.run()函数时传入eval_correct操作,目的就是用给定的数据集评估模型。
End of explanation
precision = true_count / num_examples
print(' Num examples: %d Num correct: %d Precision @ 1: %0.04f' %
(num_examples, true_count, precision))
Explanation: true_count变量会累加所有in_top_k操作判定为正确的预测之和。接下来,只需要将正确测试的总数,除以例子总数,就可以得出准确率了。
End of explanation |
9,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Rotations" data-toc-modified-id="Rotations-1"><span class="toc-item-num">1 </span>Rotations</a></div><div class="lev1 toc-item"><a href="#PCA" data-toc-modified-id="PCA-2"><span class="toc-item-num">2 </span>PCA</a></div><div class="lev1 toc-item"><a href="#FastFourier-Transformation" data-toc-modified-id="FastFourier-Transformation-3"><span class="toc-item-num">3 </span>FastFourier Transformation</a></div><div class="lev1 toc-item"><a href="#Save-python-object-with-pickle" data-toc-modified-id="Save-python-object-with-pickle-4"><span class="toc-item-num">4 </span>Save python object with pickle</a></div><div class="lev1 toc-item"><a href="#Progress-Bar" data-toc-modified-id="Progress-Bar-5"><span class="toc-item-num">5 </span>Progress Bar</a></div><div class="lev1 toc-item"><a href="#Check-separations-by-histogram-and-scatter-plot" data-toc-modified-id="Check-separations-by-histogram-and-scatter-plot-6"><span class="toc-item-num">6 </span>Check separations by histogram and scatter plot</a></div><div class="lev1 toc-item"><a href="#Plot-Cumulative-Lift" data-toc-modified-id="Plot-Cumulative-Lift-7"><span class="toc-item-num">7 </span>Plot Cumulative Lift</a></div><div class="lev1 toc-item"><a href="#GBM-skitlearn" data-toc-modified-id="GBM-skitlearn-8"><span class="toc-item-num">8 </span>GBM skitlearn</a></div><div class="lev1 toc-item"><a href="#Xgboost" data-toc-modified-id="Xgboost-9"><span class="toc-item-num">9 </span>Xgboost</a></div><div class="lev1 toc-item"><a href="#LightGBM" data-toc-modified-id="LightGBM-10"><span class="toc-item-num">10 </span>LightGBM</a></div><div class="lev1 toc-item"><a href="#Control-plots
Step3: Rotations
Step5: PCA
Step8: FastFourier Transformation
Step9: Save python object with pickle
Step10: Progress Bar
There are many packages to create a progress bar in python, the one I use is tqdm
- tqdm
Step11: Check separations by histogram and scatter plot
Step13: Plot Cumulative Lift
Step15: GBM skitlearn
Step17: Xgboost
To instal xgboost
Step18: LightGBM
New way to install the package
Step21: Control plots
Step25: Tuning parameters of a model
Grid search with skitlearn
Random search with skitlearn
Bayesian Optimization Search
https | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import precision_recall_curve
df = pd.read_csv("iris.csv")
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Rotations" data-toc-modified-id="Rotations-1"><span class="toc-item-num">1 </span>Rotations</a></div><div class="lev1 toc-item"><a href="#PCA" data-toc-modified-id="PCA-2"><span class="toc-item-num">2 </span>PCA</a></div><div class="lev1 toc-item"><a href="#FastFourier-Transformation" data-toc-modified-id="FastFourier-Transformation-3"><span class="toc-item-num">3 </span>FastFourier Transformation</a></div><div class="lev1 toc-item"><a href="#Save-python-object-with-pickle" data-toc-modified-id="Save-python-object-with-pickle-4"><span class="toc-item-num">4 </span>Save python object with pickle</a></div><div class="lev1 toc-item"><a href="#Progress-Bar" data-toc-modified-id="Progress-Bar-5"><span class="toc-item-num">5 </span>Progress Bar</a></div><div class="lev1 toc-item"><a href="#Check-separations-by-histogram-and-scatter-plot" data-toc-modified-id="Check-separations-by-histogram-and-scatter-plot-6"><span class="toc-item-num">6 </span>Check separations by histogram and scatter plot</a></div><div class="lev1 toc-item"><a href="#Plot-Cumulative-Lift" data-toc-modified-id="Plot-Cumulative-Lift-7"><span class="toc-item-num">7 </span>Plot Cumulative Lift</a></div><div class="lev1 toc-item"><a href="#GBM-skitlearn" data-toc-modified-id="GBM-skitlearn-8"><span class="toc-item-num">8 </span>GBM skitlearn</a></div><div class="lev1 toc-item"><a href="#Xgboost" data-toc-modified-id="Xgboost-9"><span class="toc-item-num">9 </span>Xgboost</a></div><div class="lev1 toc-item"><a href="#LightGBM" data-toc-modified-id="LightGBM-10"><span class="toc-item-num">10 </span>LightGBM</a></div><div class="lev1 toc-item"><a href="#Control-plots:-ROC,-Precision-Recall,-ConfusionMatrix,-top_k,-classification-report," data-toc-modified-id="Control-plots:-ROC,-Precision-Recall,-ConfusionMatrix,-top_k,-classification-report,-11"><span class="toc-item-num">11 </span>Control plots: ROC, Precision-Recall, ConfusionMatrix, top_k, classification report,</a></div><div class="lev1 toc-item"><a href="#Tuning-parameters-of-a-model" data-toc-modified-id="Tuning-parameters-of-a-model-12"><span class="toc-item-num">12 </span>Tuning parameters of a model</a></div><div class="lev2 toc-item"><a href="#Grid-search-with-skitlearn" data-toc-modified-id="Grid-search-with-skitlearn-121"><span class="toc-item-num">12.1 </span>Grid search with skitlearn</a></div><div class="lev2 toc-item"><a href="#Random-search-with-skitlearn" data-toc-modified-id="Random-search-with-skitlearn-122"><span class="toc-item-num">12.2 </span>Random search with skitlearn</a></div><div class="lev2 toc-item"><a href="#Bayesian-Optimization--Search" data-toc-modified-id="Bayesian-Optimization--Search-123"><span class="toc-item-num">12.3 </span>Bayesian Optimization Search</a></div>
End of explanation
def rotMat3D(a,r):
    Return the matrix that rotates vector a onto vector r; numpy arrays are required
a = a/np.linalg.norm(a)
r = r/np.linalg.norm(r)
I = np.eye(3)
v = np.cross(a,r)
c = np.inner(a,r)
v_x = np.array([[0,-v[2],v[1]],[v[2],0,-v[0]],[-v[1],v[0],0]])
return I + v_x + np.matmul(v_x,v_x)/(1+c)
# example usage
z_old = np.array([0, 0, 1])
z = np.array([1, 1, 1])
R = rotMat3D(z, z_old)
print(z, R.dot(z))
print(z_old, R.dot(z_old))
print(np.linalg.norm(z), np.linalg.norm(R.dot(z)))
def createR2D(vector):
    Rotate the given 2D vector onto [0, 1]; a numpy array is required
m = np.linalg.norm(vector)
c, s = vector[1]/m , vector[0]/m
R2 = np.array([c, -s, s, c]).reshape(2,2)
return R2
# example usage
y_old = np.array([3,4])
R2 = createR2D(y_old)
print(y_old, R2.dot(y_old))
Explanation: Rotations
End of explanation
from sklearn import decomposition
def pca_decomposition(df):
Perform sklearn PCA. The returned components are already ordered by the explained variance
pca = decomposition.PCA()
pca.fit(df)
return pca
def pca_stats(pca):
print("variance explained:\n", pca.explained_variance_ratio_)
print("pca components:\n", pca.components_)
def plot_classcolor(df, x='y', y='x', hue=None):
sns.lmplot(x, y, data=df, hue=hue, fit_reg=False)
    plt.title("({} vs {})".format(y, x))
plt.show()
def add_pca_to_df(df, allvars, pca):
df[["pca_" + str(i) for i, j in enumerate(pca.components_)
        ]] = pd.DataFrame(pca.transform(df[allvars]))  # pca is already fitted, so transform is enough
pca = pca_decomposition( df[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']] )
pca_stats(pca)
add_pca_to_df(df, ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'], pca)
plot_classcolor(df, 'pca_0', 'pca_1', 'species_id')
Explanation: PCA
End of explanation
from scipy.fftpack import fft, rfft, irfft, fftfreq
def rfourier_transformation(df, var, pass_high=-1, pass_low=-1, verbose=True, plot=True):
    """Return the signal after low and high filter applied.
    Use verbose and plot to see stats and plot the signal before and after the filter."""
low = pass_high
high = pass_low
if (high < low) and (high>0):
print("Cannot be pass_low < pass_high!!")
return -1
time = pd.Series(df.index.values[1:10] -
df.index.values[:10 - 1]) # using the first 10 data
dt = time.describe()['50%']
if (verbose):
        print(
            """sampling time: {0} s
            sampling frequency: {1} hz
            max freq in rfft: {2} hz
            """.format(dt, 1 / dt, 1 / (dt * 2), 1 / (dt)))
signal = df[var]
freq = fftfreq(signal.size, d=dt)
f_signal = rfft(signal)
m = {}
if (low > 0):
f_signal_lowcut = f_signal.copy()
f_signal_lowcut[(freq < low)] = 0
cutted_signal_low = irfft(f_signal_lowcut)
m['low'] = 1
if (high > 0):
f_signal_highcut = f_signal.copy()
f_signal_highcut[(freq > high)] = 0
cutted_signal_high = irfft(f_signal_highcut)
m['high'] = 1
if (high > 0) & (low > 0):
f_signal_bwcut = f_signal.copy()
f_signal_bwcut[(freq < low) | (freq > high)] = 0
cutted_signal_bw = irfft(f_signal_bwcut)
m['bw'] = 1
m['low'] = 2
m['high'] = 3
n = len(freq)
if (plot):
f, axarr = plt.subplots(len(m) + 1, 1, sharex=True, figsize=(18,15))
f.canvas.set_window_title(var)
# time plot
axarr[0].plot(signal)
axarr[0].set_title('Signal')
if 'bw' in m:
axarr[m['bw']].plot(df.index, cutted_signal_bw)
axarr[m['bw']].set_title('Signal after low-high cut')
if 'low' in m:
axarr[m['low']].plot(df.index, cutted_signal_low)
axarr[m['low']].set_title('Signal after high filter (low frequencies rejected)')
if 'high' in m:
axarr[m['high']].plot(df.index, cutted_signal_high)
axarr[m['high']].set_title('Signal after low filter (high frequencies rejected)')
plt.show()
# spectrum
f = plt.figure(figsize=(18,8))
plt.plot(freq[0:n // 2], f_signal[:n // 2])
f.suptitle('Frequency spectrum')
if 'low' in m:
plt.axvline(x=low, ymin=0., ymax=1, linewidth=2, color='red')
if 'high' in m:
plt.axvline(x=high, ymin=0., ymax=1, linewidth=2, color='red')
plt.show()
if 'bw' in m:
return cutted_signal_bw
elif 'low' in m:
return cutted_signal_low
elif 'high' in m:
return cutted_signal_high
else:
return signal
acc = pd.read_csv('accelerations.csv')
signal = rfourier_transformation(acc, 'x', pass_high=0.1, pass_low=0.5, verbose=True, plot=True)
Explanation: FastFourier Transformation
End of explanation
# save in pickle with gzip compression
import pickle
import gzip
def save(obj, filename, protocol=0):
file = gzip.GzipFile(filename, 'wb')
file.write(pickle.dumps(obj, protocol))
file.close()
def load(filename):
file = gzip.GzipFile(filename, 'rb')
buffer = ""
while True:
data = file.read()
if data == "":
break
buffer += data
obj = pickle.loads(buffer)
file.close()
return obj
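# Example usage (a minimal sketch): round-trip an arbitrary python object.
# 'my_object.pklz' is just an illustrative file name.
my_object = {'weights': np.arange(10), 'name': 'test'}
save(my_object, 'my_object.pklz', protocol=2)
restored = load('my_object.pklz')
print(restored['name'], restored['weights'])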
Explanation: Save python object with pickle
End of explanation
# Simple bar, the one to be used in a general python code
import tqdm
for i in tqdm.tqdm(range(0, 1000)):
pass
# Bar to be used in a jupyter notebook
for i in tqdm.tqdm_notebook(range(0, 1000)):
pass
# custom update bar
import time
tot = 4000
bar = tqdm.tqdm_notebook(desc='Status ', total=tot, mininterval=0.5, miniters=5, unit='cm', unit_scale=True)
# with the file options you can show the progress bar into a file
# mininterval: time in seconds to see an update on the progressbar
# miniters: Tweak this and `mininterval` to get very efficient loops, if 0 will only use mininterval
# unit_scale: use international scale for the units (k, M, m, etc...)
# bar_format: specify the bar format, default is '{l_bar}{bar}{r_bar}'. It can impact the performance if you ask for complicate bar format
# unit_divisor: [default: 1000], ignored unless `unit_scale` is True
# ncols: The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound.
for l in range(0, tot):
if ((l-1) % 10) == 0:
bar.update(10)
if l % 1000 == 0:
bar.write('to print something without duplicate the progress bar (if you are using tqdm.tqdm instead of tqdm.tqdm_notebook)')
print('or use the simple print if you are using tqdm.tqdm_notebook')
time.sleep(0.001)
# with this extension you can use tqdm_notebook().pandas(...) instead of tqdm.pandas(...)
from tqdm import tqdm_notebook
!jupyter nbextension enable --py --sys-prefix widgetsnbextension
import pandas as pd
import numpy as np
import time
df = pd.DataFrame(np.random.randint(0, int(1e8), (100, 3)))
# Create and register a new `tqdm` instance with `pandas`
# (can use tqdm_gui, optional kwargs, etc.)
print('set tqdm_notebook for pandas, show the bar')
tqdm_notebook().pandas()
# Now you can use `progress_apply` instead of `apply`
print('example usage of progressbar in a groupby pandas statement')
df_g = df.groupby(0).progress_apply(lambda x: time.sleep(0.01))
print('example usage of progressbar in an apply pandas statement')
df_a = df.progress_apply(lambda x: time.sleep(0.01))
Explanation: Progress Bar
There are many packages to create a progress bar in python, the one I use is tqdm
- tqdm: progress bar in a pandas DataFrame progress_apply (tqdm states that it will not noticeably slow pandas down) https://pypi.python.org/pypi/tqdm
others are:
- progressbar: with each iterable (mainly for) https://pypi.python.org/pypi/progressbar2
- https://github.com/niltonvolpato/python-progressbar
End of explanation
def plot_classcolor(df, x='y', y='x', hue=None):
sns.lmplot(x, y, data=df, hue=hue, fit_reg=False)
    plt.title("({} vs {})".format(y, x))
plt.show()
plot_classcolor(df, 'sepal_length', 'sepal_width', hue='species')
def plot_histo_per_class(df, var, target):
t_list = df[target].unique()
for t in t_list:
sns.distplot(
df[df[target] == t][var], kde=False, norm_hist=True, label=str(t))
    plt.legend()
    plt.show()
plot_histo_per_class(df, 'sepal_length', "species_id")
Explanation: Check separations by histogram and scatter plot
End of explanation
def plotLift(df, features, target, ascending=False, multiclass_level=None):
    """Plot the Lift function for all the features.
    Ascending can be a list of the same length as features or a single boolean value.
    For the multiclass case you can give the value of a class and the lift is calculated
    considering the selected class vs all the others."""
    if multiclass_level != None:
        df = df[features+[target]].copy()
        # build the binary target with a mask so the two assignments do not interfere
        mask = (df[target] == multiclass_level)
        df.loc[mask, target] = 1
        df.loc[~mask, target] = 0
npoints = 100
n = len(df)
st = n / npoints
df_shuffled = df.sample(frac=1)
flat = np.array([[(i * st) / n, df_shuffled[0:int(i * st)][target].sum()]
for i in range(1, npoints + 1)])
flat = flat.transpose()
to_leg = []
if not isinstance(features, list):
features = [features]
if not isinstance(ascending, list):
ascending = [ascending for i in features]
for f, asc in zip(features, ascending):
a = df[[f, target]].sort_values(f, ascending=asc)
b = np.array([[(i * st) / n, a[0:int(i * st)][target].sum()]
for i in range(1, npoints + 1)])
b = b.transpose()
to_leg.append(plt.plot(b[0], b[1], label=f)[0])
to_leg.append(plt.plot(flat[0], flat[1], label="no_gain")[0])
plt.legend(handles=to_leg, loc=4)
plt.xlabel('faction of data', fontsize=18)
plt.ylabel(target+' (cumulative sum)', fontsize=16)
plt.show()
# Lift for regression
titanic = sns.load_dataset("titanic")
plotLift(titanic, ['sibsp', 'survived', 'class'], 'fare', ascending=[False,False, True])
# Lift plot example for multiclass
plotLift(
df, ['sepal_length', 'sepal_width', 'petal_length'],
'species_id',
ascending=[False, True, False],
multiclass_level=3)
Explanation: Plot Cumulative Lift
End of explanation
def plot_var_imp_skitlearn(features, clf_fit):
    """Plot the variable importances for a fitted scikit-learn model."""
my_ff = np.array(features)
importances = clf_fit.feature_importances_
indices = np.argsort(importances)
pos = np.arange(len(my_ff[indices])) + .5
plt.figure(figsize=(20, 0.75 * len(my_ff[indices])))
plt.barh(pos, importances[indices], align='center')
plt.yticks(pos, my_ff[indices], size=25)
plt.xlabel('rank')
plt.title('Feature importances', size=25)
plt.grid(True)
plt.show()
importance_dict = dict(zip(my_ff[indices], importances[indices]))
return importance_dict
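# Example usage (a sketch): fit a GradientBoostingClassifier on the iris
# dataframe used in the sections above and plot its feature importances.
# Assumes df with the 'sepal_length', ..., 'species_id' columns already exists.
from sklearn.ensemble import GradientBoostingClassifier

iris_features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf_fit = clf.fit(df[iris_features], df['species_id'])
imp = plot_var_imp_skitlearn(iris_features, clf_fit)
print(imp)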
Explanation: GBM scikit-learn
End of explanation
import operator

import xgboost
#### Correct version (the one used at work)
def plot_var_imp_xgboost(model, mode='gain', ntop=-1):
    """Plot the variable importances for an xgboost model, where mode = ['weight','gain','cover']
    'weight' - the number of times a feature is used to split the data across all trees.
    'gain' - the average gain of the feature when it is used in trees
    'cover' - the average coverage of the feature when it is used in trees"""
importance = model.get_score(importance_type=mode)
importance = sorted(
importance.items(), key=operator.itemgetter(1), reverse=True)
if ntop == -1: ntop = len(importance)
importance = importance[0:ntop]
my_ff = np.array([i[0] for i in importance])
imp = np.array([i[1] for i in importance])
indices = np.argsort(imp)
pos = np.arange(len(my_ff[indices])) + .5
plt.figure(figsize=(20, 0.75 * len(my_ff[indices])))
plt.barh(pos, imp[indices], align='center')
plt.yticks(pos, my_ff[indices], size=25)
plt.xlabel('rank')
plt.title('Feature importances (' + mode + ')', size=25)
plt.grid(True)
plt.show()
return
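# Example usage (a sketch): train a small booster on the iris dataframe used
# above and plot the gain-based importances. The parameters are illustrative only.
iris_features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
dtrain = xgboost.DMatrix(df[iris_features], label=df['species_id'])
params = {'objective': 'multi:softmax', 'num_class': 4, 'max_depth': 3, 'eta': 0.1}
bst = xgboost.train(params, dtrain, num_boost_round=50)
plot_var_imp_xgboost(bst, mode='gain')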
Explanation: Xgboost
To install xgboost: https://github.com/dmlc/xgboost/tree/master/python-package
- conda install py-xgboost (pip install xgboost)
End of explanation
import lightgbm as lgb
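# Minimal usage sketch: the sklearn-style API of LightGBM, so the
# plot_var_imp_skitlearn helper defined above can be reused on the fitted model.
# Assumes the iris dataframe df from the previous sections.
iris_features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
lgb_clf = lgb.LGBMClassifier(n_estimators=100, num_leaves=15)
lgb_clf.fit(df[iris_features], df['species_id'])
plot_var_imp_skitlearn(iris_features, lgb_clf)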
Explanation: LightGBM
New way to install the package:
- pip install lightgbm
Old way to install the package
- build the package from https://github.com/Microsoft/LightGBM/wiki/Installation-Guide
- add the library to python using: python setup.py install inside the python package of the github clone
Problem: if you build the package with the system (Linux) compiler and then use it from the Anaconda environment, both need to see the same C++ runtime.
This is not the default, and to fix it do:
- cd ~/anaconda3/lib
- mv -vf libstdc++.so.6 libstdc++.so.6.old
- ln -s /usr/lib/x86_64-linux-gnu/libstdc++.so.6 ./libstdc++.so.6
Now the shared library seen from Anaconda is the same one that was used when the package was compiled.
These steps were necessary to install LightGBM.
End of explanation
### Correct version (the one used at work)
from sklearn.metrics import roc_curve, precision_recall_curve, auc

def plot_ROC_PrecisionRecall(y_test, y_pred):
    """Plot ROC curve and Precision-Recall plot.
    numpy arrays are required."""
fpr_clf, tpr_clf, _ = roc_curve(y_test, y_pred)
precision, recall, thresholds = precision_recall_curve(y_test, y_pred)
f1 = np.array([2 * p * r / (p + r) for p, r in zip(precision, recall)])
f1[np.isnan(f1)] = 0
t_best_f1 = thresholds[np.argmax(f1)]
roc_auc = auc(fpr_clf, tpr_clf)
plt.figure(figsize=(25, 25))
# plot_ROC
plt.subplot(221)
plt.plot(
fpr_clf,
tpr_clf,
color='r',
lw=2,
label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='-')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
# plot_PrecisionRecall
plt.subplot(222)
plt.plot(
recall, precision, color='r', lw=2, label='Precision-Recall curve')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precison-Recall curve')
plt.legend(loc="lower right")
plt.show()
return {"roc_auc": roc_auc, "t_best_f1": t_best_f1}
def plot_ROC_PR_test_train(y_train, y_test, y_test_pred, y_train_pred):
    """Plot ROC and Precision-Recall curve for test and train.
    Return the auc for test and train."""
roc_auc_test = plot_ROC_PrecisionRecall(y_test, y_test_pred)
roc_auc_train = plot_ROC_PrecisionRecall(y_train, y_train_pred)
return roc_auc_test, roc_auc_train
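# Example usage (a sketch): a quick binary problem built from the iris
# dataframe used above (class 1 vs the rest) scored with logistic regression.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

iris_features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
X_tr, X_te, y_tr, y_te = train_test_split(df[iris_features].values,
                                          (df['species_id'] == 1).astype(int).values,
                                          test_size=0.3)
logreg = LogisticRegression().fit(X_tr, y_tr)
res = plot_ROC_PrecisionRecall(y_te, logreg.predict_proba(X_te)[:, 1])
print(res)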
Explanation: Control plots: ROC, Precision-Recall, ConfusionMatrix, top_k, classification report,
End of explanation
### Bayesian Optimization
# https://github.com/fmfn/BayesianOptimization
from bayes_opt import BayesianOptimization
import xgboost as xgb
def xgb_evaluate_gen(xg_train, xg_test, watchlist, num_rounds):
    """Create the function to be optimized (example for xgboost)."""
params = { 'eta': 0.1, 'objective':'binary:logistic','silent': 1, 'eval_metric': 'auc' }
def xgb_evaluate(min_child_weight,colsample_bytree,max_depth,subsample,gamma,alpha):
        """Return the function to be maximized by the Bayesian Optimization,
        where the inputs are the parameters to be optimized and the output the
        evaluation_metric on test set."""
params['min_child_weight'] = int(round(min_child_weight))
        params['colsample_bytree'] = max(min(colsample_bytree, 1), 0)
params['max_depth'] = int(round(max_depth))
params['subsample'] = max(min(subsample, 1), 0)
params['gamma'] = max(gamma, 0)
params['alpha'] = max(alpha, 0)
#cv_result = xgb.cv(params, xg_train, num_boost_round=num_rounds, nfold=5,
# seed=random_state, callbacks=[xgb.callback.early_stop(25)]
model_temp = xgb.train(params, dtrain=xg_train, num_boost_round=num_rounds,
evals=watchlist, early_stopping_rounds=15, verbose_eval=False)
# return -cv_result['test-merror-mean'].values[-1]
return float(str(model_temp.eval(xg_test)).split(":")[1][0:-1])
return xgb_evaluate
def go_with_BayesianOptimization(xg_train, xg_test, watchlist, num_rounds = 1,
num_iter = 10, init_points = 10, acq='ucb'):
    """Run the Bayesian Optimization for xgboost. acq = 'ucb', 'ei', 'poi'"""
xgb_func = xgb_evaluate_gen(xg_train, xg_test, watchlist, num_rounds)
xgbBO = BayesianOptimization(xgb_func, {'min_child_weight': (1, 50),
'colsample_bytree': (0.5, 1),
'max_depth': (5, 15),
'subsample': (0.5, 1),
'gamma': (0, 2),
'alpha': (0, 2),
})
xgbBO.maximize(init_points=init_points, n_iter=num_iter, acq=acq) # poi, ei, ucb
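# Grid search and random search with scikit-learn (minimal sketches on the
# iris dataframe used above; the estimator and parameter ranges are illustrative only).
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from scipy.stats import randint

iris_features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
X_grid, y_grid = df[iris_features], df['species_id']

param_grid = {'max_depth': [2, 3, 5], 'n_estimators': [50, 100, 200]}
grid = GridSearchCV(GradientBoostingClassifier(), param_grid, cv=5)
grid.fit(X_grid, y_grid)
print(grid.best_params_, grid.best_score_)

param_dist = {'max_depth': randint(2, 8), 'n_estimators': randint(50, 300)}
rnd = RandomizedSearchCV(GradientBoostingClassifier(), param_dist, n_iter=20, cv=5)
rnd.fit(X_grid, y_grid)
print(rnd.best_params_, rnd.best_score_)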
Explanation: Tuning parameters of a model
Grid search with scikit-learn
Random search with scikit-learn
Bayesian Optimization Search
https://github.com/fmfn/BayesianOptimization
Use a params dict to pass the different parameters to the train function.
TODO: rewrite this without xgboost, or add a simpler, more general example.
End of explanation |
9,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading packages
Step1: Discrete Random Variables
In this section we show a few example of discrete random variables using Python.
The documentation for these routines can be found at
Step2: Continuous Random Variables
The documentation for these routines can be found at
Step3: http
Step4: Now we can look at the histograms of some of our data from Case Study 2.
Step5: Central limit theorem
Here we show an example of the central limit theorem. You can play around with "numberOfDistributions" and "numberOfSamples" to see how quickly this converges to something that looks Gaussian.
Step6: Linear Algebra
Some basic ideas in Linear Algebra and how you can use them in Python.
Step7: But matrix inversion can be very expensive.
Step8: Something slightly more advanced
Step9: The sparsity structure of A.
Step10: Descriptive statistics
Pandas provides many routines for computing statistics.
Step11: But empirical measures are not always good approximations of the true properties of the distribution.
Step12: Playing around with data | Python Code:
import numpy as np
import matplotlib.pylab as py
import pandas as pa
import scipy.stats as st
np.set_printoptions(precision=2)
%matplotlib inline
Explanation: Loading packages
End of explanation
X=st.bernoulli(p=0.3)
X.rvs(100)
# Note that "high" is not included.
X=st.randint(low=1,high=5)
X.rvs(100)
Explanation: Discrete Random Variables
In this section we show a few examples of discrete random variables using Python.
The documentation for these routines can be found at:
http://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html
End of explanation
XUniform=st.uniform(loc=0.7,scale=0.3);
# "bins" tells you how many bars to use
# "normed" says to turn the counts into probability densities
py.hist(XUniform.rvs(1000000),bins=20,normed=True);
x = np.linspace(-0.1,1.1,100)
py.plot(x,XUniform.pdf(x))
#py.savefig('Figures/uniformPDF.png')
py.plot(XUniform.cdf(x))
#py.savefig('Figures/uniformCDF.png')
XNormal=st.norm(loc=0,scale=1);
# "bins" tells you how many bars to use
# "normed" says to turn the counts into probability densities
py.hist(XNormal.rvs(1000),bins=100,normed=True);
x = np.linspace(-3,3,100)
py.plot(x,XNormal.pdf(x))
#py.savefig('Figures/normalPDF.png')
Explanation: Continuous Random Variables
The documentation for these routines can be found at:
http://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html
End of explanation
py.plot(XNormal.cdf(x))
#py.savefig('Figures/normalCDF.png')
Explanation: http://en.wikipedia.org/wiki/Carl_Friedrich_Gauss
End of explanation
data = pa.read_hdf('data.h5','movies')
data
data['title'][100000]
X=data.pivot_table('rating',index='timestamp',aggfunc='count')
X.plot()
# Warning: Some versions of Pandas use "index" and "columns", some use "rows" and "cols"
X=data.pivot_table('rating',index='title',aggfunc='sum')
#X=data.pivot_table('rating',rows='title',aggfunc='sum')
X
X.hist()
# Warning: Some versions of Pandas use "index" and "columns", some use "rows" and "cols"
X=data.pivot_table('rating',index='occupation',aggfunc='sum')
#X=data.pivot_table('rating',rows='occupation',aggfunc='sum')
X
Explanation: Now we can look at the histograms of some of our data from Case Study 2.
End of explanation
numberOfDistributions = 100
numberOfSamples = 1000
XTest = st.uniform(loc=0,scale=1);
# The same thing works with many distributions.
#XTest = st.lognorm(s=1.0);
XCLT=np.zeros([numberOfSamples])
for i in range(numberOfSamples):
for j in range(numberOfDistributions):
XCLT[i] += XTest.rvs()
XCLT[i] = XCLT[i]/numberOfDistributions
py.hist(XCLT,normed=True)
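# A vectorized alternative (sketch): draw all samples at once and average
# along one axis; this is much faster than the double loop above.
XCLT_fast = XTest.rvs(size=(numberOfSamples, numberOfDistributions)).mean(axis=1)
py.hist(XCLT_fast, bins=30, normed=True)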
Explanation: Central limit theorem
Here we show an example of the central limit theorem. You can play around with "numberOfDistributions" and "numberOfSamples" to see how quickly this converges to something that looks Gaussian.
End of explanation
import numpy as np
a=np.array([1,2,3])
a
A=np.matrix(np.random.randint(1,10,size=[3,3]))
A
x=np.matrix([[1],[2],[3]])
print(x)
print(x.T)
a*a
np.dot(a,a)
x.T*x
A*x
b = np.matrix([[5],[6],[7]])
b
Ai = np.linalg.inv(A)
print(A)
print(Ai)
A*Ai
Ai*A
xHat = Ai*b
xHat
print(A*xHat)
print(b)
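# In practice, prefer np.linalg.solve over forming the inverse explicitly;
# it solves A x = b directly and is faster and more numerically stable.
xSolve = np.linalg.solve(A, b)
print(xSolve)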
Explanation: Linear Algebra
Some basic ideas in Linear Algebra and how you can use them in Python.
End of explanation
sizes = range(100,1000,200)
times = np.zeros(len(sizes))
for i in range(len(sizes)):
A = np.random.random(size=[sizes[i],sizes[i]])
x = %timeit -o np.linalg.inv(A)
times[i] = x.best
py.plot(sizes,times)
Explanation: But matrix inversion can be very expensive.
End of explanation
from scipy.sparse.linalg import spsolve
from scipy.sparse import rand,eye
mySize = 1000
A=rand(mySize,mySize,0.001)+eye(mySize)
b=np.random.random(size=[mySize])
Explanation: Something slightly more advanced: Sparse matrices.
Sparse matrices (those with lots of 0s) can often be worked with much more efficiently than general dense matrices handled by standard methods.
End of explanation
py.spy(A,markersize=0.1)
dense = %timeit -o np.linalg.solve(A.todense(),b)
sparse = %timeit -o spsolve(A,b)
dense.best/sparse.best
Explanation: The sparsity structure of A.
End of explanation
XNormal=st.norm(loc=0.7,scale=2);
x = XNormal.rvs(1000)
print(np.mean(x))
print(np.std(x))
print(np.var(x))
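# The same summaries via pandas (imported above as pa); describe() gives the
# count, mean, std and quartiles in one call.
pa.Series(x).describe()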
Explanation: Descriptive statistics
Pandas provides many routines for computing statistics.
End of explanation
sizes = np.arange(16)+1
errors = np.zeros(16)
for i in range(16):
x = XNormal.rvs(2**i)
errors[i] = np.abs(0.7-np.mean(x))
py.plot(sizes,errors)
py.plot(sizes,2/np.sqrt(sizes))
py.plot(sizes,2*2/np.sqrt(sizes),'r')
#py.savefig('Figures/errorInMean.png')
Explanation: But empirical measures are not always good approximations of the true properties of the distribution.
End of explanation
data.pivot_table?
X=data.pivot_table('rating',index='title',aggfunc='mean')
#X=data.pivot_table('rating',rows='title',aggfunc='mean')
X.hist()
X=data.pivot_table('rating',index='title',columns='gender',aggfunc='mean')
#X=data.pivot_table('rating',rows='title',cols='gender',aggfunc='mean')
py.subplot(1,2,1)
X['M'].hist()
py.subplot(1,2,2)
X['F'].hist()
py.plot(X['M'],X['F'],'.')
X.cov()
X.corr()
X=data.pivot_table('rating',index='occupation',columns='gender',aggfunc='mean')
#X=data.pivot_table('rating',rows='occupation',cols='gender',aggfunc='mean')
X
Explanation: Playing around with data
End of explanation |
9,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create dataframe
Step2: Make plot | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Title: Pie Chart In MatPlotLib
Slug: matplotlib_pie_chart
Summary: Pie Chart In MatPlotLib
Date: 2016-05-01 12:00
Category: Python
Tags: Data Visualization
Authors: Chris Albon
Based on: Sebastian Raschka.
Preliminaries
End of explanation
raw_data = {'officer_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'jan_arrests': [4, 24, 31, 2, 3],
'feb_arrests': [25, 94, 57, 62, 70],
'march_arrests': [5, 43, 23, 23, 51]}
df = pd.DataFrame(raw_data, columns = ['officer_name', 'jan_arrests', 'feb_arrests', 'march_arrests'])
df
# Create a column with the total arrests for each officer
df['total_arrests'] = df['jan_arrests'] + df['feb_arrests'] + df['march_arrests']
df
Explanation: Create dataframe
End of explanation
# Create a list of colors (from iWantHue)
colors = ["#E13F29", "#D69A80", "#D63B59", "#AE5552", "#CB5C3B", "#EB8076", "#96624E"]
# Create a pie chart
plt.pie(
    # using the data in total_arrests
df['total_arrests'],
# with the labels being officer names
labels=df['officer_name'],
# with no shadows
shadow=False,
# with colors
colors=colors,
# with one slide exploded out
explode=(0, 0, 0, 0, 0.15),
# with the start angle at 90%
startangle=90,
# with the percent listed as a fraction
autopct='%1.1f%%',
)
# Make the aspect ratio equal so the pie is drawn as a circle
plt.axis('equal')
# View the plot
plt.tight_layout()
plt.show()
Explanation: Make plot
End of explanation |
9,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
So far, you've learned how to use several SQL clauses. For instance, you know how to use SELECT to pull specific columns from a table, along with WHERE to pull rows that meet specified criteria. You also know how to use aggregate functions like COUNT(), along with GROUP BY to treat multiple rows as a single group.
Now you'll learn how to change the order of your results using the ORDER BY clause, and you'll explore a popular use case by applying ordering to dates. To illustrate what you'll learn in this tutorial, we'll work with a slightly modified version of our familiar pets table.
ORDER BY
ORDER BY is usually the last clause in your query, and it sorts the results returned by the rest of your query.
Notice that the rows are not ordered by the ID column. We can quickly remedy this with the query below.
The ORDER BY clause also works for columns containing text, where the results show up in alphabetical order.
You can reverse the order using the DESC argument (short for 'descending'). The next query sorts the table by the Animal column, where the values that are last in alphabetic order are returned first.
Dates
Next, we'll talk about dates, because they come up very frequently in real-world databases. There are two ways that dates can be stored in BigQuery
Step2: Let's use the table to determine how the number of accidents varies with the day of the week. Since
Step3: As usual, we run it as follows | Python Code:
#$HIDE_INPUT$
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "nhtsa_traffic_fatalities" dataset
dataset_ref = client.dataset("nhtsa_traffic_fatalities", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "accident_2015" table
table_ref = dataset_ref.table("accident_2015")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the "accident_2015" table
client.list_rows(table, max_results=5).to_dataframe()
Explanation: Introduction
So far, you've learned how to use several SQL clauses. For instance, you know how to use SELECT to pull specific columns from a table, along with WHERE to pull rows that meet specified criteria. You also know how to use aggregate functions like COUNT(), along with GROUP BY to treat multiple rows as a single group.
Now you'll learn how to change the order of your results using the ORDER BY clause, and you'll explore a popular use case by applying ordering to dates. To illustrate what you'll learn in this tutorial, we'll work with a slightly modified version of our familiar pets table.
ORDER BY
ORDER BY is usually the last clause in your query, and it sorts the results returned by the rest of your query.
Notice that the rows are not ordered by the ID column. We can quickly remedy this with the query below.
The ORDER BY clause also works for columns containing text, where the results show up in alphabetical order.
You can reverse the order using the DESC argument (short for 'descending'). The next query sorts the table by the Animal column, where the values that are last in alphabetic order are returned first.
Dates
Next, we'll talk about dates, because they come up very frequently in real-world databases. There are two ways that dates can be stored in BigQuery: as a DATE or as a DATETIME.
The DATE format has the year first, then the month, and then the day. It looks like this:
YYYY-[M]M-[D]D
YYYY: Four-digit year
[M]M: One or two digit month
[D]D: One or two digit day
So 2019-01-10 is interpreted as January 10, 2019.
The DATETIME format is like the date format ... but with time added at the end.
EXTRACT
Often you'll want to look at part of a date, like the year or the day. You can do this with EXTRACT. We'll illustrate this with a slightly different table, called pets_with_date.
The query below returns two columns, where column Day contains the day corresponding to each entry the Date column from the pets_with_date table:
SQL is very smart about dates, and we can ask for information beyond just extracting part of the cell. For example, this query returns one column with just the week in the year (between 1 and 53) for each date in the Date column:
You can find all the functions you can use with dates in BigQuery in this documentation under "Date and time functions".
Example: Which day of the week has the most fatal motor accidents?
Let's use the US Traffic Fatality Records database, which contains information on traffic accidents in the US where at least one person died.
We'll investigate the accident_2015 table. Here is a view of the first few rows. (We have hidden the corresponding code. To take a peek, click on the "Code" button below.)
End of explanation
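# For reference, sketches of the kinds of queries described above for the toy
# pets tables (the dataset/table names below are illustrative placeholders,
# not a real public dataset you can query).
sort_by_id_query = """
                   SELECT ID, Name, Animal
                   FROM `your-project.pet_records.pets`
                   ORDER BY ID
                   """
reverse_sort_query = """
                     SELECT ID, Name, Animal
                     FROM `your-project.pet_records.pets`
                     ORDER BY Animal DESC
                     """
extract_day_query = """
                    SELECT Name, EXTRACT(DAY FROM Date) AS Day
                    FROM `your-project.pet_records.pets_with_date`
                    """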
# Query to find out the number of accidents for each day of the week
query = """
        SELECT COUNT(consecutive_number) AS num_accidents,
               EXTRACT(DAYOFWEEK FROM timestamp_of_crash) AS day_of_week
        FROM `bigquery-public-data.nhtsa_traffic_fatalities.accident_2015`
        GROUP BY day_of_week
        ORDER BY num_accidents DESC
        """
Explanation: Let's use the table to determine how the number of accidents varies with the day of the week. Since:
- the consecutive_number column contains a unique ID for each accident, and
- the timestamp_of_crash column contains the date of the accident in DATETIME format,
we can:
- EXTRACT the day of the week (as day_of_week in the query below) from the timestamp_of_crash column, and
- GROUP BY the day of the week, before we COUNT the consecutive_number column to determine the number of accidents for each day of the week.
Then we sort the table with an ORDER BY clause, so the days with the most accidents are returned first.
End of explanation
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 1 GB)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**9)
query_job = client.query(query, job_config=safe_config)
# API request - run the query, and convert the results to a pandas DataFrame
accidents_by_day = query_job.to_dataframe()
# Print the DataFrame
accidents_by_day
Explanation: As usual, we run it as follows:
End of explanation |
9,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing interstellar reddening and calculating synthetic photometry
Authors
Kristen Larson, Lia Corrales, Stephanie T. Douglas, Kelle Cruz
Input from Emir Karamehmetoglu, Pey Lian Lim, Karl Gordon, Kevin Covey
Learning Goals
Investigate extinction curve shapes
Deredden spectral energy distributions and spectra
Calculate photometric extinction and reddening
Calculate synthetic photometry for a dust-reddened star by combining dust_extinction and synphot
Convert from frequency to wavelength with astropy.unit equivalencies
Unit support for plotting with astropy.visualization
Keywords
dust extinction, synphot, astroquery, units, photometry, extinction, physics, observational astronomy
Companion Content
Bessell & Murphy (2012)
Summary
In this tutorial, we will look at some extinction curves from the literature, use one of those curves to deredden an observed spectrum, and practice invoking a background source flux in order to calculate magnitudes from an extinction model.
The primary libraries we'll be using are dust_extinction and synphot, which are Astropy affiliated packages.
We recommend installing the two packages in this fashion
Step1: Introduction
Dust in the interstellar medium (ISM) extinguishes background starlight. The wavelength dependence of the extinction is such that short-wavelength light is extinguished more than long-wavelength light, and we call this effect reddening.
If you're new to extinction, here is a brief introduction to the types of quantities involved.
The fractional change to the flux of starlight is
$$
\frac{dF_\lambda}{F_\lambda} = -\tau_\lambda
$$
where $\tau$ is the optical depth and depends on wavelength. Integrating along the line of sight, the resultant flux is an exponential function of optical depth,
$$
\tau_\lambda = -\ln\left(\frac{F_\lambda}{F_{\lambda,0}}\right).
$$
With an eye to how we define magnitudes, we usually change the base from $e$ to 10,
$$
\tau_\lambda = -2.303\log\left(\frac{F_\lambda}{F_{\lambda,0}}\right),
$$
and define an extinction $A_\lambda = 1.086 \,\tau_\lambda$ so that
$$
A_\lambda = -2.5\log\left(\frac{F_\lambda}{F_{\lambda,0}}\right).
$$
There are two basic take-home messages from this derivation
Step2: Astronomers studying the ISM often display extinction curves against inverse wavelength (wavenumber) to show the ultraviolet variation, as we do here. Infrared extinction varies much less and approaches zero at long wavelength in the absence of wavelength-independent, or grey, extinction.
Example 2
Step3: We read the downloaded files into an astropy table
Step4: The .quantity extension in the next lines will read the Table columns into Quantity vectors. Quantities keep the units of the Table column attached to the numpy array values.
Step5: Now, we use astroquery again to fetch photometry from Simbad to go with the IUE spectrum
Step6: To convert the photometry to flux, we look up some properties of the photometric passbands, including the flux of a magnitude zero star through the each passband, also known as the zero-point of the passband.
Step7: The zero-points that we found for the optical passbands are not in the same units as the IUE fluxes. To make matters worse, the zero-point fluxes are $F_\nu$ and the IUE fluxes are $F_\lambda$. To convert between them, the wavelength is needed. Fortunately, astropy provides an easy way to make the conversion with equivalencies
Step8: Now we can convert from photometry to flux using the definition of magnitude
Step9: Using astropy quantities allow us to take advantage of astropy's unit support in plotting. Calling astropy.visualization.quantity_support explicitly turns the feature on. Then, when quantity objects are passed to matplotlib plotting functions, the axis labels are automatically labeled with the unit of the quantity. In addition, quantities are converted automatically into the same units when combining multiple plots on the same axes.
Step10: Finally, we initialize the extinction model, choosing values $R_V = 5$ and $E_{B-V} = 0.5$. This star is famous in the ISM community for having large-$R_V$ dust in the line of sight.
Step11: To extinguish (redden) a spectrum, multiply by the ext.extinguish function. To unextinguish (deredden), divide by the same ext.extinguish, as we do here
Step12: Notice that, by dereddening the spectrum, the absorption feature at 2175 Angstrom is removed. This feature can also be seen as the prominent bump in the extinction curves in Example 1. That we have smoothly removed the 2175 Angstrom feature suggests that the values we chose, $R_V = 5$ and $E_{B-V} = 0.5$, are a reasonable model for the foreground dust.
Those experienced with dereddening should notice that that dust_extinction returns $A_\lambda/A_V$, while other routines like the IDL fm_unred procedure often return $A_\lambda/E_{B-V}$ by default and need to be divided by $R_V$ in order to compare directly with dust_extinction.
Example 3
Step13: If you are running this with your own python, see the synphot documentation on how to install your own copy of the necessary files.
Next, let's make a background flux to which we will apply extinction. Here we make a 10,000 K blackbody using the model mechanism from within synphot and normalize it to $V$ = 10 in the Vega-based magnitude system.
Step14: Now we initialize the extinction model and choose an extinction of $A_V$ = 2. To get the dust_extinction model working with synphot, we create a wavelength array and make a spectral element with the extinction model as a lookup table.
Step15: Synthetic photometry refers to modeling an observation of a star by multiplying the theoretical model for the astronomical flux through a certain filter response function, then integrating.
Step16: Next, synphot performs the integration and computes magnitudes in the Vega system.
Step17: This is a good check for us to do. We normalized our spectrum to $V$ = 10 mag and added 2 mag of visual extinction, so the synthetic photometry procedure should reproduce these chosen values, and it does. Now we are ready to find the extinction in other passbands.
We calculate the new photometry for the rest of the Johnson optical and the Bessell infrared filters. We calculate extinction $A = \Delta m$ and plot color excess, $E(\lambda - V) = A_\lambda - A_V$.
Notice that synphot calculates the effective wavelength of the observations for us, which is very useful for plotting the results. We show reddening with the model extinction curve for comparison in the plot. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import astropy.units as u
from astropy.table import Table
from dust_extinction.parameter_averages import CCM89, F99
from synphot import units, config
from synphot import SourceSpectrum,SpectralElement,Observation,ExtinctionModel1D
from synphot.models import BlackBodyNorm1D
from synphot.spectrum import BaseUnitlessSpectrum
from synphot.reddening import ExtinctionCurve
from astroquery.simbad import Simbad
from astroquery.mast import Observations
import astropy.visualization
Explanation: Analyzing interstellar reddening and calculating synthetic photometry
Authors
Kristen Larson, Lia Corrales, Stephanie T. Douglas, Kelle Cruz
Input from Emir Karamehmetoglu, Pey Lian Lim, Karl Gordon, Kevin Covey
Learning Goals
Investigate extinction curve shapes
Deredden spectral energy distributions and spectra
Calculate photometric extinction and reddening
Calculate synthetic photometry for a dust-reddened star by combining dust_extinction and synphot
Convert from frequency to wavelength with astropy.unit equivalencies
Unit support for plotting with astropy.visualization
Keywords
dust extinction, synphot, astroquery, units, photometry, extinction, physics, observational astronomy
Companion Content
Bessell & Murphy (2012)
Summary
In this tutorial, we will look at some extinction curves from the literature, use one of those curves to deredden an observed spectrum, and practice invoking a background source flux in order to calculate magnitudes from an extinction model.
The primary libraries we'll be using are dust_extinction and synphot, which are Astropy affiliated packages.
We recommend installing the two packages in this fashion:
pip install synphot
pip install dust_extinction
This tutorial requires v0.7 or later of dust_extinction. To ensure that all commands work properly, make sure you have the correct version installed. If you have v0.6 or earlier installed, run the following command to upgrade
pip install dust_extinction --upgrade
End of explanation
# Create wavelengths array.
wav = np.arange(0.1, 3.0, 0.001)*u.micron
for model in [CCM89, F99]:
for R in (2.0,3.0,4.0):
# Initialize the extinction model
ext = model(Rv=R)
plt.plot(1/wav, ext(wav), label=model.name+' R='+str(R))
plt.xlabel('$\lambda^{-1}$ ($\mu$m$^{-1}$)')
plt.ylabel('A($\lambda$) / A(V)')
plt.legend(loc='best')
plt.title('Some Extinction Laws')
plt.show()
Explanation: Introduction
Dust in the interstellar medium (ISM) extinguishes background starlight. The wavelength dependence of the extinction is such that short-wavelength light is extinguished more than long-wavelength light, and we call this effect reddening.
If you're new to extinction, here is a brief introduction to the types of quantities involved.
The fractional change to the flux of starlight is
$$
\frac{dF_\lambda}{F_\lambda} = -\tau_\lambda
$$
where $\tau$ is the optical depth and depends on wavelength. Integrating along the line of sight, the resultant flux is an exponential function of optical depth,
$$
\tau_\lambda = -\ln\left(\frac{F_\lambda}{F_{\lambda,0}}\right).
$$
With an eye to how we define magnitudes, we usually change the base from $e$ to 10,
$$
\tau_\lambda = -2.303\log\left(\frac{F_\lambda}{F_{\lambda,0}}\right),
$$
and define an extinction $A_\lambda = 1.086 \,\tau_\lambda$ so that
$$
A_\lambda = -2.5\log\left(\frac{F_\lambda}{F_{\lambda,0}}\right).
$$
There are two basic take-home messages from this derivation:
Extinction introduces a multiplying factor $10^{-0.4 A_\lambda}$ to the flux.
Extinction is defined relative to the flux without dust, $F_{\lambda,0}$.
Once astropy and the affiliated packages are installed, we can import from them as needed:
Example 1: Investigate Extinction Models
The dust_extinction package provides various models for extinction $A_\lambda$ normalized to $A_V$. The shapes of normalized curves are relatively (and perhaps surprisingly) uniform in the Milky Way. The little variation that exists is often parameterized by the ratio of extinction ($A_V$) to reddening in the blue-visual ($E_{B-V}$),
$$
R_V \equiv \frac{A_V}{E_{B-V}}
$$
where $E_{B-V}$ is differential extinction $A_B-A_V$. In this example, we show the $R_V$-parameterization for the Clayton, Cardelli, & Mathis (1989, CCM) and the Fitzpatrick (1999) models. More model options are available in the dust_extinction documentation.
End of explanation
obsTable = Observations.query_object("HD 147933",radius="1 arcsec")
obsTable_spec=obsTable[obsTable['dataproduct_type']=='spectrum']
obsTable_spec.pprint()
obsids = ['3000022829']
dataProductsByID = Observations.get_product_list(obsids)
manifest = Observations.download_products(dataProductsByID)
Explanation: Astronomers studying the ISM often display extinction curves against inverse wavelength (wavenumber) to show the ultraviolet variation, as we do here. Infrared extinction varies much less and approaches zero at long wavelength in the absence of wavelength-independent, or grey, extinction.
Example 2: Deredden a Spectrum
Here we deredden (unextinguish) the IUE ultraviolet spectrum and optical photometry of the star $\rho$ Oph (HD 147933).
First, we will use astroquery to fetch the archival IUE spectrum from MAST:
End of explanation
t_lwr = Table.read('./mastDownload/IUE/lwr05639/lwr05639mxlo_vo.fits')
print(t_lwr)
Explanation: We read the downloaded files into an astropy table:
End of explanation
wav_UV = t_lwr['WAVE'][0,].quantity
UVflux = t_lwr['FLUX'][0,].quantity
Explanation: The .quantity extension in the next lines will read the Table columns into Quantity vectors. Quantities keep the units of the Table column attached to the numpy array values.
End of explanation
custom_query = Simbad()
custom_query.add_votable_fields('fluxdata(U)','fluxdata(B)','fluxdata(V)')
phot_table=custom_query.query_object('HD 147933')
Umag=phot_table['FLUX_U']
Bmag=phot_table['FLUX_B']
Vmag=phot_table['FLUX_V']
Explanation: Now, we use astroquery again to fetch photometry from Simbad to go with the IUE spectrum:
End of explanation
wav_U = 0.3660 * u.micron
zeroflux_U_nu = 1.81E-23 * u.Watt/(u.m*u.m*u.Hz)
wav_B = 0.4400 * u.micron
zeroflux_B_nu = 4.26E-23 * u.Watt/(u.m*u.m*u.Hz)
wav_V = 0.5530 * u.micron
zeroflux_V_nu = 3.64E-23 * u.Watt/(u.m*u.m*u.Hz)
Explanation: To convert the photometry to flux, we look up some properties of the photometric passbands, including the flux of a magnitude zero star through the each passband, also known as the zero-point of the passband.
End of explanation
zeroflux_U = zeroflux_U_nu.to(u.erg/u.AA/u.cm/u.cm/u.s,
equivalencies=u.spectral_density(wav_U))
zeroflux_B = zeroflux_B_nu.to(u.erg/u.AA/u.cm/u.cm/u.s,
equivalencies=u.spectral_density(wav_B))
zeroflux_V = zeroflux_V_nu.to(u.erg/u.AA/u.cm/u.cm/u.s,
equivalencies=u.spectral_density(wav_V))
Explanation: The zero-points that we found for the optical passbands are not in the same units as the IUE fluxes. To make matters worse, the zero-point fluxes are $F_\nu$ and the IUE fluxes are $F_\lambda$. To convert between them, the wavelength is needed. Fortunately, astropy provides an easy way to make the conversion with equivalencies:
End of explanation
Uflux = zeroflux_U * 10.**(-0.4*Umag)
Bflux = zeroflux_B * 10.**(-0.4*Bmag)
Vflux = zeroflux_V * 10.**(-0.4*Vmag)
Explanation: Now we can convert from photometry to flux using the definition of magnitude:
$$
F=F_0\ 10^{-0.4\, m}
$$
End of explanation
astropy.visualization.quantity_support()
plt.plot(wav_UV,UVflux,'m',label='UV')
plt.plot(wav_V,Vflux,'ko',label='U, B, V')
plt.plot(wav_B,Bflux,'ko')
plt.plot(wav_U,Uflux,'ko')
plt.legend(loc='best')
plt.ylim(0,3E-10)
plt.title('rho Oph')
plt.show()
Explanation: Using astropy quantities allow us to take advantage of astropy's unit support in plotting. Calling astropy.visualization.quantity_support explicitly turns the feature on. Then, when quantity objects are passed to matplotlib plotting functions, the axis labels are automatically labeled with the unit of the quantity. In addition, quantities are converted automatically into the same units when combining multiple plots on the same axes.
End of explanation
Rv = 5.0 # Usually around 3, but about 5 for this star.
Ebv = 0.5
ext = F99(Rv=Rv)
Explanation: Finally, we initialize the extinction model, choosing values $R_V = 5$ and $E_{B-V} = 0.5$. This star is famous in the ISM community for having large-$R_V$ dust in the line of sight.
End of explanation
plt.semilogy(wav_UV,UVflux,'m',label='UV')
plt.semilogy(wav_V,Vflux,'ko',label='U, B, V')
plt.semilogy(wav_B,Bflux,'ko')
plt.semilogy(wav_U,Uflux,'ko')
plt.semilogy(wav_UV,UVflux/ext.extinguish(wav_UV,Ebv=Ebv),'b',
label='dereddened: EBV=0.5, RV=5')
plt.semilogy(wav_V,Vflux/ext.extinguish(wav_V,Ebv=Ebv),'ro',
label='dereddened: EBV=0.5, RV=5')
plt.semilogy(wav_B,Bflux/ext.extinguish(wav_B,Ebv=Ebv),'ro')
plt.semilogy(wav_U,Uflux/ext.extinguish(wav_U,Ebv=Ebv),'ro')
plt.legend(loc='best')
plt.title('rho Oph')
plt.show()
Explanation: To extinguish (redden) a spectrum, multiply by the ext.extinguish function. To unextinguish (deredden), divide by the same ext.extinguish, as we do here:
End of explanation
# Optional, for when the STScI ftp server is not answering:
config.conf.vega_file='http://ssb.stsci.edu/cdbs/calspec/alpha_lyr_stis_008.fits'
config.conf.johnson_u_file='http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_u_004_syn.fits'
config.conf.johnson_b_file='http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_b_004_syn.fits'
config.conf.johnson_v_file='http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_v_004_syn.fits'
config.conf.johnson_r_file='http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_r_003_syn.fits'
config.conf.johnson_i_file='http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_i_003_syn.fits'
config.conf.bessel_j_file='http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_j_003_syn.fits'
config.conf.bessel_h_file='http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_h_004_syn.fits'
config.conf.bessel_k_file='http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_k_003_syn.fits'
u_band = SpectralElement.from_filter('johnson_u')
b_band = SpectralElement.from_filter('johnson_b')
v_band = SpectralElement.from_filter('johnson_v')
r_band = SpectralElement.from_filter('johnson_r')
i_band = SpectralElement.from_filter('johnson_i')
j_band = SpectralElement.from_filter('bessel_j')
h_band = SpectralElement.from_filter('bessel_h')
k_band = SpectralElement.from_filter('bessel_k')
Explanation: Notice that, by dereddening the spectrum, the absorption feature at 2175 Angstrom is removed. This feature can also be seen as the prominent bump in the extinction curves in Example 1. That we have smoothly removed the 2175 Angstrom feature suggests that the values we chose, $R_V = 5$ and $E_{B-V} = 0.5$, are a reasonable model for the foreground dust.
Those experienced with dereddening should notice that that dust_extinction returns $A_\lambda/A_V$, while other routines like the IDL fm_unred procedure often return $A_\lambda/E_{B-V}$ by default and need to be divided by $R_V$ in order to compare directly with dust_extinction.
Example 3: Calculate Color Excess with synphot
Calculating broadband photometric extinction is harder than it might look at first. All we have to do is look up $A_\lambda$ for a particular passband, right? Under the right conditions, yes. In general, no.
Remember that we have to integrate over a passband to get synthetic photometry,
$$
A = -2.5\log\left(\frac{\int W_\lambda F_{\lambda,0} 10^{-0.4A_\lambda} d\lambda}{\int W_\lambda F_{\lambda,0} d\lambda} \right),
$$
where $W_\lambda$ is the fraction of incident energy transmitted through a filter. See the detailed appendix in Bessell & Murphy (2012)
for an excellent review of the issues and common misunderstandings in synthetic photometry.
There is an important point to be made here. The expression above does not simplify any further. Strictly speaking, it is impossible to convert spectral extinction $A_\lambda$ into a magnitude system without knowing the wavelength dependence of the source's original flux across the filter in question. As a special case, if we assume that the source flux is constant in the band (i.e. $F_\lambda = F$), then we can cancel these factors out from the integrals, and extinction in magnitudes becomes the weighted average of the extinction factor across the filter in question. In that special case, $A_\lambda$ at $\lambda_{\rm eff}$ is a good approximation for magnitude extinction.
In this example, we will demonstrate the more general calculation of photometric extinction. We use a blackbody curve for the flux before the dust, apply an extinction curve, and perform synthetic photometry to calculate extinction and reddening in a magnitude system.
First, let's get the filter transmission curves:
End of explanation
# First, create a blackbody at some temperature.
sp = SourceSpectrum(BlackBodyNorm1D, temperature=10000)
# sp.plot(left=1, right=15000, flux_unit='flam', title='Blackbody')
# Get the Vega spectrum as the zero point flux.
vega = SourceSpectrum.from_vega()
# vega.plot(left=1, right=15000)
# Normalize the blackbody to some chosen magnitude, say V = 10.
vmag = 10.
v_band = SpectralElement.from_filter('johnson_v')
sp_norm = sp.normalize(vmag * units.VEGAMAG, v_band, vegaspec=vega)
sp_norm.plot(left=1, right=15000, flux_unit='flam', title='Normed Blackbody')
Explanation: If you are running this with your own python, see the synphot documentation on how to install your own copy of the necessary files.
Next, let's make a background flux to which we will apply extinction. Here we make a 10,000 K blackbody using the model mechanism from within synphot and normalize it to $V$ = 10 in the Vega-based magnitude system.
End of explanation
# Initialize the extinction model and choose the extinction, here Av = 2.
ext = CCM89(Rv=3.1)
Av = 2.
# Create a wavelength array.
wav = np.arange(0.1, 3, 0.001)*u.micron
# Make the extinction model in synphot using a lookup table.
ex = ExtinctionCurve(ExtinctionModel1D,
points=wav, lookup_table=ext.extinguish(wav, Av=Av))
sp_ext = sp_norm*ex
sp_ext.plot(left=1, right=15000, flux_unit='flam',
title='Normed Blackbody with Extinction')
Explanation: Now we initialize the extinction model and choose an extinction of $A_V$ = 2. To get the dust_extinction model working with synphot, we create a wavelength array and make a spectral element with the extinction model as a lookup table.
End of explanation
# "Observe" the star through the filter and integrate to get photometric mag.
sp_obs = Observation(sp_ext, v_band)
sp_obs_before = Observation(sp_norm, v_band)
# sp_obs.plot(left=1, right=15000, flux_unit='flam',
# title='Normed Blackbody with Extinction through V Filter')
Explanation: Synthetic photometry refers to modeling an observation of a star by multiplying the theoretical model for the astronomical flux through a certain filter response function, then integrating.
End of explanation
sp_stim_before = sp_obs_before.effstim(flux_unit='vegamag', vegaspec=vega)
sp_stim = sp_obs.effstim(flux_unit='vegamag', vegaspec=vega)
print('before dust, V =', np.round(sp_stim_before,1))
print('after dust, V =', np.round(sp_stim,1))
# Calculate extinction and compare to our chosen value.
Av_calc = sp_stim - sp_stim_before
print('$A_V$ = ', np.round(Av_calc,1))
Explanation: Next, synphot performs the integration and computes magnitudes in the Vega system.
End of explanation
bands = [u_band,b_band,v_band,r_band,i_band,j_band,h_band,k_band]
for band in bands:
# Calculate photometry with dust:
sp_obs = Observation(sp_ext, band, force='extrap')
obs_effstim = sp_obs.effstim(flux_unit='vegamag', vegaspec=vega)
# Calculate photometry without dust:
sp_obs_i = Observation(sp_norm, band, force='extrap')
obs_i_effstim = sp_obs_i.effstim(flux_unit='vegamag', vegaspec=vega)
# Extinction = mag with dust - mag without dust
# Color excess = extinction at lambda - extinction at V
color_excess = obs_effstim - obs_i_effstim - Av_calc
plt.plot(sp_obs_i.effective_wavelength(), color_excess,'or')
print(np.round(sp_obs_i.effective_wavelength(),1), ',',
np.round(color_excess,2))
# Plot the model extinction curve for comparison
plt.plot(wav,Av*ext(wav)-Av,'--k')
plt.ylim([-2,2])
plt.xlabel('$\lambda$ (Angstrom)')
plt.ylabel('E($\lambda$-V)')
plt.title('Reddening of T=10,000K Background Source with Av=2')
plt.show()
Explanation: This is a good check for us to do. We normalized our spectrum to $V$ = 10 mag and added 2 mag of visual extinction, so the synthetic photometry procedure should reproduce these chosen values, and it does. Now we are ready to find the extinction in other passbands.
We calculate the new photometry for the rest of the Johnson optical and the Bessell infrared filters. We calculate extinction $A = \Delta m$ and plot color excess, $E(\lambda - V) = A_\lambda - A_V$.
Notice that synphot calculates the effective wavelength of the observations for us, which is very useful for plotting the results. We show reddening with the model extinction curve for comparison in the plot.
End of explanation |
9,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Uncertainty Analysis in Bayesian Deep Learning with Tensorflow Probability
Here is astroNN, please take a look if you are interested in astronomy or how neural network applied in astronomy
* Henry Leung - Astronomy student, University of Toronto - henrysky
* Project adviser
Step1: The equation we use neural net to do regression is
$y=x \sin(x)$
Step2: Here, we will generate data with the regression task
Genereate data with three different region to simulate different situation
Step3: First, we will just use a very simple neural network (2 layers, 75 and 50 neurones respectively) to do the regression task without any fancy methods.
Please notice this is just for the sake of demonstartion. In real world application, please add reguarization (Validation, Early Stop, Reduce Learning Rate) and don not train 100 epochs for such simple task.
<br>
Another thing you should keep in mind neural net is a function approximation method, it will only work well with its training data domain, you can see testing data >1.0, the neural network performs poorly as it has never been trained on.
Remember!!! Great Power comes with Great Overfitting (2016, someone on Reddit)
Step4: Second, we will use a 2 layered fully connected neural network with Flipout
Step5: Third, use a single model to get both epistemic and aleatoric uncertainty with variational inference
Please notice you need to apply Dense layer with varational method from Tensorflow Probability.
For more information on the astroNN loss functions (mse_lin_wrapper, mse_var_wrapper) used in here
Step6: Then we will define two custom loss and custom data generator
<br>
The custom loss function for variance prediction will be<br>
$\text{Loss}=\frac{1}{T} \sum_1^T \frac{1}{2} \frac{(y-\hat{y})^{2}}{\sigma^{2}} + \frac{1}{2}\text{log}(\sigma^{2})$
<br>
<br>
But for the sake of numerical stability, its better to make the neural net to predict $\text{s} = \text{log}(\sigma^{2})$ with <br>
$\text{Loss}=\frac{1}{T} \sum_1^T \frac{1}{2} (y-\hat{y})^{2}e^{-\text{s}} + \frac{1}{2}(\text{s})$
Please ensure if you are using these loss functions, please make sure sigma from your data is gaussian distributed.
<br>
<br>
Please notice if you use the first loss, you have to use $softplus$ as activation in the last layer as $\sigma^{2}$ is always positve, and $linear$ or other appropriate activation for second loss as $\text{log}(\sigma^{2})$ can be both positive and negative.
<br>
$\text{Prediction} = \text{Mean from Variational Dense Layers}$
<br>
$\text{Total Variance} = \text{Variance from Variational Dense Layers} + \text{Predictive Variance Output}$
<br>
$\text{Prediction with Error} = \text{Prediction} \pm \sqrt{\text{Total Variance}}$ | Python Code:
%matplotlib inline
%config InlineBackend.figure_format='retina'
import numpy as np
import pylab as plt
import random
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, InputLayer, Activation
from tensorflow.keras import initializers, regularizers
from tensorflow.keras.optimizers import Adam
import tensorflow.keras.backend as K
# To get plot_model to work, you need to install graphviz and pydot_ng
from tensorflow.keras.utils import plot_model
from astroNN.nn.layers import MCDropout
Explanation: Regression Uncertainty Analysis in Bayesian Deep Learning with Tensorflow Probability
Here is astroNN, please take a look if you are interested in astronomy or how neural networks are applied in astronomy
* Henry Leung - Astronomy student, University of Toronto - henrysky
* Project adviser: Jo Bovy - Professor, Department of Astronomy and Astrophysics, University of Toronto - jobovy
* Contact Henry: henrysky.leung [at] utoronto.ca
* This tutorial is created on 17/July/2018 with Tensorflow 1.9.0, Tensorflow Probability 0.2.0
* Updated on 31/Jan/2020 with Tensorflow 2.1.0, Tensorflow Probability 0.9.0
<br>
For more resources on Bayesian Deep Learning with Reparameterization Tricks/Flipout, please refer to README.md
Import everything we need
End of explanation
def gen_function(x):
return(x*np.sin(x))
Explanation: The equation we use neural net to do regression is
$y=x \sin(x)$
End of explanation
# Three different regions
x_1 = np.random.uniform(-1.5, 1.5, 1000)
x_2 = np.random.uniform(-3, -1.5, 500)
x_3 = np.random.uniform(1.5, 3, 500)
# Corresponding answer and add different noise and bias
y_1 = gen_function(x_1) + np.random.normal(0.0, 0.1, size=x_1.shape)
y_2 = gen_function(x_2) + np.random.normal(0.0, 0.3, size=x_2.shape)
y_3 = gen_function(x_3) + np.random.normal(0.0, 0.3, size=x_3.shape)
# Error of the three different regions
y_1_err = np.ones(y_1.shape) * 0.1
y_2_err = np.ones(y_2.shape) * 0.3
y_3_err = np.ones(y_3.shape) * 0.3
y_err = np.hstack((y_1_err,y_2_err,y_3_err)).ravel()
# Combine those 3 regions
x = np.hstack((x_1,x_2,x_3)).ravel()
y = np.hstack((y_1,y_2,y_3)).ravel()
# Mean and standard deviation for normalization and denormalization
x_mean = np.mean(x)
x_std = np.std(x)
y_mean = np.mean(y)
y_std = np.std(y)
# Array to plot the real equation lines
x_true = np.arange(-5.0,5,0.1)
y_true = gen_function(x_true)
# Matplotlib plotting
plt.figure(figsize=(10, 7), dpi=100)
plt.title('Training Data')
plt.scatter(x, y, s=0.5, label='Training Data')
plt.plot(x_true, y_true, color='red', label='Equation')
plt.xlabel('Training Point (Data)')
plt.ylabel('Training Point (Answer)')
plt.legend(loc='best')
plt.show()
def normalize(data, mean, std):
return (data-mean) / std
def denormalize(data, mean, std):
return (data * std) + mean
Explanation: Here, we will generate data for the regression task
Generate data in three different regions to simulate different situations
End of explanation
def model_regression(num_hidden):
# Define Keras model for regression
model = Sequential()
model.add(InputLayer(batch_input_shape=((None, 1))))
model.add(Dense(units=num_hidden[0], kernel_initializer='he_normal', activation='relu'))
model.add(Dense(units=num_hidden[1], kernel_initializer='he_normal', activation='relu'))
model.add(Dense(units=1, activation="linear"))
return model
# Define some parameters
optimizer = Adam(lr=.005)
batch_size = 64
# Compile Keras model
model = model_regression(num_hidden=[75,50])
model.compile(loss='mse', optimizer=optimizer)
model.fit(normalize(x, x_mean, x_std), normalize(y, y_mean, y_std), validation_split=0.0, batch_size=batch_size,
epochs=50, verbose=0)
# Generate test data
test_batch_size = 500
x_test = np.random.uniform(-5.0, 5.0, test_batch_size)
# Array for the real equation
x_true = np.arange(-5.0,5.0,0.1)
y_true = gen_function(x_true)
# Predict
prediction = model.predict(normalize(x_test, x_mean, x_std))
prediction = denormalize(prediction, y_mean, y_std)
# Plotting
plt.figure(figsize=(10, 7), dpi=100)
plt.scatter(x_test, prediction, s=0.5, label='Neural Net Prediction')
plt.plot(x_true, y_true, color='red', label='Equation')
plt.axvline(x=-3.0, label="Training Data range (3.0 to -3.0)")
plt.axvline(x=3.0)
plt.xlabel('Data')
plt.ylabel('Answer')
plt.legend(loc='best')
plt.show()
Explanation: First, we will just use a very simple neural network (2 layers, with 75 and 50 neurons respectively) to do the regression task without any fancy methods.
Please notice this is just for the sake of demonstration. In real-world applications, please add regularization (validation, early stopping, reduce learning rate) and do not train for 100 epochs on such a simple task.
<br>
Another thing to keep in mind: a neural net is a function approximation method, so it only works well within the domain of its training data. You can see that for testing data outside the training range (roughly beyond ±3 here), the neural network performs poorly because it has never been trained on such inputs.
Remember!!! Great Power comes with Great Overfitting (2016, someone on Reddit)
End of explanation
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
np.random.seed(1)
# Mesh the input space for evaluations of the real function, the prediction and
# its MSE
x_gp = np.atleast_2d(np.linspace(-5, 5, 1000)).T
# ----------------------------------------------------------------------
# now the noisy case
X = np.random.uniform(-3, 3, 100)
X = np.atleast_2d(X).T
# Observations and noise
y_gp = gen_function(X).ravel()
dy = np.random.normal(0.0, 0.5, size=y_gp.shape)
y_gp += dy
# Instantiate a Gaussian Process model
kernel = C(1, (1e-3, 1e3)) * RBF(10, (1e-3, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, alpha=(dy / y_gp) ** 2,
n_restarts_optimizer=10)
# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y_gp)
# Make the prediction on the meshed x-axis (ask for MSE as well)
y_pred, sigma = gp.predict(x_gp, return_std=True)
# Plot the function, the prediction and the 95% confidence interval based on
# the MSE
plt.figure(figsize=(10, 7), dpi=100)
plt.plot(x_gp, gen_function(x_gp), 'r:', label=u'$f(x) = x\,\sin(x)$')
plt.errorbar(X.ravel(), y_gp, dy, fmt='r.', markersize=10, label=u'Observations')
plt.plot(x_gp, y_pred, 'b-', label=u'Prediction')
plt.fill(np.concatenate([x_gp, x_gp[::-1]]),
np.concatenate([y_pred - 1.96 * sigma,
(y_pred + 1.96 * sigma)[::-1]]),
alpha=.5, fc='b', ec='None', label='95% confidence interval')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.ylim(-5, 5)
plt.legend(loc='upper left')
plt.show()
Explanation: Second, we will use a 2 layered fully connected neural network with Flipout
End of explanation
from astroNN.nn.losses import mse_lin_wrapper, mse_var_wrapper
# http://astronn.readthedocs.io/en/latest/neuralnets/losses_metrics.html
import tensorflow_probability as tfp
import tensorflow as tf
def generate_train_batch(x, y, x_mean, x_std, y_mean, y_std, y_err):
while True:
indices = random.sample(range(0, x.shape[0]), batch_size)
indices = np.sort(indices)
x_batch, y_batch, y_err_batch = normalize(x[indices], x_mean, x_std), normalize(y[indices], y_mean, y_std), normalize(y_err[indices], 0., y_std)
yield ({'input': x_batch, 'label_err': y_err_batch}, {'linear_output': y_batch, 'variance_output': y_batch})
def model_regression_var(num_hidden):
# Define Keras Model
input_tensor = tf.keras.layers.Input(batch_shape=(None, 1), name='input')
labels_err_tensor = tf.keras.layers.Input(batch_shape=(None, 1), name='label_err')
layer_1 = tfp.layers.DenseFlipout(units=num_hidden[0], activation='relu')(input_tensor)
layer_2 = tfp.layers.DenseFlipout(units=num_hidden[1], activation='relu')(layer_1)
layer_3 = tfp.layers.DenseFlipout(units=num_hidden[2], activation='relu')(layer_2)
# Good old output
linear_output = tf.keras.layers.Dense(units=1, activation="linear", name='linear_output')(layer_3)
# Data-dependent (aleatoric) uncertainty output
variance_output = tf.keras.layers.Dense(units=1, activation='linear', name='variance_output')(layer_3)
model = tf.keras.Model(inputs=[input_tensor, labels_err_tensor], outputs=[variance_output, linear_output])
model_prediction = tf.keras.Model(inputs=[input_tensor], outputs=[variance_output, linear_output])
mse_var_ext = mse_var_wrapper(linear_output, labels_err_tensor)
mse_lin_ext = mse_lin_wrapper(variance_output, labels_err_tensor)
return model, model_prediction, mse_lin_ext, mse_var_ext
Explanation: Third, use a single model to get both epistemic and aleatoric uncertainty with variational inference
Please notice you need to use Dense layers with a variational method (DenseFlipout) from TensorFlow Probability.
For more information on the astroNN loss functions (mse_lin_wrapper, mse_var_wrapper) used in here: http://astronn.readthedocs.io/en/latest/neuralnets/losses_metrics.html#regression-loss-and-predictive-variance-loss-for-bayesian-neural-net
Please refer to the Paper: Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches
for the information of DenseFlipout.
First we will define a "fork" model using the Keras functional API: one end outputs the prediction, and the other end outputs the predicted uncertainty
End of explanation
# Define some parameters
batch_size = 32
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
# Compile Keras model
num_hidden = [100, 75, 50]
model, model_prediction, mse_lin_ext, mse_var_ext = model_regression_var(num_hidden)
model.compile(loss={'linear_output': mse_lin_ext, 'variance_output': mse_var_ext},
loss_weights={'linear_output': .5, 'variance_output': .5}, optimizer=optimizer)
model.fit_generator(generator=generate_train_batch(x, y, x_mean, x_std, y_mean, y_std, y_err), epochs=100,
max_queue_size=60, verbose=0, steps_per_epoch= x.shape[0] // batch_size)
# Generate test data
test_batch_size = 500
x_test = np.random.uniform(-4, 4, test_batch_size)
mc_num = 100
predictions = np.zeros((mc_num, test_batch_size, 1))
predictions_var = np.zeros((mc_num, test_batch_size, 1))
var = np.zeros((mc_num, test_batch_size, 1))
uncertainty = np.zeros((mc_num, test_batch_size, 1))
for i in range(mc_num):
result = np.array(model_prediction.predict(normalize(x_test.reshape((x_test.shape[0], 1)), x_mean, x_std)))
predictions[i] = result[1].reshape((test_batch_size,1))
predictions_var[i] = result[0].reshape((test_batch_size,1))
predictions = denormalize(predictions, y_mean, y_std)
predictions_var = denormalize(predictions_var, np.zeros(predictions_var.shape), y_std)
# Get the mean prediction, the mean predicted (aleatoric) variance and the MC-sample (epistemic) variance
prediction_mc_droout = np.mean(predictions, axis=0)
var = np.mean(np.exp(predictions_var), axis=0)
var_mc_droout = np.var(predictions, axis=0)
total_variance = var + var_mc_droout # epistemic plus aleatoric uncertainty
total_uncertainty = np.sqrt(total_variance)
# Array for the real equation
x_true = np.arange(-4,4,0.1)
y_true = gen_function(x_true)
# Plotting
plt.figure(figsize=(10, 7), dpi=100)
plt.errorbar(x_test, prediction_mc_droout, yerr=total_uncertainty[:, 0], markersize=2,fmt='o', ecolor='g', capthick=2,
elinewidth=0.5, label='Neural Net Prediction')
plt.axvline(x=-3.0, label="Training Data range (3.0 to -3.0)")
plt.axvline(x=3.0)
plt.plot(x_true, y_true, color='red', label='Real Answer')
plt.ylim(-7,7)
plt.xlabel('Data')
plt.ylabel('Answer')
plt.legend(loc='best')
plt.show()
Explanation: Then we will define two custom loss and custom data generator
<br>
The custom loss function for variance prediction will be<br>
$\text{Loss}=\frac{1}{T} \sum_1^T \frac{1}{2} \frac{(y-\hat{y})^{2}}{\sigma^{2}} + \frac{1}{2}\text{log}(\sigma^{2})$
<br>
<br>
But for the sake of numerical stability, it is better to make the neural net predict $\text{s} = \text{log}(\sigma^{2})$ with <br>
$\text{Loss}=\frac{1}{T} \sum_1^T \frac{1}{2} (y-\hat{y})^{2}e^{-\text{s}} + \frac{1}{2}(\text{s})$
If you are using these loss functions, please make sure the uncertainty (sigma) of your data is Gaussian distributed.
<br>
<br>
Please notice that if you use the first loss, you have to use $softplus$ as the activation of the last layer because $\sigma^{2}$ is always positive, and $linear$ (or another appropriate activation) for the second loss because $\text{log}(\sigma^{2})$ can be both positive and negative.
<br>
$\text{Prediction} = \text{Mean from Variational Dense Layers}$
<br>
$\text{Total Variance} = \text{Variance from Variational Dense Layers} + \text{Predictive Variance Output}$
<br>
$\text{Prediction with Error} = \text{Prediction} \pm \sqrt{\text{Total Variance}}$
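To make the second formulation concrete, here is a minimal added sketch of such a loss (an illustration of the idea only, not astroNN's actual implementation, and it ignores the known label errors that the real mse_var_wrapper also accounts for). The variance head is assumed to predict $\text{s} = \text{log}(\sigma^{2})$:
import tensorflow as tf
def make_var_loss(mean_output):
    # Closure over the mean head's output tensor, mirroring the wrapper pattern used above
    def loss(y_true, s_pred):
        # 0.5 * (y - y_hat)^2 * exp(-s) + 0.5 * s, averaged over the batch
        return tf.reduce_mean(0.5 * tf.square(y_true - mean_output) * tf.exp(-s_pred) + 0.5 * s_pred)
    return loss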
End of explanation |
9,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background Removal with Robust PCA
Video dataset: BMC | Background Models Challenge
https
Step1: LU decomposition
Factor a matrix into the product of an upper triangular and a lower triangular matrix
Step2: The LU factorization is useful!
Solving Ax = b becomes LUx = b
Step3: Broadcasting
When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when
they are equal, or
one of them is 1
Sparse matrices
There are the most common sparse storage formats | Python Code:
# Display the result of every expression in a cell (multi-result output support)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Explanation: Background Removal with Robust PCA
Video dataset: BMC | Background Models Challenge
https://www.cs.utexas.edu/~chaoyeh/web_action_data/dataset_list.html
Background Subtraction Website
End of explanation
import numpy as np
def LU(A):
U = np.copy(A)
m, n = A.shape
L = np.eye(n)
for k in range(n-1):
for j in range(k+1,n):
L[j,k] = U[j,k]/U[k,k]
U[j,k:n] -= L[j,k] * U[k,k:n]
return L, U
A = np.array([[2,1,1,0],[4,3,3,1],[8,7,9,3],[6,7,9,8]]).astype(np.float)
L, U = LU(A)
L
U
A
L @ U
np.allclose(A, L @ U)
Explanation: LU decomposition
Factor a matrix into the product of an upper triangular and a lower triangular matrix
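As an added aside (not part of the original notebook), SciPy provides a pivoted LU factorization for comparison; this sketch reuses A and np from the cell above:
from scipy.linalg import lu
P, L2, U2 = lu(A)              # permutation, lower and upper triangular factors of A
np.allclose(P @ L2 @ U2, A)    # True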
End of explanation
v=np.array([1,2,3])
v
v.shape
v1=np.expand_dims(v, -1)
v1
v1.shape
v2 = v[np.newaxis]
v2
v2.shape
v3 = v[:, np.newaxis]
v3
v3.shape
Explanation: The LU factorization is useful!
Solving Ax = b becomes LUx = b:
1. find A = LU
2. solve Ly = b
3. solve Ux = y
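For instance, an added sketch of steps 2 and 3 using the L, U and A computed above (SciPy's triangular solver exploits the triangular structure):
from scipy.linalg import solve_triangular
b = np.array([1., 2., 3., 4.])
y_sol = solve_triangular(L, b, lower=True)       # step 2: solve Ly = b
x_sol = solve_triangular(U, y_sol, lower=False)  # step 3: solve Ux = y
np.allclose(A @ x_sol, b)                        # True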
End of explanation
import sklearn
Explanation: Broadcasting
When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when
they are equal, or
one of them is 1
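A small added example of these rules, in the spirit of the newaxis demos above:
import numpy as np
v = np.array([1, 2, 3])      # shape (3,)
col = v[:, np.newaxis]       # shape (3, 1)
(v + col).shape              # (3,) and (3, 1) broadcast to (3, 3)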
Sparse matrices
These are the most common sparse storage formats:
coordinate-wise (scipy calls COO)
compressed sparse row (CSR)
compressed sparse column (CSC)
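An added example of the three formats, assuming SciPy is available:
import numpy as np
from scipy import sparse
dense = np.array([[0, 0, 3], [4, 0, 0]])
coo = sparse.coo_matrix(dense)   # coordinate-wise (row, col, value) triplets
csr = coo.tocsr()                # compressed sparse row
csc = coo.tocsc()                # compressed sparse column
csr.toarray()                    # back to the dense array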
End of explanation |
9,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data, a=0.1, b=0.9):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
x_min = np.amin(image_data)
x_max = np.amax(image_data)
return a + (image_data - x_min) * (b - a) / (x_max - x_min)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
print(test_labels.shape)
print(train_labels.shape)
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
train_features[:10]
train_features.shape
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
n_input = 784
n_classes = 10
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal([n_input, n_classes]))
biases = tf.Variable(tf.zeros([n_classes]))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 1
learning_rate = 0.1
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
9,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step5: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this
Step8: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step9: Building the model
Below is a function where I build the graph for the network.
Step10: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step11: Training
Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
Step12: Saved checkpoints
Read up on saving and loading checkpoints here
Step13: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step14: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('chalo.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
chars[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
def split_data(chars, batch_size, num_steps, split_frac=0.9):
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
Explanation: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
train_x, train_y, val_x, val_y = split_data(chars, 10, 50)
train_x.shape
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
train_x[:,:50]
Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:
End of explanation
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# One-hot encoding the input and target characters
x_one_hot = tf.one_hot(inputs, num_classes)
y_one_hot = tf.one_hot(targets, num_classes)
### Build the RNN layers
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
### Run the data through the RNN layers
# This makes a list where each element is one step in the sequence
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one output row for each step for each batch
seq_output = tf.concat(outputs, axis=1)
output = tf.reshape(seq_output, [-1, lstm_size])
# Now connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(num_classes))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and batch
logits = tf.matmul(output, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
preds = tf.nn.softmax(logits, name='predictions')
# Reshape the targets to match the logits
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
cost = tf.reduce_mean(loss)
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
# NOTE: I'm using a namedtuple here because I think they are cool
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
Explanation: Building the model
Below is a function where I build the graph for the network.
End of explanation
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.3
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
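To check the first quantity in this notebook, here is a small added helper (using the TF 1.x graph API and the np/tf imports already in this notebook; call it after build_rnn has constructed the graph):
def count_parameters():
    # Total number of trainable parameters in the current default graph
    return int(sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()))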
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 300
# Save every N iterations
save_every_n = 100
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
Explanation: Training
Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
checkpoint = "checkpoints/i3000_l512_v2.497.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Cuando en")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
9,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Question I (a)
queue = [A]
next = A; queue = [B,C]
next = B; queue = [C,I,D,E]
next = C; queue = [I,D,E,F,G]
next = I -> STOP
There are 4 iterations needed to find the Node I in minimum.
Question I (b)
Step2: Minimal steps
Step3: Question II (b)
A graph is acyclic if it contains no cycles, and directed if its edges have a direction. This graph contains a cycle (A->C->F->A), so it is not acyclic, but its edges have directions, so it is directed.
Question II (c)
Yes, there are Eulerian cycles in the graph. Eulerian cycles use every edge exactly once. For example A->C->F->A or A->B->D->E->I->F->A
Question II (d)
Yes, there are Hamiltonian cycles in the graph. Hamiltonian cycles use every node exactly once. For example A->C->F->A or A->B->D->E->I->F->A
Question II (e)
The concept of cliques is only defined for undirected graphs. If the graph in Fig. 1 were undirected, ACF, CFG, BDE and BEI would be maximal cliques.
Question II (f)
As the graph contains cycles a topological ordering is only possible by violating some of the edges.
Question III | Python Code:
grapha = {"A":["B", "C"],
"B":["D", "I", "E"],
"C":["G", "F"],
"D":["E","H"],
"E":["I"],
"F":["G","A"],
"H":[],
"I":["F"],
"G":[]}
def dfs(connects, start, searched):
looks if a searched node is in a graph.
connects: dictionary of arrays,the possible paths
star: node to start looking for the searched node
lifo = [start]
visited = set()
while lifo:
print(lifo)
vertex = lifo.pop()
if vertex == searched:
return "vertex found!"
if not vertex in visited:
lifo += connects[vertex]
visited.add(vertex)
return("not found")
dfs(grapha,"A","I")
Explanation: Question I (a)
queue = [A]
next = A; queue = [B,C]
next = B; queue = [C,I,D,E]
next = C; queue = [I,D,E,F,G]
next = I -> STOP
There are 4 iterations needed to find the Node I in minimum.
Question I (b)
End of explanation
import pandas as pd
adjacency = pd.DataFrame({'A':[0,1,1,0,0,0,0,0,0],
'B':[0,0,0,1,1,0,0,0,1],
'C':[0,0,0,0,0,1,1,0,0],
'D':[0,0,0,0,1,0,0,1,0],
'E':[0,0,0,0,0,0,0,0,1],
'F':[1,0,0,0,0,0,1,0,0],
'G':[0,0,0,0,0,0,0,0,0],
'H':[0,0,0,0,0,0,0,0,0],
'I':[0,0,0,0,0,1,0,0,0]},
index =['A','B','C','D','E','F','G','H','I'])
adjacency = adjacency.T
Explanation: Minimal steps:
queue = [A]
next = A; queue = [B,C]
next = B; queue = [I,D,E,C]
next = I -> STOP
There are 3 iterations needed to find node I in minimum.
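For comparison with the stack-based dfs above, here is an added sketch of a queue-based breadth-first search over the same grapha dictionary (note that the exact queue contents, and therefore the iteration count, depend on the order of the neighbour lists):
from collections import deque
def bfs(connects, start, searched):
    fifo = deque([start])
    visited = set()
    while fifo:
        vertex = fifo.popleft()      # FIFO: take from the front instead of the back
        if vertex == searched:
            return "vertex found!"
        if vertex not in visited:
            visited.add(vertex)
            fifo.extend(connects[vertex])
    return "not found"
bfs(grapha, "A", "I")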
Question II (a)
End of explanation
def dfs(matrix, query, start):
# Return True if the query was found
if query == start:
return True
# Return False if Node is already visited
elif start not in matrix.index:
return False
# Return False if there are no outgoing edges
elif 1 not in matrix.loc[start].values:
return False
# Call the function for all unvisited neighbouring nodes
else:
mask = matrix.loc[start].values == 1
neighbours = list(matrix.loc[start][mask].index)
matrix = matrix.drop(start)
found = []
for n in neighbours:
found.append(dfs(matrix, query, n))
if any(found) == True:
return True
if all(found) == False:
return False
dfs(adjacency, 'I', 'A')
dfs(adjacency, 'N', 'A')
Explanation: Question II (b)
A graph is acyclic if it contains no cycles, and directed if its edges have a direction. This graph contains a cycle (A->C->F->A), so it is not acyclic, but its edges have directions, so it is directed.
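As a quick added check, that cycle can be verified directly against the grapha dictionary defined earlier:
cycle = ["A", "C", "F", "A"]
all(nxt in grapha[cur] for cur, nxt in zip(cycle, cycle[1:]))   # True: every hop is an edge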
Question II (c)
Yes, there are Eulerian cycles in the graph. Eulerian cycles use every edge exactly once. For example A->C->F->A or A->B->D->E->I->F->A
Question II (d)
Yes, there are Hamiltonian cycles in the graph. Hamiltonian cycles use every node exactly once. For example A->C->F->A or A->B->D->E->I->F->A
Question II (e)
The concept of cliques is only defined for undirected graphs. If the graph in Fig. 1 were undirected, ACF, CFG, BDE and BEI would be maximal cliques.
Question II (f)
As the graph contains cycles a topological ordering is only possible by violating some of the edges.
Question III
End of explanation |
9,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
San Diego Burrito Analytics
Step1: Load data
Step2: Linear model 1
Step3: Linear model 2
Step4: Linear model 3. Predicting Yelp ratings
Can also do this for Google ratings
Note, interestingly, that the Tortilla rating is most positively correlated with Yelp and Google ratings. This is significant in a linear model when accounting for the overall rating. | Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
import seaborn as sns
sns.set_style("white")
Explanation: San Diego Burrito Analytics: Linear models
Scott Cole
21 May 2016
This notebook attempts to predict the overall rating of a burrito as a linear combination of its dimensions. Interpretation of these models is complicated by the significant correlations between dimensions (such as meat quality and non-meat filling quality).
Imports
End of explanation
import util
df = util.load_burritos()
N = df.shape[0]
Explanation: Load data
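Because the introduction above notes that correlations between the rating dimensions complicate interpretation of the linear models, here is a quick sketch of that check (assuming df as loaded here; the column names follow the ones used later in this notebook):
# Correlation between the rating dimensions used as predictors below;
# strongly correlated columns make individual GLM coefficients hard to interpret.
dims = ['Hunger', 'Cost', 'Tortilla', 'Temp', 'Meat', 'Fillings',
        'Meat:filling', 'Uniformity', 'Salsa', 'Wrap']
print(df[dims].corr().round(2))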
End of explanation
# Define predictors of the model
m_lm = ['Hunger','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap']
# Remove incomplete data
dffull = df[np.hstack((m_lm,'overall'))].dropna()
X = sm.add_constant(dffull[m_lm])
y = dffull['overall']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(1 - np.var(res.resid_pearson) / np.var(y))
# Visualize coefficients
from tools.plt import bar
newidx = np.argsort(-res.params.values)
temp = np.arange(len(newidx))
newidx = np.delete(newidx,temp[newidx==0])
bar(res.params[newidx],res.bse[newidx],X.keys()[newidx],'Overall rating\nLinear model\ncoefficient',
ylim =(0,.5),figsize=(11,3))
plt.plot()
figname = 'overall_metric_linearmodelcoef'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
Explanation: Linear model 1: Predict overall rating from the individual dimensions
End of explanation
# Get all ingredient keys
startingredients = 29
ingredientkeys = df.keys()[startingredients:]
# Get all ingredient keys with at least 10 burritos
Nlim = 10
ingredientkeys = ingredientkeys[df.count()[startingredients:].values>=Nlim]
# Make a dataframe for all ingredients
dfing = df[ingredientkeys]
# Convert data to binary
for k in dfing.keys():
dfing[k] = dfing[k].map({'x':1,'X':1,1:1})
dfing[k] = dfing[k].fillna(0)
# Run a general linear model to predict overall burrito rating from ingredients
X = sm.add_constant(dfing)
y = df.overall
lm = sm.GLM(y,X)
res = lm.fit()
print(res.summary())
origR2 = 1 - np.var(res.resid_pearson) / np.var(y)
# Test if the variance explained in this linear model is significantly better than chance
np.random.seed(0)
Nsurr = 1000
randr2 = np.zeros(Nsurr)
for n in range(Nsurr):
Xrand = np.random.rand(X.shape[0],X.shape[1])
Xrand[:,0] = np.ones(X.shape[0])
lm = sm.GLM(y,Xrand)
res = lm.fit()
randr2[n] = 1 - np.var(res.resid_pearson) / np.var(y)
print('p = ', np.mean(randr2 > origR2))
Explanation: Linear model 2: predict overall rating from ingredients
This linear model is no better than generating random features, showing that a good choice of ingredients alone is not sufficient to make a high-quality burrito.
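A complementary check of this conclusion is sketched below (not part of the original analysis; it assumes dfing and df.overall as defined above and a scikit-learn version that provides model_selection). The cross-validated R² of an ingredient-only model should be near or below zero:
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
ok = df.overall.notnull()  # drop burritos without an overall rating
cv_r2 = cross_val_score(LinearRegression(), dfing[ok].values, df.overall[ok].values,
                        scoring='r2', cv=5)
print(cv_r2.mean())  # near (or below) zero if the ingredient list alone carries little signal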
End of explanation
# Average each metric over each Location
# Avoid case issues; in the future should avoid article issues
df.Location = df.Location.str.lower()
m_Location = ['Location','N','Yelp','Google','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
tacoshops = df.Location.unique()
TS = len(tacoshops)
dfmean = pd.DataFrame(np.nan, index=range(TS), columns=m_Location)
for ts in range(TS):
dfmean.loc[ts] = df.loc[df.Location==tacoshops[ts]].mean()
dfmean['N'][ts] = sum(df.Location == tacoshops[ts])
dfmean.Location = tacoshops
# Note high correlations between features
m_Yelp = ['Google','Yelp','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(m_Yelp)
dfmeancorr = dfmean[m_Yelp].corr()
from matplotlib import cm
clim1 = (-1,1)
plt.figure(figsize=(10,10))
cax = plt.pcolor(range(M+1), range(M+1), dfmeancorr, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(m_Yelp,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(m_Yelp,size=25)
plt.xticks(rotation='vertical')
plt.xlim((0,M))
plt.ylim((0,M))
plt.tight_layout()
# GLM for Yelp: all dimensions
m_Yelp = ['Hunger','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
dffull = dfmean[np.hstack((m_Yelp,'Yelp'))].dropna()
X = sm.add_constant(dffull[m_Yelp])
y = dffull['Yelp']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
print(res.pvalues)
print(1 - np.var(res.resid_pearson) / np.var(y))
# GLM for Yelp: some dimensions
m_Yelp = ['Tortilla','overall']
dffull = dfmean[np.hstack((m_Yelp,'Yelp'))].dropna()
X = sm.add_constant(dffull[m_Yelp])
y = dffull['Yelp']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
print(res.summary())
plt.figure(figsize=(4,4))
ax = plt.gca()
dfmean.plot(kind='scatter',x='Tortilla',y='Yelp',ax=ax,**{'s':40,'color':'k','alpha':.3})
plt.xlabel('Average Tortilla rating',size=20)
plt.ylabel('Yelp rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
plt.ylim((2,5))
plt.tight_layout()
print(sp.stats.spearmanr(dffull.Yelp, dffull.Tortilla))
figname = 'corr-Yelp-tortilla'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
Explanation: Linear model 3. Predicting Yelp ratings
Can also do this for Google ratings
Note, interestingly, that the Tortilla rating is most positively correlated with Yelp and Google ratings. This is significant in a linear model when accounting for the overall rating.
End of explanation |
9,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running Tune experiments with Skopt
In this tutorial we introduce Skopt, while running a simple Ray Tune experiment. Tune’s Search Algorithms integrate with Skopt and, as a result, allow you to seamlessly scale up a Skopt optimization process - without sacrificing performance.
Scikit-Optimize, or skopt, is a simple and efficient library to optimize expensive and noisy black-box functions, e.g. large-scale ML experiments. It implements several methods for sequential model-based optimization. Notably, skopt does not perform gradient-based optimization, and instead uses computationally cheap surrogate models to
approximate the expensive function. In this example we minimize a simple objective to briefly demonstrate the usage of Skopt with Ray Tune via SkOptSearch. It's useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume the scikit-optimize==0.8.1 library is installed. To learn more, please refer to the Scikit-Optimize website.
Step1: Click below to see all the imports we need for this example.
You can also launch directly into a Binder instance to run this notebook yourself.
Just click on the rocket symbol at the top of the navigation.
Step2: Let's start by defining a simple evaluation function. Again, an explicit math formula is queried here for demonstration, yet in practice this is typically a black-box function-- e.g. the performance results after training an ML model. We artificially sleep for a bit (0.1 seconds) to simulate a long-running ML experiment. This setup assumes that we're running multiple steps of an experiment while tuning three hyperparameters, namely width, height, and activation.
Step3: Next, our objective function to be optimized takes a Tune config, evaluates the score of your experiment in a training loop,
and uses tune.report to report the score back to Tune.
Step4: Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.
Step5: The search algorithm is instantiated from the SkOptSearch class. We also constrain the number of concurrent trials to 4 with a ConcurrencyLimiter.
Step6: The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples.
(you can decrease this if it takes too long on your machine).
Step7: Finally, we run the experiment to "min"imize the "mean_loss" of the objective by searching search_space via algo, num_samples times. The previous sentence fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute tune.run().
Step8: We now have hyperparameters found to minimize the mean loss.
Step9: Providing an initial set of hyperparameters
While defining the search algorithm, we may choose to provide an initial set of hyperparameters that we believe are especially promising or informative, and
pass this information as a helpful starting point for the SkOptSearch object. We can also pass the known rewards for these initial params to save on unnecessary computation.
Step10: Now the search_alg built using SkOptSearch takes points_to_evaluate.
Step11: And again run the experiment, this time with initial hyperparameter evaluations
Step12: And we again show the ideal hyperparameters. | Python Code:
# !pip install ray[tune]
!pip install scikit-optimize==0.8.1
!pip install scikit-learn==0.18.2
Explanation: Running Tune experiments with Skopt
In this tutorial we introduce Skopt, while running a simple Ray Tune experiment. Tune’s Search Algorithms integrate with Skopt and, as a result, allow you to seamlessly scale up a Skopt optimization process - without sacrificing performance.
Scikit-Optimize, or skopt, is a simple and efficient library to optimize expensive and noisy black-box functions, e.g. large-scale ML experiments. It implements several methods for sequential model-based optimization. Notably, skopt does not perform gradient-based optimization, and instead uses computationally cheap surrogate models to
approximate the expensive function. In this example we minimize a simple objective to briefly demonstrate the usage of Skopt with Ray Tune via SkOptSearch. It's useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume the scikit-optimize==0.8.1 library is installed. To learn more, please refer to the Scikit-Optimize website.
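For reference, the same surrogate-model idea can be used with skopt directly, outside of Tune; a minimal sketch with a made-up one-dimensional objective:
from skopt import gp_minimize
# gp_minimize fits a Gaussian-process surrogate to the observed (x, y) pairs
# and uses it to pick the next point to evaluate.
result = gp_minimize(lambda x: (x[0] - 2.0) ** 2,  # stand-in for an expensive black box
                     dimensions=[(-5.0, 5.0)],     # search space for x
                     n_calls=15,
                     random_state=0)
print(result.x, result.fun)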
End of explanation
import time
from typing import Dict, Optional, Any
import ray
import skopt
from ray import tune
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.suggest.skopt import SkOptSearch
ray.init(configure_logging=False)
Explanation: Click below to see all the imports we need for this example.
You can also launch directly into a Binder instance to run this notebook yourself.
Just click on the rocket symbol at the top of the navigation.
End of explanation
def evaluate(step, width, height, activation):
time.sleep(0.1)
activation_boost = 10 if activation=="relu" else 0
return (0.1 + width * step / 100) ** (-1) + height * 0.1 + activation_boost
Explanation: Let's start by defining a simple evaluation function. Again, an explicit math formula is queried here for demonstration, yet in practice this is typically a black-box function-- e.g. the performance results after training an ML model. We artificially sleep for a bit (0.1 seconds) to simulate a long-running ML experiment. This setup assumes that we're running multiple steps of an experiment while tuning three hyperparameters, namely width, height, and activation.
End of explanation
def objective(config):
for step in range(config["steps"]):
score = evaluate(step, config["width"], config["height"], config["activation"])
tune.report(iterations=step, mean_loss=score)
Explanation: Next, our objective function to be optimized takes a Tune config, evaluates the score of your experiment in a training loop,
and uses tune.report to report the score back to Tune.
End of explanation
search_space = {
"steps": 100,
"width": tune.uniform(0, 20),
"height": tune.uniform(-100, 100),
"activation": tune.choice(["relu", "tanh"]),
}
Explanation: Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.
End of explanation
algo = SkOptSearch()
algo = ConcurrencyLimiter(algo, max_concurrent=4)
Explanation: The search algorithm is instantiated from the SkOptSearch class. We also constrain the number of concurrent trials to 4 with a ConcurrencyLimiter.
End of explanation
num_samples = 1000
# We override here for our smoke tests.
num_samples = 10
Explanation: The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples.
(you can decrease this if it takes too long on your machine).
End of explanation
analysis = tune.run(
objective,
search_alg=algo,
metric="mean_loss",
mode="min",
name="skopt_exp",
num_samples=num_samples,
config=search_space
)
Explanation: Finally, we run the experiment to "min"imize the "mean_loss" of the objective by searching search_space via algo, num_samples times. The previous sentence fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute tune.run().
End of explanation
print("Best hyperparameters found were: ", analysis.best_config)
Explanation: We now have hyperparameters found to minimize the mean loss.
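A short sketch of inspecting the results in more detail (assuming the analysis object returned by tune.run above):
results_df = analysis.dataframe()         # one row per trial with the reported metrics
print(analysis.best_result["mean_loss"])  # last reported mean_loss of the best trial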
End of explanation
initial_params = [
{"width": 10, "height": 0, "activation": "relu"},
{"width": 15, "height": -20, "activation": "tanh"}
]
known_rewards = [-189, -1144]
Explanation: Providing an initial set of hyperparameters
While defining the search algorithm, we may choose to provide an initial set of hyperparameters that we believe are especially promising or informative, and
pass this information as a helpful starting point for the SkOptSearch object. We can also pass the known rewards for these initial params to save on unnecessary computation.
End of explanation
algo = SkOptSearch(points_to_evaluate=initial_params)
algo = ConcurrencyLimiter(algo, max_concurrent=4)
Explanation: Now the search_alg built using SkOptSearch takes points_to_evaluate.
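Since the rewards for those points were defined above as known_rewards, they can be passed along as well so the optimizer does not re-evaluate them. A sketch, assuming the evaluated_rewards argument is available in this version of Ray Tune:
algo_with_rewards = SkOptSearch(points_to_evaluate=initial_params,
                                evaluated_rewards=known_rewards)
algo_with_rewards = ConcurrencyLimiter(algo_with_rewards, max_concurrent=4)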
End of explanation
analysis = tune.run(
objective,
search_alg=algo,
metric="mean_loss",
mode="min",
name="skopt_exp_with_warmstart",
num_samples=num_samples,
config=search_space
)
Explanation: And again run the experiment, this time with initial hyperparameter evaluations:
End of explanation
print("Best hyperparameters found were: ", analysis.best_config)
ray.shutdown()
Explanation: And we again show the ideal hyperparameters.
End of explanation |