Unnamed: 0 | text_prompt | code_prompt
---|---|---|
9,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and back, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically. But being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
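To make the lookup shortcut concrete, here is a tiny NumPy sketch (illustrative only, not part of the notebook's pipeline) showing that multiplying a one-hot vector by the weight matrix gives exactly the same result as indexing a row:

```python
import numpy as np

vocab_size, embed_dim = 5, 3
W = np.random.rand(vocab_size, embed_dim)     # embedding weight matrix

word_idx = 2                                  # integer-encoded word
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1

print(np.allclose(one_hot @ W, W[word_idx]))  # True: the lookup replaces the multiply
```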
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and back, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
## Your code here
from collections import Counter
import random
word_counts = Counter(int_words)
t = 1e-5
total_words = len(int_words)
frequency = { word : float(count) / total_words for word, count in word_counts.items() }
p_drop = {word : 1 - np.sqrt(float(t)/frequency[word]) for word in word_counts }
train_words = [w for w in int_words if p_drop[w] < random.random()] # The final subsampled word list
#print (len(train_words))
#print(train_words[:30])
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
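For example, with the threshold used in the code above ($t = 10^{-5}$), a word that makes up 1% of the corpus ($f(w_i) = 0.01$) is discarded with probability $1 - \sqrt{10^{-5}/0.01} \approx 0.97$, while any word with $f(w_i) \le t$ is never discarded.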
I'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically. But being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
    R = random.randint(1, window_size)  # randint is inclusive on both ends, so R is in [1, window_size]
start = idx - R if idx >= R else 0
end = idx + R + 1
return list (set(words[start:idx] + words[idx+1:end]))
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
# with tf.name_scope('input'):
inputs = tf.placeholder (tf.int32, shape=[None], name='inputs')
# with tf.name_scope('targets'):
labels = tf.placeholder (tf.int32, shape=[None,None], name='labels')
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
# with tf.name_scope('embeddings'):
embedding = tf.Variable (tf.random_uniform ([n_vocab, n_embedding], -1.0, 1.0, dtype=tf.float32), name='embedding') # create embedding weight matrix here
embed = tf.nn.embedding_lookup (embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
tf.summary.histogram ('embedding', embedding)
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable (tf.truncated_normal ([n_vocab, n_embedding], stddev=0.1, dtype=tf.float32), name='softmax_w') # create softmax weight matrix here
softmax_b = tf.Variable (tf.zeros (n_vocab, dtype=tf.float32), name='softmax_b') # create softmax biases here
tf.summary.histogram ('softmax_w', softmax_w)
tf.summary.histogram ('softmax_b', softmax_b)
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss (softmax_w, softmax_b, labels, embed, n_sampled, n_vocab, name='loss')
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
tf.summary.scalar ('cost', cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
epochs = 1
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
merged = tf.summary.merge_all()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter ("./logs/2/train", sess.graph)
test_writer = tf.summary.FileWriter ("./logs/2/test")
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
summary, train_loss, _ = sess.run([merged, cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
train_writer.add_summary (summary, iteration)
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
9,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate new columns with average block info
Take average values over two time horizons
6 blocks (~1 min) -> represents the current state (short frequency view)
60 blocks (~10 min) -> represents the long term view
Step1: Merge data with new columns
Step2: Create a label
What are we predicting?
A hindsight estimate of what the price should be, given knowledge about previous blocks
Develop a summary statistic about the distribution of prices over previous blocks
Our target: the 25th percentile of the distribution (gweiShare / gasShare)
Step3: Compute the summary statistic mu
given the distribution of mv values, fit a statistical model to the data
use this fit model to compute the 25th percentile of the distribution
Step4: Compute the label, p, given mu
knowing mu, how do we obtain our hindsight recommendation?
using our definition of mu, we solve an equation to obtain p (price)
p = (mu x gweiPaid_b) / gasUsed_b
this will serve as our label and thus recommendation for how much to pay per unit gas for a transaction to successfully commence
it tells us what price we need to set in order to force mv for that bid to be mu
Step5: Look our method smoothed the prices out!
Write training set and labels to a csv file for modeling | Python Code:
df['txcnt_second'] = df['tx_count'].values / df['blockTime'].values
df['avg_gasUsed_t_perblock'] = df.groupby('block_id')['gasUsed_t'].transform('mean')
df['avg_price_perblock'] = df.groupby('block_id')['price_gwei'].transform('mean')
def rolling_avg(window_size):
price = df[['block_id', 'avg_price_perblock']].drop_duplicates().sort_values(
'block_id', ascending=True)
gasUsed_t = df[['block_id', 'avg_gasUsed_t_perblock']].drop_duplicates().sort_values(
'block_id', ascending=True)
txcnt_second = df[['block_id', 'txcnt_second']].drop_duplicates().sort_values(
'block_id', ascending=True)
tx_count = df[['block_id', 'tx_count']].drop_duplicates().sort_values(
'block_id', ascending=True)
gasUsed_b = df[['block_id', 'gasUsed_b']].drop_duplicates().sort_values(
'block_id', ascending=True)
uncle_count = df[['block_id', 'uncle_count']].drop_duplicates().sort_values(
'block_id', ascending=True)
difficulty = df[['block_id', 'difficulty']].drop_duplicates().sort_values(
'block_id', ascending=True)
blocktime = df[['block_id', 'blockTime']].drop_duplicates().sort_values(
'block_id', ascending=True)
# create new pandas dataframe with average values
rolling_avg = pd.DataFrame()
# calculate rolling averages
rolling_avg['avg_blocktime'] = blocktime['blockTime'].rolling(window=window_size).mean()
rolling_avg['avg_gasUsed_b'] = gasUsed_b['gasUsed_b'].rolling(window=window_size).mean()
rolling_avg['avg_tx_count'] = tx_count['tx_count'].rolling(window=window_size).mean()
rolling_avg['avg_uncle_count'] = uncle_count['uncle_count'].rolling(window=window_size).mean()
rolling_avg['avg_difficulty'] = difficulty['difficulty'].rolling(window=window_size).mean()
rolling_avg['avg_txcnt_second'] = txcnt_second['txcnt_second'].rolling(window=window_size).mean()
rolling_avg['avg_gasUsed_t'] = gasUsed_t['avg_gasUsed_t_perblock'].rolling(window=window_size).mean()
rolling_avg['avg_price'] = price['avg_price_perblock'].rolling(window=window_size).mean()
# insert blockids to merge on
rolling_avg['blockids'] = df['block_id'].drop_duplicates().sort_values(ascending=True)
return rolling_avg
num_blocks = [6, 60]
for num in num_blocks:
df_rolling_avg = rolling_avg(num)
df_rolling_avg.to_csv('./../data/block_avg_{}.csv'.format(num))
df_rolling_avg_6 = rolling_avg(6)
df_rolling_avg_60 = rolling_avg(60)
Explanation: Generate new columns with average block info
Take average values over two time horizons
6 blocks (~1 min) -> represents the current state (short frequency view)
60 blocks (~10 min) -> represents the long term view
End of explanation
merged1 = pd.merge(df, df_rolling_avg_6, left_on='block_id', right_on='blockids')
merged2 = pd.merge(merged1, df_rolling_avg_60, left_on='block_id', right_on='blockids', suffixes=('_6', '_60'))
merged2.columns
for col in merged2.columns:
print(col, merged2[col].isnull().sum())
merged2.dropna(inplace=True)
Explanation: Merge data with new columns
End of explanation
merged2['mv'] = merged2.gweiShare / merged2.gasShare
merged2['mv'].isnull().sum()
merged2['mv'].describe()
Explanation: Create a label
What are we predicting?
A hindsight estimate of what the price should be, given knowledge about previous blocks
Develop a summary statistic about the distribution of prices over previous blocks
Our target: the 25th percentile of the distribution (gweiShare / gasShare)
Definitions
gasUsed_t -> the amount of gas consumed on a transaction
gasUsed_b -> the amount of gas consumed in an entire block
gweiPaid -> the total amount paid (Gwei) for a transaction (= gasUsed_t x price_gwei)
gweiPaid_b -> the total amount paid in a block
gweiShare -> the fraction of gwei paid w.r.t. the entire block
gasShare -> the fraction of gas consumed w.r.t. the entire block
Define "miner value" – mv
the fraction of prices per block and gas per block
mv = gweiShare / gasShare
local parameter (per transaction)
Define mu
mu is a summary statistic of mv (global parameter)
a measure of how likely a transaction is to be "picked up" by a miner for completion (risk factor)
our target/goal is for mu to be the 25th percentile of mv (gweiShare / gasShare)
mu = percentile(mv, 25) over the entire distribution of mv values
we can tune this parameter to increase or decrease the desired percentile
it is a pre-emptive statistical calculation based on our hindsight knowledge
The "price" predicted with hindsight
knowing mu, how do we obtain our hindsight recommendation?
using our definition of mu, we solve an equation to obtain p (price)
p = (mu x gweiPaid_b) / gasUsed_b
this will serve as our label and thus recommendation for how much to pay per unit gas for a transaction to successfully commence (see the short derivation below)
it tells us what price we need to set in order to force mv for that bid to be mu
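To spell out that equation-solving step (derivation added for clarity, using the definitions above): a bid with price $p$ and gas $\text{gasUsed}_t$ has
$$ mv = \frac{\text{gweiShare}}{\text{gasShare}} = \frac{p \cdot \text{gasUsed}_t / \text{gweiPaid}_b}{\text{gasUsed}_t / \text{gasUsed}_b} = \frac{p \cdot \text{gasUsed}_b}{\text{gweiPaid}_b}, $$
so setting $mv = \mu$ and solving for $p$ gives $p = \mu \cdot \text{gweiPaid}_b / \text{gasUsed}_b$, which is exactly the label computed below.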
Calculate miner value (mv) for every datapoint in our dataset
price / gas or gweiShare / gasShare
End of explanation
alpha = .25
mu= merged2.mv.quantile(alpha)
merged2.mv.apply(np.log10).hist(bins=100)
plt.xlim([-2,2])
ylims=plt.gca().get_ylim()
plt.vlines(np.log10(mu), ylims[0], ylims[1], 'r' )
merged2.mv.hist(bins=np.arange(0,10,.20))
ylims=plt.gca().get_ylim()
plt.vlines(mu, ylims[0], ylims[1], 'r' )
merged2.mv.hist(bins=np.arange(0,10,.20), color = 'k', alpha=0.5, histtype='stepfilled',
label='Miner Values')
ylims=plt.gca().get_ylim()
plt.vlines(mu, ylims[0], ylims[1], 'r', linestyle='--')
plt.title('Distribution of miner values', fontsize=18)
plt.legend()
plt.tight_layout()
plt.savefig('./../images/mv_dist.png', dpi=300)
Explanation: Compute the summary statistic mu
given the distribution of mv values, fit a statistical model to the data
use this fit model to compute the 25th percentile of the distribution
End of explanation
mu
merged2['p_label'] = mu* (merged2.gweiPaid_b / merged2.gasUsed_b)
merged2['p_label'].hist(bins=np.arange(0,50,2), color = 'b', alpha=0.7, histtype='stepfilled',
label='New Label')
merged2['price_gwei'].hist(bins=np.arange(0,50,.5), color = 'r', alpha=0.7,
histtype='stepfilled', label='Price')
plt.title('Constructed Label', fontsize=18)
plt.legend()
plt.tight_layout()
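# p_label2 (below) appears to follow the same derivation but with the transaction's own gas and fee
# counted as part of the block totals, i.e. p = mu*gweiPaid_b / (gasUsed_b + gasUsed_t*(1 - mu))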
merged2['p_label2'] = mu*merged2.gweiPaid_b/(merged2.gasUsed_b+merged2.gasUsed_t*(1-mu))
merged2.p_label2.describe()
merged2['p_label2'].hist(bins=np.arange(0,50,2), color = 'b', alpha=0.7, histtype='stepfilled',
label='New Label')
merged2['price_gwei'].hist(bins=np.arange(0,50,.5), color = 'r', alpha=0.7, histtype='stepfilled',
label='Price')
plt.title('Constructed Label', fontsize=16)
plt.legend()
plt.tight_layout()
plt.savefig('./../images/label.png', dpi=300)
Explanation: Compute the label, p, given mu
knowing mu, how do we obtain our hindsight recommendation?
using our definition of mu, we solve an equation to obtain p (price)
p = (mu x gweiPaid_b) / gasUsed_b
this will serve as our label and thus recommendation for how much to pay per unit gas for a transaction to successfully commence
it tells us what price we need to set in order to force mv for that bid to be mu
End of explanation
merged2.columns
# select candidate features for modeling
sel_cols = ['gasLimit_t',
'gasUsed_t',
'newContract',
'blockTime',
'difficulty',
'gasLimit_b',
'gasUsed_b',
'reward',
'size',
'type',
'totalFee',
'amount_gwei',
'gasShare',
'gweiPaid',
'gweiPaid_b',
'gweiShare',
'free_t',
'day',
'hour',
'dayofweek',
'txcnt_second',
'avg_blocktime_6',
'avg_gasUsed_b_6',
'avg_tx_count_6',
'avg_uncle_count_6',
'avg_difficulty_6',
'avg_txcnt_second_6',
'avg_gasUsed_t_6',
'avg_price_6',
'avg_blocktime_60',
'avg_gasUsed_b_60',
'avg_tx_count_60',
'avg_uncle_count_60',
'avg_difficulty_60',
'avg_txcnt_second_60',
'avg_gasUsed_t_60',
'avg_price_60',
'mv']
features = merged2[sel_cols]
features.to_csv('./../data/training.csv')
labels = merged2['p_label2']
labels.to_csv('./../data/labels.csv')
Explanation: Look, our method smoothed the prices out!
Write training set and labels to a csv file for modeling
End of explanation |
9,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this discussion notebook we will cover the material from lecture 12 about ensembles and boosting. For consistency (with the lecture notes), we will use decision trees. However, any other classifier/regressor could be used just as easily.
Step1: Bagging and Random Forests
Bagging is a simple idea that the average prediction over multiple classifiers is better than any single classifier. Each classifier is trained on a random sample of the data (bootstrapping).
Loading the data
We'll use the kaggle competition data so you'll get familiar with loading it up and working with it.
Step2: Single Decision Tree
As a reminder, this is how we create a decision tree classifier with the mltools package.
Note that I create this tree as a random tree. That's because in bagging it is very common to create a bunch of random trees, and then it is called a Random Forest!!!!
Step3: Random Forest
We'll create a set of 10 random trees, each with bootstrapping, and combine them into a random forest.
Step4: Printing the train and validation auc for all classifiers.
Step7: Creating a BaggedTree class
One option to find the AUC of the bagging algorithm is to implement it ourselves. But as programmers, we are lazy (the lazier the better). So instead let's just create a BaggedTree class that inherits from the mltools base classifier.
By implementing the predictSoft method we'll get everything for free
Step8: Note that this class doesn't have a train function. We assume the training was already done and we are getting the learners. As an excersice, try and write the train function yourself.
Step9: Not surprisingly, the validation AUC has improved
Step10: Just for the fun of it, let's see what a single tree will do. We can't take a random tree, we have to take the first -- make sure you understand why.
Step12: Now let's predict using all the boosting we have | Python Code:
# Import all required libraries
from __future__ import division # For python 2.*
import numpy as np
import matplotlib.pyplot as plt
import mltools as ml
np.random.seed(0)
%matplotlib inline
Explanation: In this discussion notebook we will cover the material from lecture 12 about ensembles and boosting. For consistency (with the lecture notes), we will use decision trees. However, any other classifier/regressor could be used just as easily.
End of explanation
X = np.genfromtxt("data/X_train.txt",delimiter=None)
Y = np.genfromtxt("data/Y_train.txt",delimiter=None)
[Xtr,Xva,Ytr,Yva] = ml.splitData(X,Y,0.80)
Xte = np.genfromtxt('data/X_test.txt',delimiter=None)
Xt, Yt = Xtr[:4000], Ytr[:4000]
Explanation: Bagging and Random Forests
Bagging is a simple idea that the average prediction over multiple classifiers is better than any single classifier. Each classifier is trained on a random sample of the data (bootstrapping).
Loading the data
We'll use the kaggle competition data so you'll get familiar with loading it up and working with it.
End of explanation
tree_one = ml.dtree.treeClassify(Xt, Yt, minParent=2**6, maxDepth=25, nFeatures=6) # The nFeatures makes it random
probs = tree_one.predictSoft(Xte)
print("{0:>15}: {1:.4f}".format('Train AUC', tree_one.auc(Xt, Yt)))
print("{0:>15}: {1:.4f}".format('Validation AUC', tree_one.auc(Xva, Yva)))
Explanation: Single Decision Tree
As a reminder, this is how we create a decision tree classifier with the mltools package.
Note that I create this tree as a random tree. That's because in bagging it is very common to create a bunch of random trees, and then it is called a Random Forest!!!!
End of explanation
np.random.seed(0) # Resetting the seed in case you ran other stuff.
n_bags = 10
bags = [] # self.learners
for l in range(n_bags):
    # Each bootstrap sample is the same size as the original data.
Xi, Yi = ml.bootstrapData(Xt, Yt, Xt.shape[0])
# Train the model on that draw
tree = ml.dtree.treeClassify(Xi, Yi, minParent=2**6,maxDepth=25, nFeatures=6)
bags.append(tree)
Explanation: Random Forest
We'll create a set of 10 random trees, each with bootstrapping, and combine them into a random forest.
End of explanation
for l in range(n_bags):
print(l)
print("{0:>15}: {1:.4f}".format('Train AUC', bags[l].auc(Xt, Yt)))
print("{0:>15}: {1:.4f}".format('Validation AUC', bags[l].auc(Xva, Yva)))
Explanation: Printing the train and validation auc for all classifiers.
End of explanation
class BaggedTree(ml.base.classifier):
def __init__(self, learners):
        ''' Constructs a BaggedTree class with a set of learners. '''
self.learners = learners
def predictSoft(self, X):
        ''' Predicts the probabilities with each bagged learner and averages over the results. '''
n_bags = len(self.learners)
preds = [self.learners[l].predictSoft(X) for l in range(n_bags)]
return np.mean(preds, axis=0)
Explanation: Creating a BaggedTree class
One option to find the AUC of the bagging algorithm is to implement it ourselves. But as programmers, we are lazy (the lazier the better). So instead let's just create a BaggedTree class that inherits from the mltools base classifier.
By implementing the predictSoft method we'll get everything for free :)
End of explanation
bt = BaggedTree(bags)
bt.classes = np.unique(Y)
print("{0:>15}: {1:.4f}".format('Train AUC', bt.auc(Xt, Yt)))
print("{0:>15}: {1:.4f}".format('Validation AUC', bt.auc(Xva, Yva)))
Explanation: Note that this class doesn't have a train function. We assume the training was already done and we are getting the learners. As an exercise, try and write the train function yourself.
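As a starting point, here is one possible (illustrative, untested) implementation that reuses the same mltools calls from the bagging loop above; it builds the learners and returns a ready-to-use BaggedTree:

```python
def train_bagged(X, Y, n_bags=10, **tree_args):
    ''' Trains n_bags random trees on bootstrap samples of (X, Y) and returns a BaggedTree. '''
    learners = []
    for l in range(n_bags):
        Xi, Yi = ml.bootstrapData(X, Y, X.shape[0])          # bootstrap sample, same size as X
        learners.append(ml.dtree.treeClassify(Xi, Yi, **tree_args))
    bt = BaggedTree(learners)
    bt.classes = np.unique(Y)
    return bt
```

Calling train_bagged(Xt, Yt, n_bags=10, minParent=2**6, maxDepth=25, nFeatures=6) should reproduce the forest trained above; the same logic could equally live inside a train method on the class.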
End of explanation
path_to_file = './data/poly_data.txt'
data = np.genfromtxt(path_to_file, delimiter='\t') # Read data from file
X, Y = np.atleast_2d(data[:, 0]).T, data[:, 1]
X, Y = ml.shuffleData(X, Y)
Xtr, Xte, Ytr, Yte = ml.splitData(X, Y, 0.75)
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(Xtr, Ytr, s=80, color='blue', alpha=0.75, label='Train')
ax.scatter(Xte, Yte, s=240, marker='*', color='red', alpha=0.75, label='Test')
ax.set_xlim(-0.2, 4.3)
ax.set_ylim(-13, 18)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
# Controlling the size of the legend and the location.
ax.legend(fontsize=30, loc=0)
plt.show()
boosts = []
n_boosts = 20
Ytr_ = np.copy(Ytr) # We're going to copy the data because each boosting iteration is going to mess with it.
for i in range(n_boosts):
tree = ml.dtree.treeRegress(Xtr, Ytr_, maxDepth=1)
boosts.append(tree)
# Now "learning" from out mistakes.
Ytr_ -= tree.predict(Xtr)
Explanation: Not surprisingly, the validation AUC has improved :)
Gradient Boosted Trees
Boosting is kind of the opposite of bagging. In Bagging we have a set of really smart classifiers that we are afraid will overfit the data so we take the average of them to increase the prediction accuracy.
In boosting it's the other way around. We take a bunch of really "stupid" classifiers and make them more complex by learning in a sequence, where each time we learn from the previous classifier's mistakes.
With the decision trees example, in bagging we took really smart random classifiers. In boosting we are going to take one-level trees (a.k.a. stumps).
Loading the data
This is a regression problem so let's use a regression dataset :)
End of explanation
xs = np.linspace(0, 4.2, 200)
xs = np.atleast_2d(xs).T
ys = boosts[0].predict(xs)
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(Xtr, Ytr, s=80, color='blue', alpha=0.75, label='Train')
ax.scatter(Xte, Yte, s=240, marker='*', color='red', alpha=0.75, label='Test')
ax.plot(xs, ys, lw=3, color='black', alpha=0.75, label='Prediction')
ax.set_xlim(-0.2, 4.3)
ax.set_ylim(-13, 18)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
# Controlling the size of the legend and the location.
ax.legend(fontsize=30, loc=0)
plt.show()
Explanation: Just for the fun of it, let's see what a single tree will do. We can't take a random tree, we have to take the first -- make sure you understand why.
End of explanation
def predict(X, boosts):
    ''' Predicts regression values using boosting. '''
preds = [boosts[i].predict(X) for i in range(len(boosts))]
    # Notice that in bagging we return the mean; here we return the sum
return np.sum(preds, axis=0)
xs = np.linspace(0, 4.2, 200)
xs = np.atleast_2d(xs).T
ys = predict(xs, boosts)
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(Xtr, Ytr, s=80, color='blue', alpha=0.75, label='Train')
ax.scatter(Xte, Yte, s=240, marker='*', color='red', alpha=0.75, label='Test')
ax.plot(xs, ys, lw=3, color='black', alpha=0.75, label='Prediction')
ax.set_xlim(-0.2, 4.3)
ax.set_ylim(-13, 18)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
# Controlling the size of the legend and the location.
ax.legend(fontsize=30, loc=0)
plt.show()
Explanation: Now let's predict using all the boosting we have
End of explanation |
9,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generalized Linear Models
Step1: GLM
Step2: Load the data and add a constant to the exogenous (independent) variables
Step3: The dependent variable is N by 2 (Success
Step4: The independent variables include all the other variables described above, as
well as the interaction terms
Step5: Fit and summary
Step6: Quantities of interest
Step7: First differences
Step8: The interquartile first difference for the percentage of low income households in a school district is
Step9: Plots
We extract information that will be used to draw some interesting plots
Step10: Plot yhat vs y
Step11: Plot yhat vs. Pearson residuals
Step12: Histogram of standardized deviance residuals
Step13: QQ Plot of Deviance Residuals
Step14: GLM
Step15: Load the data and add a constant to the exogenous variables
Step16: Model Fit and summary
Step17: GLM
Step18: Fit and summary (artificial data) | Python Code:
%matplotlib inline
import numpy as np
import statsmodels.api as sm
from scipy import stats
from matplotlib import pyplot as plt
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
Explanation: Generalized Linear Models
End of explanation
print(sm.datasets.star98.NOTE)
Explanation: GLM: Binomial response data
Load Star98 data
In this example, we use the Star98 dataset which was taken with permission
from Jeff Gill (2000) Generalized linear models: A unified approach. Codebook
information can be obtained by typing:
End of explanation
data = sm.datasets.star98.load(as_pandas=False)
data.exog = sm.add_constant(data.exog, prepend=False)
Explanation: Load the data and add a constant to the exogenous (independent) variables:
End of explanation
print(data.endog[:5,:])
Explanation: The dependent variable is N by 2 (Success: NABOVE, Failure: NBELOW):
End of explanation
print(data.exog[:2,:])
Explanation: The independent variables include all the other variables described above, as
well as the interaction terms:
End of explanation
glm_binom = sm.GLM(data.endog, data.exog, family=sm.families.Binomial())
res = glm_binom.fit()
print(res.summary())
Explanation: Fit and summary
End of explanation
print('Total number of trials:', data.endog[0].sum())
print('Parameters: ', res.params)
print('T-values: ', res.tvalues)
Explanation: Quantities of interest
End of explanation
means = data.exog.mean(axis=0)
means25 = means.copy()
means25[0] = stats.scoreatpercentile(data.exog[:,0], 25)
means75 = means.copy()
means75[0] = lowinc_75per = stats.scoreatpercentile(data.exog[:,0], 75)
resp_25 = res.predict(means25)
resp_75 = res.predict(means75)
diff = resp_75 - resp_25
Explanation: First differences: We hold all explanatory variables constant at their means and manipulate the percentage of low income households to assess its impact on the response variables:
End of explanation
print("%2.4f%%" % (diff*100))
Explanation: The interquartile first difference for the percentage of low income households in a school district is:
End of explanation
nobs = res.nobs
y = data.endog[:,0]/data.endog.sum(1)
yhat = res.mu
Explanation: Plots
We extract information that will be used to draw some interesting plots:
End of explanation
from statsmodels.graphics.api import abline_plot
fig, ax = plt.subplots()
ax.scatter(yhat, y)
line_fit = sm.OLS(y, sm.add_constant(yhat, prepend=True)).fit()
abline_plot(model_results=line_fit, ax=ax)
ax.set_title('Model Fit Plot')
ax.set_ylabel('Observed values')
ax.set_xlabel('Fitted values');
Explanation: Plot yhat vs y:
End of explanation
fig, ax = plt.subplots()
ax.scatter(yhat, res.resid_pearson)
ax.hlines(0, 0, 1)
ax.set_xlim(0, 1)
ax.set_title('Residual Dependence Plot')
ax.set_ylabel('Pearson Residuals')
ax.set_xlabel('Fitted values')
Explanation: Plot yhat vs. Pearson residuals:
End of explanation
from scipy import stats
fig, ax = plt.subplots()
resid = res.resid_deviance.copy()
resid_std = stats.zscore(resid)
ax.hist(resid_std, bins=25)
ax.set_title('Histogram of standardized deviance residuals');
Explanation: Histogram of standardized deviance residuals:
End of explanation
from statsmodels import graphics
graphics.gofplots.qqplot(resid, line='r')
Explanation: QQ Plot of Deviance Residuals:
End of explanation
print(sm.datasets.scotland.DESCRLONG)
Explanation: GLM: Gamma for proportional count response
Load Scottish Parliament Voting data
In the example above, we printed the NOTE attribute to learn about the
Star98 dataset. statsmodels datasets ships with other useful information. For
example:
End of explanation
data2 = sm.datasets.scotland.load()
data2.exog = sm.add_constant(data2.exog, prepend=False)
print(data2.exog[:5,:])
print(data2.endog[:5])
Explanation: Load the data and add a constant to the exogenous variables:
End of explanation
glm_gamma = sm.GLM(data2.endog, data2.exog, family=sm.families.Gamma(sm.families.links.log()))
glm_results = glm_gamma.fit()
print(glm_results.summary())
Explanation: Model Fit and summary
End of explanation
nobs2 = 100
x = np.arange(nobs2)
np.random.seed(54321)
X = np.column_stack((x,x**2))
X = sm.add_constant(X, prepend=False)
lny = np.exp(-(.03*x + .0001*x**2 - 1.0)) + .001 * np.random.rand(nobs2)
Explanation: GLM: Gaussian distribution with a noncanonical link
Artificial data
End of explanation
gauss_log = sm.GLM(lny, X, family=sm.families.Gaussian(sm.families.links.log()))
gauss_log_results = gauss_log.fit()
print(gauss_log_results.summary())
Explanation: Fit and summary (artificial data)
End of explanation |
9,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
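For context, the generated ES-DOC notebooks answer these first steps with boilerplate cells along the following lines (quoted from memory of the pyesdoc template, so treat the module path, identifiers and call signatures as assumptions rather than a reference):

```python
# Illustrative sketch only -- exact pyesdoc API and IDs are assumptions
from pyesdoc.ipython.model_topic import NotebookOutput

DOC = NotebookOutput('cmip6', 'INSTITUTE-ID', 'MODEL-ID', 'atmos')  # hypothetical placeholders
DOC.set_author("Name Surname", "name@institute.org")
DOC.set_contributor("Name Surname", "name@institute.org")
DOC.set_publication_status("draft")
```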
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Fluorinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Representation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISCCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: AWI
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:37
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
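# EXAMPLE for "Document Authors" above (illustrative sketch only; the name and
# email are placeholders, not actual document authors). Uncomment and edit to use.
# DOC.set_author("Jane Doe", "jane.doe@example.org")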
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
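# EXAMPLE for "Document Publication" above: once the description is complete,
# the same call is assumed to be re-run with the publish flag.
# DOC.set_publication_status(1)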
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
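# EXAMPLE for 1.3 (Model Family) above: illustrative sketch only, with a
# placeholder choice rather than a statement about the documented model.
# A 1.1 ENUM takes exactly one of the listed choices.
# DOC.set_value("AGCM")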
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
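# EXAMPLE for 1.4 (Basic Approximations) above: illustrative sketch only, with
# placeholder choices. For a 1.N ENUM it is assumed that DOC.set_value is
# called once per selected choice.
# DOC.set_value("primitive equations")
# DOC.set_value("hydrostatic")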
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
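# EXAMPLE for 2.4 (Number Of Vertical Levels) above: illustrative placeholder
# only, not the model's actual level count. INTEGER properties are set without
# quotes.
# DOC.set_value(47)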
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
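# EXAMPLE for 2.5 (High Top) above: illustrative sketch only. BOOLEAN properties
# take the Python literals listed above.
# DOC.set_value(True)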
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
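# EXAMPLE for 3.1 (Timestep Dynamics) above: illustrative placeholder only.
# Free-text STRING properties take a plain string.
# DOC.set_value("30 minutes")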
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
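# EXAMPLE for 6.5 (Grid Type) above: illustrative sketch only. If none of the
# listed choices applies, the "Other" entry is assumed to be supplied with the
# bracketed part replaced by a short description (exact convention assumed).
# DOC.set_value("Other: octahedral reduced Gaussian")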
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
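# EXAMPLE for 8.4 (Prognostic Variables) above: illustrative sketch only, with
# placeholder selections. One DOC.set_value call is assumed per prognostic
# variable actually carried by the model.
# DOC.set_value("surface pressure")
# DOC.set_value("wind components")
# DOC.set_value("temperature")
# DOC.set_value("water vapour")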
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
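# EXAMPLE for 16.1 (Greenhouse Gas Complexity) above: illustrative sketch only,
# with placeholder selections rather than the scheme's confirmed gas list. One
# call is assumed per greenhouse gas represented.
# DOC.set_value("CO2")
# DOC.set_value("CH4")
# DOC.set_value("N2O")
# DOC.set_value("O3")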
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
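# EXAMPLE for 30.3 (Closure Order) above: illustrative placeholder only, not the
# scheme's documented closure order.
# DOC.set_value(1)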
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Whether the boundary layer turbulence scheme uses a counter-gradient term
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
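# EXAMPLE for 31.5 (Microphysics) above: optional (cardinality 0.N) properties
# may simply be left unset. If documented, a placeholder entry might look like:
# DOC.set_value("single moment")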
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
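# Example of a valid entry (illustrative only, not a documented model choice):
# DOC.set_value("maximum-random")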
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
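# Example (illustrative only, not a documented model choice): a CloudSat-like 94 GHz cloud radar
# would be entered in Hz as
# DOC.set_value(94.0e9)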
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
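# Example (illustrative only, not a documented model choice): a present-day total solar
# irradiance of about 1361 W m-2 would be entered as
# DOC.set_value(1361.0)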
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
9,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Competition assay analysis and thoughts
Here we will analyze two competition assays, conducted as a rough first pass at understanding how best to design competition assays for the fluorescent kinase inhibitors (bosutinib, bosutinib isomer, erlotinib, and gefitinib) used in other assays in this repository.
The first (1st) part of this will be looking at data collected trying to compete off bosutinib from Src kinase with imatinib (conducted on March 11, 2015). The second (2nd) part of this will be looking at data collected trying to compete off gefitinib from Src kinase with imatinib (conducted on October 30, 2015). The third (3rd) part will be some simple modeling to see if these experiments follow our expectations and how we can better design the experiments to get better results from the competition assay. Then in a fourth (4th) section we'll work a little on a PYMC model to get affinities from the competition assay.
Step1: Bosutinib Assay
The first attempt at a Bosutinib-Imatinib competition assay was on March 11, 2015. The full description of the assay can be found here.
In short (and very similar to what is described in the lab-protocols repository), 100 uL of 0.5 $\mu$M Src with a titration of Bosutinib up to 20 $\mu$M was prepared in every other row. In one of the two plates, 10 $\mu$M Imatinib was added to the whole plate to try to compete off the Bosutinib.
Importing and plotting data the clunky way for transparency; this will change to use platereader.py once it is slightly nicer.
Step2: Gefitinib Assay
The first attempt at a Gefitinib-Imatinib competition assay was on October 30, 2015. The full description of the assay can be found here.
In short (and very similar to what is described in the lab-protocols repository), 100 uL of 0.5 $\mu$M Src with a titration of Gefitinib up to 20 $\mu$M was prepared in every other row. In one of the two plates, 10 $\mu$M Imatinib was added to the whole plate to try to compete off the Gefitinib. Note that the documentation here could be better, which could be why this data doesn't look particularly great.
Step3: Modeled Data
So now let's look at what our expected data might look like. Here we are looking at inhibitor affinities for Src.
Some initial placeholder data from here again
Step4: From our assay setup we know the Src concentration is 0.5 $\mu$M.
Step6: First let's just plot our two component binding for Bosutinib and Gefitinib.
Step8: Now let's see how we would expect Imatinib to affect this.
From our assay setup we know the Imatinib concentration is 10 $\mu$M.
Step9: HMMM.
So I was right to think that Gefitinib should work despite the fact that Bosutinib didn't, but...
Now let's try modeling new experiments with Abl before we actually do them!
Step10: Looks promising!
Let's check out our new data set based on this. | Python Code:
#import needed libraries
import re
import os
from lxml import etree
import pandas as pd
import pymc
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: Competition assay analysis and thoughts
Here we will analyze two competition assays, conducted as a rough first pass at understanding how best to design competition assays for the fluorescent kinase inhibitors (bosutinib, bosutinib isomer, erlotinib, and gefitinib) used in other assays in this repository.
The first (1st) part of this will be looking at data collected trying to compete off bosutinib from Src kinase with imatinib (conducted on March 11, 2015). The second (2nd) part of this will be looking at data collected trying to compete off gefitinib from Src kinase with imatinib (conducted on October 30, 2015). The third (3rd) part will be some simple modeling to see if these experiments follow our expectations and how we can better design the experiments to get better results from the competition assay. Then in a fourth (4th) section we'll work a little on a PYMC model to get affinities from the competition assay.
End of explanation
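# The fourth (4th) part mentioned above (a PyMC model to get affinities from the competition
# assay) is not fleshed out in this notebook, so here is only a minimal, hypothetical sketch of
# what such a model could look like. It uses PyMC 2.x syntax, synthetic stand-in data, and
# made-up priors and noise levels; none of the numbers below are fit results.
import numpy as np
import pymc

Ptot_sketch = 0.5e-6  # M, protein concentration from the assay setup
Ltot_sketch = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)])  # M, same dilution series
Kd_true = 300e-9  # M, invented value used only to generate fake data
PL_true = 0.5*((Ptot_sketch + Ltot_sketch + Kd_true) -
               np.sqrt((Ptot_sketch + Ltot_sketch + Kd_true)**2 - 4*Ptot_sketch*Ltot_sketch))
observed = PL_true + 1e-8*np.random.randn(len(Ltot_sketch))  # fake "signal" with Gaussian noise

# Flat prior on log10(Kd) between 1 nM and 1 mM (an assumption, not a recommendation)
log_Kd = pymc.Uniform('log_Kd', lower=-9.0, upper=-3.0)

@pymc.deterministic
def PL_model(log_Kd=log_Kd):
    # two-component binding model evaluated at the sampled Kd
    Kd = 10.0**log_Kd
    return 0.5*((Ptot_sketch + Ltot_sketch + Kd) -
                np.sqrt((Ptot_sketch + Ltot_sketch + Kd)**2 - 4*Ptot_sketch*Ltot_sketch))

measurement = pymc.Normal('measurement', mu=PL_model, tau=1.0/(1e-8)**2,
                          value=observed, observed=True)

sketch_mcmc = pymc.MCMC([log_Kd, PL_model, measurement])
sketch_mcmc.sample(iter=20000, burn=5000)
print "posterior median log10(Kd):", np.median(sketch_mcmc.trace('log_Kd')[:])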
def get_wells_from_section(path):
reads = path.xpath("*/Well")
wellIDs = [read.attrib['Pos'] for read in reads]
data = [(float(s.text), r.attrib['Pos'])
for r in reads
for s in r]
datalist = {
well : value
for (value, well) in data
}
welllist = [
[
datalist[chr(64 + row) + str(col)]
if chr(64 + row) + str(col) in datalist else None
for row in range(1,9)
]
for col in range(1,13)
]
return welllist
file_BOS= "data/2015-03-11 18-35-16_plate_1.xml"
file_name = os.path.splitext(file_BOS)[0]
root = etree.parse(file_BOS)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_BOS + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
Bos_dataframe = pd.DataFrame(welllist, columns = ['A - Src','B - Buffer','C - Src','D - Buffer', 'E - Src','F - Buffer','G - Src','H - Buffer'])
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
Bos_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
Bos_dataframe
file_BOS_IMA= "data/Ima_WIP_SMH_SrcBos_Extend_013015_mdfx_20150311_18.xml"
file_name = os.path.splitext(file_BOS_IMA)[0]
root = etree.parse(file_BOS_IMA)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_BOS_IMA + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
BosIma_dataframe = pd.DataFrame(welllist, columns = ['A - Src','B - Buffer','C - Src','D - Buffer', 'E - Src','F - Buffer','G - Src','H - Buffer'])
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
BosIma_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
plt.plot(BosIma_dataframe[:].values, 'r');
plt.plot(Bos_dataframe[:].values, 'k');
plt.text(8,450,'Bosutinib',fontsize=15)
plt.text(8,420,'Imatinib + Bosutinib',fontsize=15,color='red')
Explanation: Bosutinib Assay
The first attempt at a Bosutinib-Imatinib competition assay was on March 11, 2015. The full description of the assay can be found here.
In short (and very similar to what is described in the lab-protocols repository), 100 uL of 0.5 $\mu$M Src with a titration of Bosutinib up to 20 $\mu$M was prepared in every other row. In one of the two plates, 10 $\mu$M Imatinib was added to the whole plate to try to compete off the Bosutinib.
Importing and plotting data the clunky way for transparency; this will change to use platereader.py once it is slightly nicer.
End of explanation
file_GEF = "data/Gef_2015-10-30 17-55-48_plate_1.xml"
file_name = os.path.splitext(file_GEF)[0]
root = etree.parse(file_GEF)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_GEF + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
Gef_dataframe = pd.DataFrame(welllist, columns = ['A - Src','B - Buffer','C - Src','D - Buffer', 'E - Src','F - Buffer','G - Src','H - Buffer'])
Gef_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
file_GEF_IMA= "data/GefIma_2015-10-30 17-51-13_plate_1.xml"
file_name = os.path.splitext(file_GEF_IMA)[0]
root = etree.parse(file_GEF_IMA)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_GEF_IMA + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
GefIma_dataframe = pd.DataFrame(welllist, columns = ['A - Src','B - Buffer','C - Src','D - Buffer', 'E - Src','F - Buffer','G - Src','H - Buffer'])
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
GefIma_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
plt.plot(GefIma_dataframe[:].values, 'r');
plt.plot(Gef_dataframe[:].values, 'k');
plt.text(8,230,'Gefitinib',fontsize=15)
plt.text(8,210,'Imatinib + Gefitinib',fontsize=15,color='red')
Explanation: Gefitinib Assay
The first attempt at a Gefitinib-Imatinib competition assay was on October 30, 2015. The full description of the assay can be found here.
In short (and very similar to what is described in the lab-protocols repository), 100 uL of 0.5 $\mu$M Src with a titration of Gefitinib up to 20 $\mu$M was prepared in every other row. In one of the two plates, 10 $\mu$M Imatinib was added to the whole plate to try to compete off the Gefitinib. Note that the documentation here could be better, which could be why this data doesn't look particularly great.
End of explanation
Kd_Bos = 1.0e-9 # M
Kd_Gef = 3800e-9 # M
Kd_Ima = 3000e-9 # M
Explanation: Modeled Data
So now let's look at what our expected data might look like. Here we are looking at inhibitor affinities for Src.
Some initial placeholder data from here again:
http://www.guidetopharmacology.org/GRAC/LigandScreenDisplayForward?ligandId=5710&screenId=2
End of explanation
Ptot = 0.5e-6 # M
Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M
Explanation: From our assay setup we know the Src concentration is 0.5 $\mu$M.
End of explanation
# Now we can use this to define a function that gives us PL from Kd, Ptot, and Ltot.
def two_component_binding(Kd, Ptot, Ltot):
    """
    Parameters
    ----------
    Kd : float
        Dissociation constant
    Ptot : float
        Total protein concentration
    Ltot : float
        Total ligand concentration

    Returns
    -------
    P : float
        Free protein concentration
    L : float
        Free ligand concentration
    PL : float
        Complex concentration
    """
PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (uM)
P = Ptot - PL; # free protein concentration in sample cell after n injections (uM)
L = Ltot - PL; # free ligand concentration in sample cell after n injections (uM)
return [P, L, PL]
[Lb, Pb, PLb] = two_component_binding(Kd_Bos, Ptot, Ltot)
[Lg, Pg, PLg] = two_component_binding(Kd_Gef, Ptot, Ltot)
# y will be complex concentration
# x will be total ligand concentration
Bos, = plt.semilogx(Ltot,PLb,'green', label='Bosutinib')
Gef, = plt.semilogx(Ltot,PLg,'violet', label = 'Gefitinib')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.ylim(0,6e-7)
plt.legend(loc=3);
Explanation: First let's just plot our two component binding for Bosutinib and Gefitinib.
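As a quick sanity check on the function above (nothing new, just the standard two-component binding result): with $K_d = [P][L]/[PL]$ and the conservation relations $[P]_{tot} = [P] + [PL]$ and $[L]_{tot} = [L] + [PL]$, the complex concentration is the physical root of a quadratic,
$$[PL] = \frac{1}{2}\left[\left([P]_{tot}+[L]_{tot}+K_d\right) - \sqrt{\left([P]_{tot}+[L]_{tot}+K_d\right)^2 - 4\,[P]_{tot}[L]_{tot}}\right],$$
which is exactly the expression coded in two_component_binding.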
End of explanation
Lima = 10e-6 # M
#Competitive binding function
def three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A):
    """
    Parameters
    ----------
    Ptot : float
        Total protein concentration
    Ltot : float
        Total tracer (fluorescent) ligand concentration
    Kd_L : float
        Dissociation constant of the tracer ligand
    Atot : float
        Total competitive ligand concentration
    Kd_A : float
        Dissociation constant of the competitive ligand

    Returns
    -------
    P : float
        Free protein concentration
    L : float
        Free tracer ligand concentration
    A : float
        Free competitive ligand concentration
    PL : float
        Complex concentration
    Kd_L_app : float
        Apparent dissociation constant of L in the presence of A

    Usage
    -----
    [P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
    """
Kd_L_app = Kd_L*(1+Atot/Kd_A)
PL = 0.5 * ((Ptot + Ltot + Kd_L_app) - np.sqrt((Ptot + Ltot + Kd_L_app)**2 - 4*Ptot*Ltot)) # complex concentration (uM)
P = Ptot - PL; # free protein concentration in sample cell after n injections (uM)
L = Ltot - PL; # free tracer ligand concentration in sample cell after n injections (uM)
A = Atot - PL; # free competitive ligand concentration in sample cell after n injections (uM)
return [P, L, A, PL, Kd_L_app]
[Pbi, Lbi, Abi, PLbi, Kd_bima] = three_component_competitive_binding(Ptot, Ltot, Kd_Bos, Lima, Kd_Ima)
[Pgi, Lgi, Agi, PLgi, Kd_gima] = three_component_competitive_binding(Ptot, Ltot, Kd_Gef, Lima, Kd_Ima)
# y will be complex concentration
# x will be total ligand concentration
plt.title('Src competition assay')
Bos, = plt.semilogx(Ltot,PLb,'green', label='Bosutinib')
Bos_Ima, = plt.semilogx(Ltot,PLbi,'cyan', label='Bosutinib + Ima')
Gef, = plt.semilogx(Ltot,PLg,'violet', label = 'Gefitinib')
Gef_Ima, = plt.semilogx(Ltot,PLgi,'pink', label = 'Gefitinib + Ima')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.ylim(0,6e-7)
plt.legend(loc=3);
Explanation: Now let's see how we would expect Imatinib to affect this.
From our assay setup we know the Imatinib concentration is 10 $\mu$M.
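The competition function above folds imatinib into an apparent dissociation constant, $K_d^{app} = K_{d,L}\left(1 + [A]_{tot}/K_{d,A}\right)$, a standard competitive-binding approximation that treats the free imatinib concentration as roughly equal to $[A]_{tot}$. With 10 $\mu$M imatinib and $K_{d,A} \approx 3$ $\mu$M for Src, this only inflates the bosutinib $K_d$ by a factor of about 4.3 (to roughly 4 nM), still far below the 0.5 $\mu$M Src concentration, so the modeled bosutinib curves with and without imatinib should lie almost on top of each other.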
End of explanation
#Using expected Kd's from same website as above
Kd_Bos_Abl = 0.1e-9 # M
Kd_Gef_Abl = 480e-9 # M
Kd_Ima_Abl = 21.0e-9 # M
[Lb_Abl, Pb_Abl, PLb_Abl] = two_component_binding(Kd_Bos_Abl, Ptot, Ltot)
[Lg_Abl, Pg_Abl, PLg_Abl] = two_component_binding(Kd_Gef_Abl, Ptot, Ltot)
[Pbi_Abl, Lbi_Abl, Abi_Abl, PLbi_Abl, Kd_bima_Abl] = three_component_competitive_binding(Ptot, Ltot, Kd_Bos_Abl, Lima, Kd_Ima_Abl)
[Pgi_Abl, Lgi_Abl, Agi_Abl, PLgi_Abl, Kd_gima_Abl] = three_component_competitive_binding(Ptot, Ltot, Kd_Gef_Abl, Lima, Kd_Ima_Abl)
# y will be complex concentration
# x will be total ligand concentration
Bos, = plt.semilogx(Ltot,PLb_Abl,'green', label='Bosutinib')
Bos_Ima, = plt.semilogx(Ltot,PLbi_Abl,'cyan', label='Bosutinib + Ima')
Gef, = plt.semilogx(Ltot,PLg_Abl,'violet', label = 'Gefitinib')
Gef_Ima, = plt.semilogx(Ltot,PLgi_Abl,'pink', label = 'Gefitinib + Ima')
plt.title('Abl competition assay')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.ylim(0,6e-7)
plt.legend(loc=3);
Explanation: HMMM.
So I was right to think that Gefitinib should work despite the fact that Bosutinib didn't, but...
Now let's try modeling new experiments with Abl before we actually do them!
End of explanation
def get_wells_from_section(path):
reads = path.xpath("*/Well")
wellIDs = [read.attrib['Pos'] for read in reads]
data = [(s.text, r.attrib['Pos'])
for r in reads
for s in r]
datalist = {
well : value
for (value, well) in data
}
welllist = [
[
datalist[chr(64 + row) + str(col)]
if chr(64 + row) + str(col) in datalist else None
for row in range(1,9)
]
for col in range(1,13)
]
return welllist
file_ABL_GEF= "data/Abl Gef gain 120 bw1020 2016-01-19 15-59-53_plate_1.xml"
file_name = os.path.splitext(file_ABL_GEF)[0]
root = etree.parse(file_ABL_GEF)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_ABL_GEF + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
AblGef_dataframe = pd.DataFrame(welllist, columns = ['A - Abl','B - Buffer','C - Abl','D - Buffer', 'E - Abl','F - Buffer','G - Abl','H - Buffer'])
#AN ERROR FOR 'OVERS' COMES UP UNLESS THE NEXT LINE IS HERE
#THE MAX VALUE IS TAKEN FROM THE MAX VALUE FOR THE ABL GEF IMA DATA
dataframe_rep = AblGef_dataframe.replace({'OVER':'64060.0'})
AblGef_dataframe
#dataframe_rep[['fluorescence']] = dataframe_rep[['fluorescence']].astype('float')
dataframe_rep = dataframe_rep.astype('float')
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
dataframe_rep.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5);
file_ABL_GEF_IMA= "data/Abl Gef Ima gain 120 bw1020 2016-01-19 16-22-45_plate_1.xml"
file_name = os.path.splitext(file_ABL_GEF_IMA)[0]
root = etree.parse(file_ABL_GEF_IMA)
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_ABL_GEF_IMA + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Just going to work with topread for now
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
AblGefIma_dataframe = pd.DataFrame(welllist, columns = ['A - Abl','B - Buffer','C - Abl','D - Buffer', 'E - Abl','F - Buffer','G - Abl','H - Buffer'])
sns.set_palette("Paired", 10)
sns.set_context("notebook", rc={"lines.linewidth": 2.5})
AblGefIma_dataframe = AblGefIma_dataframe.astype('float')
AblGefIma_dataframe.plot(figsize=(6, 4), title=file_name)
plt.xlim(-0.5,11.5)
AblGefIma_dataframe.values.max()
plt.plot(AblGefIma_dataframe[:].values, 'r');
plt.plot(AblGef_dataframe[:].values, 'k');
plt.text(8,60000,'Gefitinib (ABL)',fontsize=15)
plt.text(8,55000,'Imatinib + Gefitinib (ABL)',fontsize=15,color='red')
plt.savefig('Abl_Gef_Ima_Jan2016_repeat.png')
Explanation: Looks promising!
Let's check out our new data set based on this.
End of explanation |
9,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this exercise we will enter a year and print it as a Roman numeral.
Step1: The idea is to keep shrinking the year by the largest possible Roman numeral value; however, we realized we had problems with the "9"s, so we added 900, 400, 90, 40, 9 and 4.
Step2: We saw that the if chain is very long; we can use a list or, better, "tuples". | Python Code:
# we assume a real year is entered, which is why no validation is added
año = int(input("Enter your year: "))
añooriginal = año
Explanation: In this exercise we will enter a year and print it as a Roman numeral.
End of explanation
resultado = ""
while año != 0:
if año >= 1000:
veces = año // 1000
resultado += "M" * veces
año %= 1000
elif año >= 900:
año -= 900
resultado += "CM"
elif año >= 500:
año -= 500
resultado += "D"
elif año >= 400:
año -= 400
resultado += "CD"
elif año >= 100:
veces = año // 100
resultado += "C" * veces
año %= 100
elif año >= 90:
año -= 90
resultado += "XC"
elif año >= 50:
año -= 50
resultado += "L"
elif año >= 40:
año -= 40
resultado += "XL"
elif año >= 10:
veces = año // 10
año -= 10
resultado += "X"
elif año >= 9:
año -= 9
resultado += "IX"
elif año >= 5:
año -= 5
resultado += "V"
elif año >= 4:
año -= 4
resultado += "IV"
else:
resultado += "I" * año
año = 0
print(resultado)
Explanation: The idea is to keep shrinking the year by the largest possible Roman numeral value; however, we realized we had problems with the "9"s, so we added 900, 400, 90, 40, 9 and 4.
End of explanation
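# A worked example of the greedy idea above (illustrative only, not part of the original exercise):
# 1994 = 1000 + 900 + 90 + 4  ->  "M" + "CM" + "XC" + "IV"  ->  "MCMXCIV".
# Without the 900/400/90/40/9/4 entries the loop would emit "MDCCCCLXXXXIIII" instead.
print("M" + "CM" + "XC" + "IV")  # MCMXCIV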
valores = (1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1)
letras = ("M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I")
try:
    valores[3] = 123
except TypeError:
    # we realize that the tuple cannot be modified, unlike the list...
    print("tuples are immutable, unlike lists")
año = añooriginal
res = ""
while año != 0:
for i in range(len(valores)):
if valores[i] <= año:
res += letras[i]
año -= valores[i]
break
print(res)
Explanation: We saw that the if chain is very long; we can use a list or, better, "tuples".
End of explanation |
9,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
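# Example (hypothetical author, replace with real details):
# DOC.set_author("Jane Doe", "jane.doe@example.org")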
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
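# Example of a valid entry (illustrative only, not a documented model choice):
# DOC.set_value("whole atmosphere")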
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Are the split operators called in an alternating order on successive timesteps?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
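# Example (hypothetical choice from the list above, for illustration only):
# DOC.set_value("Rosenbrock")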
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
9,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use reindex for adding missing columns to a dataframe
Step1: Using reindex to add missing columns to a dataframe
https
Step2: This can also be used to get a subset of the columns
Step3: Which is probably better done this way | Python Code:
import pandas as pd
df = pd.DataFrame([
{
'a': 1,
'b': 2,
'd': 4
}
])
df
Explanation: Use reindex for adding missing columns to a dataframe
End of explanation
columns = ['a', 'b', 'c', 'd']
df.reindex(columns=columns, fill_value=0)
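# reindex works on rows as well -- hypothetical index labels, for illustration only:
df.reindex(index=[0, 1, 2], fill_value=0)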
Explanation: Using reindex to add missing columns to a dataframe
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
End of explanation
columns_subset = columns[:2]
columns_subset
df.reindex(columns=columns_subset, fill_value=0)
Explanation: This can also be used to get a subset of the columns
End of explanation
df[columns_subset]
Explanation: Which is probably better done this way
End of explanation |
9,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 11
Step1: To add an item to the dictionary, use square brackets like a list
Step2: Note that, in Python versions before 3.7, insertion order isn't guaranteed to be preserved in a dictionary (unlike a list)
The values can be retrieved using the same notation
Step3: The Python documentation for dictionaries is quite extensive
Step4: Dictionary as a set of counters
Often there are many alternatives for implementing a computation
Some are better than others (as we saw with the Fibonacci series)
If you need to count how many times a letter appears in a string, you could
Step5: Reverse lookup
Dictionaries are designed to find a value given a key
This is called a lookup
What if you want to do the reverse? (Note
Step6: Why is this approach inefficient?
Note the raise keyword
It causes an exception (specifically a ValueError) if the value isn't in the dictionary
The raise statement also takes an optional argument that is a detailed error message
Dictionaries and lists
Lists can appear as values in a dictionary
For example, people often have the same birthday
Step7: Sometimes you may want to invert a dictionary
This means that you turn the keys into values and values into keys
However, remember that keys are unique, but values aren't necessarily unique
This means that the original dictionary can have multiple keys with the same value
The inversion process is therefore more complex than simply switching things around. | Python Code:
birthdays = dict()
print( birthdays )
Explanation: Chapter 11: Dictionaries
Contents
- A dictionary is a mapping
- Dictionary as a set of counters
- Looping and dictionaries
- Reverse lookup
- Dictionaries and lists
- Global variables
- Debugging
- Exercises
This notebook is based on "Think Python, 2Ed" by Allen B. Downey <br>
https://greenteapress.com/wp/think-python-2e/
A dictionary is a mapping
A dictionary (or map) is a mapping between keys and values
Real-world examples are dictionaries of words and phone-books
A key-value pair is an association between a key (e.g., a word) and a value (e.g., a definition)
The function dict creates a new dictionary object with no items
For example, we can create a dictionary that maps birthdays to a person's name
End of explanation
birthdays['0704'] = 'Steve'
birthdays['0529'] = 'Tony'
print( birthdays )
Explanation: To add an item to the dictionary, use square brackets like a list
End of explanation
print( birthdays['0529'] )
Explanation: Note that, in Python versions before 3.7, insertion order isn't guaranteed to be preserved in a dictionary (unlike a list); from Python 3.7 onwards, dictionaries do keep insertion order
The values can be retrieved using the same notation
End of explanation
# Get the number of key-value pairs
print( len( birthdays ) )
# Get the values in the dictionary
print( birthdays.values() )
# Get the keys in the dictionary
print( birthdays.keys() )
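# The get method returns a default instead of raising a KeyError for a missing key
print( birthdays.get( '1225', 'no birthday recorded' ) )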
Explanation: The Python documentation for dictionaries is quite extensive: https://docs.python.org/3/tutorial/datastructures.html#dictionaries
Dictionaries have a number of methods available
End of explanation
for a_date in birthdays:
print( a_date )
Explanation: Dictionary as a set of counters
Often there are many alternatives for implementing a computation
Some are better than others (as we saw with the Fibonacci series)
If you need to count how many times a letter appears in a string, you could:
Create 26 variables with each holding the number of times a letter occurs
Create a list with 26 elements with each index corresponding to a letter
Create a dictionary with the letters as keys and the values as the counters
What are the pros and cons with each approach?
Looping and dictionaries
If you use a dictionary in a for statement, the loop traverses the dictionary's keys
Remember that there is no implied ordering of keys in a dictionary
End of explanation
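# A minimal sketch of the dictionary-as-counter approach described above:
# build a histogram that maps each letter to the number of times it occurs
def histogram( a_string ):
    counts = dict()
    for letter in a_string:
        counts[letter] = counts.get( letter, 0 ) + 1
    return counts

print( histogram( 'brontosaurus' ) )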
def reverse_lookup( a_dict, value ):
for key in a_dict:
if a_dict[key] == value:
return key
raise ValueError
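# Example usage -- returns the key ('0529') whose value is 'Tony'
print( reverse_lookup( birthdays, 'Tony' ) )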
Explanation: Reverse lookup
Dictionaries are designed to find a value given a key
This is called a lookup
What if you want to do the reverse? (Note: this is referred to as a reverse lookup)
For example, what if you want to lookup a word by its definition?
Unfortunately there is no simple function to do that
Why might that be?
The book presents a simple function that implements this functionality
End of explanation
birthdays['0704'] = [ 'Steve', 'Nick' ]
Explanation: Why is this approach inefficient?
Note the raise keyword
It causes an exception (specifically a ValueError) if the value isn't in the dictionary
The raise statement also takes an optional argument that is a detailed error message
Dictionaries and lists
Lists can appear as values in a dictionary
For example, people often have the same birthday
End of explanation
def invert_dict( a_dict ):
inverted_dict = dict()
for key in a_dict:
value = a_dict[key]
if value not in inverted_dict:
inverted_dict[value] = [ key ]
else:
inverted_dict[value].append( key )
return inverted_dict
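# Example usage with a small dictionary whose values repeat
# (note that the values must be hashable, because they become keys)
print( invert_dict( {'a': 1, 'b': 2, 'c': 1} ) )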
Explanation: Sometimes you may want to invert a dictionary
This means that you turn the keys into values and values into keys
However, remember that keys are unique, but values aren't necessarily unique
This means that the original dictionary can have multiple keys with the same value
The inversion process is therefore more complex than simply switching things around.
End of explanation |
9,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
APPKEY is the Application Key for a (free) http
Step1: First set up the necessary connections to drive a LED (see 102 - LEDs - Drive LEDS with the Raspberry Pi GPIO pins for an illustration; we'll be using PIN 18, but above all make sure you do not forget the resistor!)
Step2: Create a function that defines what happens when a message comes in.
Step3: And eventually subscribe to the "doorbell" channel to read out messages
Step4: Et voilá, send a message with the send script or by means of the realtime.co console. | Python Code:
APPKEY = "******"
Explanation: APPKEY is the Application Key for a (free) http://www.realtime.co/ "Realtime Messaging Free" subscription.
See "104 - Remote deurbel - Een cloud API gebruiken om berichten te sturen" voor meer gedetailleerde info.
End of explanation
import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
PIN = 18
GPIO.setup(PIN, GPIO.OUT)
def flash_led():
GPIO.output(PIN, 1)
time.sleep(0.5)
GPIO.output(PIN, 0)
Explanation: First set up the necessary connections to drive a LED (see 102 - LEDs - Drive LEDS with the Raspberry Pi GPIO pins for an illustration; we'll be using PIN 18, but above all make sure you do not forget the resistor!)
End of explanation
def on_message(sender, channel, message):
print("Received a message via {}: {}".format(channel, message))
flash_led()
Explanation: Create a function that defines what happens when a message comes in.
End of explanation
import ortc
oc = ortc.OrtcClient()
oc.cluster_url = "http://ortc-developers.realtime.co/server/2.1"
def on_connected(sender):
print('Connected')
oc.subscribe('doorbell', True, on_message)
oc.set_on_connected_callback(on_connected)
oc.connect(APPKEY)
Explanation: And eventually subscribe to the "doorbell" channel to read out messages
End of explanation
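# To test the doorbell, publish a message to the same channel from another process.
# Minimal sender sketch -- ASSUMPTION: the ortc client exposes a send(channel, message)
# method, as in other ORTC SDKs; check the SDK documentation before relying on this.
# sender = ortc.OrtcClient()
# sender.cluster_url = "http://ortc-developers.realtime.co/server/2.1"
# sender.set_on_connected_callback(lambda s: sender.send('doorbell', 'ding dong'))
# sender.connect(APPKEY)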
GPIO.cleanup()
Explanation: Et voilà, send a message with the send script or by means of the realtime.co console.
End of explanation |
9,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Decision Tree of Observable Operators
Part 5
Step1: ... by slicing slice
Step2: ... that is, only the first item first
Step3: ...that is, only the first items take, take_with_time
Step4: ... that is, only the last item last, last_or_default, take_last
Step5: ... that is, only item n element_at, element_at_or_default
Step6: ... that is, only those items after the first items
... that is, after the first n items skip, skip_with_time
Step7: ... that is, until one of those items matches a predicate skip_while
Step8: ... that is, after a second Observable emits an item skip_until, skip_until_with_time
Step9: ... that is, those items except the last items
... that is, except the last n items skip_last, skip_last_with_time
Step10: ... that is, until one of those items matches a predicate take_while
Step11: ...that is, except items emitted during a period of time before the source completes skip_last, skip_last_with_time
Step12: ...that is, except items emitted after a second Observable emits an item take_until, take_until_with_time
Step13: ... by sampling the Observable periodically sample
Step14: ... by only emitting items that are not followed by other items within some duration debounce
Step15: ... by suppressing items that are duplicates of already-emitted items distinct
Step16: ... if they immediately follow the item they are duplicates of distinct_until_changed
Step17: ... by delaying my subscription to it for some time after it begins emitting items delay_subscription
Step18: I want to reemit items from an Observable only on condition that it was the first of a collection of Observables to emit an item amb | Python Code:
reset_start_time(O.filter) # alias: where
d = subs(O.range(0, 5).filter(lambda x, i: x % 2 == 0))
Explanation: A Decision Tree of Observable Operators
Part 5: Consolidating Streams
source: http://reactivex.io/documentation/operators.html#tree.
(transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, axiros)
This tree can help you find the ReactiveX Observable operator you’re looking for.
See Part 1 for Usage and Output Instructions.
We also require acquaintance with the marble diagrams feature of RxPy.
<h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
I want to reemit only certain items from an Observable
... by filtering out those that do not match some predicate filter/where
End of explanation
reset_start_time(O.slice)
s = marble_stream('r-e-a-c-t-i-v-e-|')
d = subs(s.slice(5, 10))
sleep(1)
# start stop step:
d = subs(s.slice(1, -1, 2))
Explanation: ... by slicing slice
End of explanation
rst(O.first)
# match on index:
d = subs(O.from_((1, 2 ,3)).first(lambda x, i: i==1))
Explanation: ... that is, only the first item first
End of explanation
rst(O.take)
d = subs(O.from_((1, 2, 3, 4)).take(2))
rst(O.take_with_time)
d = subs(marble_stream('1-2-3-4|').take_with_time(200))
Explanation: ...that is, only the first items take, take_with_time
End of explanation
rst(O.last, title=True)
d = subs(O.from_((1, 2, 3)).last(lambda x: x < 3))
rst(O.last_or_default, title=True)
d = subs(O.from_((1, 2, 3)).last_or_default(lambda x: x > 3))
d = subs(O.from_((1, 2, 3)).last_or_default(lambda x: x > 3, '42'))
rst(O.take_last, title=True)
d = subs(O.from_((1, 2, 3, 4)).take_last(2))
Explanation: ... that is, only the last item last, last_or_default, take_last
End of explanation
rst(O.element_at)
d = subs(O.from_((1, 2, 3, 4)).element_at(2))
rst(O.element_at_or_default)
d = subs(O.from_((1, 2, 3, 4)).element_at_or_default(6, '42'))
Explanation: ... that is, only item n element_at, element_at_or_default
End of explanation
rst(O.skip, title=True)
d = subs(O.range(0, 5).skip(2))
rst(O.skip_with_time, title=True)
d = subs(marble_stream('1-2-3-4-5-6').skip_with_time(200))
Explanation: ... that is, only those items after the first items
... that is, after the first n items skip, skip_with_time
End of explanation
rst(O.skip_while)
# skipping only AS LONG AS the function is true. If already false at the beginning -> all flushed:
d = subs(O.from_((1, 2, 3, 4, 5, 6)).skip_while(lambda x: x in (1, 2)))
Explanation: ... that is, until one of those items matches a predicate skip_while
End of explanation
rst(O.skip_until)
s1 = marble_stream('1-2-3-4-5|')
s2 = marble_stream('--2------|')
d = subs(s1.skip_until(s2))
sleep(0.5)
rst(O.skip_until_with_time)
d = subs(s1.skip_until_with_time(300))
Explanation: ... that is, after a second Observable emits an item skip_until, skip_until_with_time
End of explanation
rst(O.skip_last)
s1 = marble_stream('1-2-3-4-5|')
s2 = marble_stream('--2------|')
d = subs(s1.skip_last(2))
sleep(0.5)
rst(O.skip_last_with_time)
d = subs(s1.skip_last_with_time(300))
Explanation: ... that is, those items except the last items
... that is, except the last n items skip_last, skip_last_with_time
End of explanation
rst(O.take_while)
d = subs(O.from_((1, 2, 3)).take_while(lambda x: x<3))
Explanation: ... that is, until one of those items matches a predicate take_while
End of explanation
# (see above)
Explanation: ...that is, except items emitted during a period of time before the source completes skip_last, skip_last_with_time
End of explanation
rst(O.take_until)
s1 = marble_stream('1-2-3-4-5|')
s2 = marble_stream('--2------|')
d = subs(s1.take_until(s2))
sleep(0.5)
rst(O.take_until_with_time)
d = subs(s1.take_until_with_time(300))
Explanation: ...that is, except items emitted after a second Observable emits an item take_until, take_until_with_time
End of explanation
rst(O.sample)
xs = marble_stream('1-2-3-4-5-6-7-8-9-1-2-3-4-5-6-E|')
sampler =marble_stream('---1---1----------1------------|')
d = subs(xs.sample(300))
sleep(2)
d = subs(xs.sample(sampler=sampler))
Explanation: ... by sampling the Observable periodically sample
End of explanation
rst(O.debounce)
s = marble_stream('-12-3-4--5--6---7---8----9----a')
print('flushing a value every >= 300ms')
d = subs(s.debounce(300))
Explanation: ... by only emitting items that are not followed by other items within some duration debounce
End of explanation
rst(O.distinct)
s = O.from_((1, 2, 1, 1, 3))
d = subs(s.distinct(lambda x: x*2))
d = subs(s.distinct(lambda x: x, lambda a, b: a==2))
Explanation: ... by suppressing items that are duplicates of already-emitted items distinct
End of explanation
rst(O.distinct_until_changed)
s = O.from_((1, 2, 1, 1, 3))
d = subs(s.distinct_until_changed(lambda x: x*2))
d = subs(s.distinct_until_changed(lambda x: x, lambda a, b: a==2))
Explanation: ... if they immediately follow the item they are duplicates of distinct_until_changed
End of explanation
rst(O.delay)
header("note the absolute time of emissions:")
d = subs(O.range(0, 10).delay(1000))
Explanation: ... by delaying my subscription to it for some time after it begins emitting items delay_subscription
End of explanation
rst(O.amb)
s1 = O.range(0, 5).delay(100)
s2 = O.range(10, 5)
d = subs(O.amb(s1, s2))
Explanation: I want to reemit items from an Observable only on condition that it was the first of a collection of Observables to emit an item amb
End of explanation |
9,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions
So far in this course we've explored equations that perform algebraic operations to produce one or more results. A function is a way of encapsulating an operation that takes an input and produces exactly one ouput.
For example, consider the following function definition
Step1: You can use functions in equations, just like any other term. For example, consider the following equation
Step2: Of course, the value returned by a function depends on the input; and you can graph this with the iput (let's call it x) on one axis and the output (f(x)) on the other.
Step3: As you can see (if you hadn't already figured it out), our function is a quadratic function - it returns a squared value that results in a parabolic graph when the output for multiple input values are plotted.
Bounds of a Function
Some functions will work for any input and may return any output. For example, consider the function u defined here
Step4: Note that the function works for every value other than 0; so the function is defined for x = 0.000000001, and for x = -0.000000001; it only fails to return a defined value for exactly 0.
OK, let's take another example. Consider this function
Step5: Sometimes, a function may be defined for a specific interval; for example, for all values between 0 and 5
Step6: Now, suppose we have a function like this
Step7: Range of a Function
Just as the domain of a function defines the set of values for which the function is defined, the range of a function defines the set of possible outputs from the function.
For example, consider the following function | Python Code:
# define a function to return x^2 + 2
def f(x):
return x**2 + 2
# call the function
f(3)
Explanation: Functions
So far in this course we've explored equations that perform algebraic operations to produce one or more results. A function is a way of encapsulating an operation that takes an input and produces exactly one output.
For example, consider the following function definition:
\begin{equation}f(x) = x^{2} + 2\end{equation}
This defines a function named f that accepts one input (x) and returns a single value that is the result calculated by the expression x<sup>2</sup> + 2.
Having defined the function, we can use it for any input value. For example:
\begin{equation}f(3) = 11\end{equation}
You've already seen a few examples of Python functions, which are defined using the def keyword. However, the strict definition of an algebraic function is that it must return a single value. Here's an example of defining and using a Python function that meets this criterion:
End of explanation
x = 4
y = f(x) - 1
print(y)
Explanation: You can use functions in equations, just like any other term. For example, consider the following equation:
\begin{equation}y = f(x) - 1\end{equation}
To calculate a value for y, we take the f of x and subtract 1. So assuming that f is defined as previously, given an x value of 4, this equation returns a y value of 17 (f(4) returns 4<sup>2</sup> + 2, so 16 + 2 = 18; and then the equation subtracts 1 to give us 17). Here it is in Python:
End of explanation
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
# Create an array of x values from -100 to 100
x = np.array(range(-100, 101))
# Set up the graph
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
# Plot x against f(x)
plt.plot(x,f(x), color='purple')
plt.show()
Explanation: Of course, the value returned by a function depends on the input; and you can graph this with the input (let's call it x) on one axis and the output (f(x)) on the other.
End of explanation
%matplotlib inline
# Define function g
def g(x):
if x != 0:
return (12/2*x)**2
# Plot output from function g
import numpy as np
from matplotlib import pyplot as plt
# Create an array of x values from -100 to 100
x = range(-100, 101)
# Get the corresponding y values from the function
y = [g(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='purple')
# plot an empty circle to show the undefined point
plt.plot(0,g(0.0000001), color='purple', marker='o', markerfacecolor='w', markersize=8)
plt.show()
Explanation: As you can see (if you hadn't already figured it out), our function is a quadratic function - it returns a squared value that results in a parabolic graph when the output for multiple input values are plotted.
Bounds of a Function
Some functions will work for any input and may return any output. For example, consider the function u defined here:
\begin{equation}u(x) = x + 1\end{equation}
This function simply adds 1 to whatever input is passed to it, so it will produce a defined output for any value of x that is a real number; in other words, any "regular" number - but not an imaginary number like √-1, or ∞ (infinity). You can specify the set of real numbers using the symbol ${\rm I\!R}$ (note the double stroke). The values that can be used for x can be expressed as a set, which we indicate by enclosing all of the members of the set in "{...}" braces; so to indicate the set of all possible values for x such that x is a member of the set of all real numbers, we can use the following expression:
\begin{equation}\{x \in \rm I\!R\}\end{equation}
Domain of a Function
We call the set of numbers for which a function can return a value its domain, and in this case, the domain of u is the set of all real numbers, which is actually the default assumption for most functions.
Now consider the following function g:
\begin{equation}g(x) = (\frac{12}{2x})^{2}\end{equation}
If we use this function with an x value of 2, we would get the output 9; because (12 ÷ (2•2))<sup>2</sup> is 9. Similarly, if we use the value -3 for x, the output will be 4. However, what happens when we apply this function to an x value of 0? Anything divided by 0 is undefined, so the function g doesn't work for an x value of 0.
So we need a way to denote the domain of the function g by indicating the input values for which a defined output can be returned. Specifically, we need to restrict x to a specific list of values - specifically any real number that is not 0. To indicate this, we can use the following notation:
\begin{equation}\{x \in \rm I\!R\;\;|\;\; x \ne 0 \}\end{equation}
This is interpreted as Any value for x where x is in the set of real numbers such that x is not equal to 0, and we can incorporate this into the function's definition like this:
\begin{equation}g(x) = (\frac{12}{2x})^{2}, \{x \in \rm I\!R\;\;|\;\; x \ne 0 \}\end{equation}
Or more simply:
\begin{equation}g(x) = (\frac{12}{2x})^{2},\;\; x \ne 0\end{equation}
When you plot the output of a function, you can indicate the gaps caused by input values that are not in the function's domain by plotting an empty circle to show that the function is not defined at this point:
End of explanation
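# A quick sketch of the function u discussed above -- defined for every real x
def u(x):
    return x + 1

print(u(0), u(-1.5), u(100))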
%matplotlib inline
def h(x):
if x >= 0:
import numpy as np
return 2 * np.sqrt(x)
# Plot output from function h
import numpy as np
from matplotlib import pyplot as plt
# Create an array of x values from -100 to 100
x = range(-100, 101)
# Get the corresponding y values from the function
y = [h(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
# Plot x against h(x)
plt.plot(x,y, color='purple')
# plot a filled circle at the end to indicate a closed interval
plt.plot(0, h(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)
plt.show()
Explanation: Note that the function works for every value other than 0; so the function is defined for x = 0.000000001, and for x = -0.000000001; it only fails to return a defined value for exactly 0.
OK, let's take another example. Consider this function:
\begin{equation}h(x) = 2\sqrt{x}\end{equation}
Applying this function to a non-negative x value returns a meaningful output; but for any value where x is negative, the output is undefined.
We can indicate the domain of this function in its definition like this:
\begin{equation}h(x) = 2\sqrt{x}, \{x \in \rm I\!R\;\;|\;\; x \ge 0 \}\end{equation}
This is interpreted as Any value for x where x is in the set of real numbers such that x is greater than or equal to 0.
Or, you might see this in a simpler format:
\begin{equation}h(x) = 2\sqrt{x},\;\; x \ge 0\end{equation}
Note that the symbol ≥ is used to indicate that the value must be greater than or equal to 0; and this means that 0 is included in the set of valid values. To indicate that the value must be greater than 0, not including 0, use the > symbol. You can also use the equivalent symbols for less than or equal to (≤) and less than (<).
When plotting a function line that marks the end of a continuous range, the end of the line is shown as a circle, which is filled if the function includes the value at that point, and unfilled if it does not.
Here's the Python to plot function h:
End of explanation
%matplotlib inline
def j(x):
if x >= 0 and x <= 5:
return x + 2
# Plot output from function j
import numpy as np
from matplotlib import pyplot as plt
# Create an array of x values from -100 to 100
x = range(-100, 101)
y = [j(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('j(x)')
plt.grid()
# Plot x against j(x)
plt.plot(x, y, color='purple')
# plot filled circles at the ends to indicate a closed interval (0 and 5 are both included)
plt.plot(0, j(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)
plt.plot(5, j(5), color='purple', marker='o', markerfacecolor='purple', markersize=8)
plt.show()
Explanation: Sometimes, a function may be defined for a specific interval; for example, for all values between 0 and 5:
\begin{equation}j(x) = x + 2,\;\; x \ge 0 \text{ and } x \le 5\end{equation}
In this case, the function is defined for x values between 0 and 5 inclusive; in other words, 0 and 5 are included in the set of defined values. This is known as a closed interval and can be indicated like this:
\begin{equation}{x \in \rm I\!R\;\;|\;\; 0 \le x \le 5 }\end{equation}
It could also be indicated like this:
\begin{equation}{x \in \rm I\!R\;\;|\;\; [0,5] }\end{equation}
If the condition in the function was x > 0 and x < 5, then the interval would be described as open and 0 and 5 would not be included in the set of defined values. This would be indicated using one of the following expressions:
\begin{equation}{x \in \rm I\!R\;\;|\;\; 0 \lt x \lt 5 }\end{equation}
\begin{equation}{x \in \rm I\!R\;\;|\;\; (0,5) }\end{equation}
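As an illustrative sketch (this hypothetical j_open is not defined in the original notebook), an open-interval version of the same function would be plotted with unfilled circles at the excluded endpoints:
def j_open(x):
    # only defined on the open interval (0, 5); the endpoints are excluded
    if 0 < x < 5:
        return x + 2

from matplotlib import pyplot as plt
x = [i / 10 for i in range(-100, 101)]
plt.plot(x, [j_open(a) for a in x], color='purple')
# unfilled (white-faced) circles mark the excluded endpoints at (0, 2) and (5, 7)
plt.plot(0, 2, color='purple', marker='o', markerfacecolor='w', markersize=8)
plt.plot(5, 7, color='purple', marker='o', markerfacecolor='w', markersize=8)
plt.xlabel('x')
plt.ylabel('j_open(x)')
plt.grid()
plt.show()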
Here's function j in Python:
End of explanation
%matplotlib inline
def k(x):
if x == 0:
return 0
elif x == 100:
return 1
# Plot output from function k
from matplotlib import pyplot as plt
# Create an array of x values from -100 to 100
x = range(-100, 101)
# Get the k(x) values for every value in x
y = [k(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('k(x)')
plt.grid()
# Plot x against k(x)
plt.scatter(x, y, color='purple')
plt.show()
Explanation: Now, suppose we have a function like this:
\begin{equation}
k(x) = \begin{cases}
0, & \text{if } x = 0, \\
1, & \text{if } x = 100
\end{cases}
\end{equation}
In this case, the function has a highly restricted domain; it only returns a defined output for 0 and 100. No output is defined for any other x value. Here, the domain is the set:
\begin{equation}\{0,100\}\end{equation}
Note that this does not include all real numbers; it includes only 0 and 100.
When we use Python to plot this function, note that it only makes sense to plot a scatter plot showing the individual values returned, there is no line in between because the function is not continuous between the values within the domain.
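Purely as an illustration (not code from the original notebook), such a two-point domain can even be written as a lookup table, which makes the restriction explicit:
k_table = {0: 0, 100: 1}          # the entire function
print(k_table[0], k_table[100])   # 0 1
print(100 in k_table)             # True  -> 100 is in the domain
print(50 in k_table)              # False -> 50 is not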
End of explanation
%matplotlib inline
# define a function to return x^2 + 1
def p(x):
return x**2 + 1
# Plot the function
import numpy as np
from matplotlib import pyplot as plt
# Create an array of x values from -100 to 100
x = np.array(range(-100, 101))
# Set up the graph
plt.xlabel('x')
plt.ylabel('p(x)')
plt.grid()
# Plot x against p(x)
plt.plot(x,p(x), color='purple')
plt.show()
Explanation: Range of a Function
Just as the domain of a function defines the set of values for which the function is defined, the range of a function defines the set of possible outputs from the function.
For example, consider the following function:
\begin{equation}p(x) = x^{2} + 1\end{equation}
The domain of this function is all real numbers. However, this is a quadratic function, so the output values will form a parabola; and since the coefficient of x<sup>2</sup> is positive and the constant term is 1, it will be an upward-opening parabola with a vertex that has a y value of 1.
So what does that tell us? Well, the minimum value that will be returned by this function is 1, so its range is:
\begin{equation}{p(x) \in \rm I\!R\;\;|\;\; p(x) \ge 1 }\end{equation}
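As a quick numerical sanity check (illustrative only, not one of the notebook's cells), the smallest value of p over a sample of inputs is indeed 1, reached at x = 0:
import numpy as np
x = np.array(range(-100, 101))
print((x**2 + 1).min())   # 1 -> consistent with the stated range p(x) >= 1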
Let's create and plot the function for a range of x values in Python:
End of explanation |
9,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
read from an Excel file
documentation
Step1: write to a comma separated value (.csv) file
documentation | Python Code:
import pandas as pd

file_name_string = 'C:/Users/Charles Kelly/Desktop/Exercise Files/02_07/Final/EmployeesWithGrades.xlsx'
employees_df = pd.read_excel(file_name_string, 'Sheet1', index_col=None, na_values=['NA'])
employees_df
Explanation: read from an Excel file
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html
you don't need to have MS-Excel on your computer
End of explanation
file_name_string_csv = 'C:/Users/Charles Kelly/Desktop/Exercise Files/02_07/Final/EmployeesWithGrades.csv'
employees_df.to_csv(file_name_string_csv)
Explanation: write to a comma separated value (.csv) file
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html
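As a further, hypothetical illustration (not part of the original exercise, and it assumes an Excel writer engine such as openpyxl is installed), the same DataFrame could also be written back out to a new Excel file:
# writes to the current working directory; the file name here is just an example
employees_df.to_excel('EmployeesWithGrades_copy.xlsx', sheet_name='Sheet1', index=False)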
End of explanation |
9,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<hr style="border-top-width
Step1: <hr style="border-top-width
Step2: Summary
The pandas dataframes (DF) are a very flexible data type for postprocessing CALS data. They come with a rich set of methods and a very active community.
Think the DF as LEGO
Step3: Summary
The indexesFromCALS method gives great flexibility to reshuffle CALS data. For the moment this method is not very fast
Step4: Summary
We have now the basic tools to monitor the beam performance across the CERN accelerator complex.
<hr style="border-top-width
Step5: <hr style="border-top-width | Python Code:
import sys
sys.path.append('/eos/user/s/sterbini/MD_ANALYSIS/public/')
from myToolbox import *
Explanation: <hr style="border-top-width: 4px; border-top-color: #34609b;">
Using pytimber with pandas
Ideally the main parameters of the CERN Accelerator complex (settings and acquisitions) are stored in CALS (CERN Accelerator Logging System,https://be-dep-co.web.cern.ch/content/cals-cern-accelerator-logging-service) for long term (LDB) or short term (MDB, 3 months presently (July, 2017)).
A new CALS platform is presently under development (NXCALS).
A good strategy for the MD community is to ask to complete the present logging adding additional variables if needed (typically in the MDB). In this way one does not need to setup manual logging sessions.
Each machine should have a OP-CALS link person (e.g., O. Hans for the PS).
CALS can be queried from the internet or GPN (General Purpose Network) using SWAN (Service for Web-based ANalysis, https://swan.web.cern.ch/). It is important to note that SWAN is not available (July, 2017) in the TN (Technical Network).
To log manually parameters (e.g., not already present in CALS) one can use different approaches (Matlab, Python, Mathematica, see Scripting Tool on the wikis https://wikis.cern.ch/display/ST/Scripting+Tools+Home). In addition to the logging, a similar approach can be extended to modify the machine setting (provided a valid the RBAC token if requested).
In the following we will show some examples of how to use pyTIMBER together with pandas dataframes. Before doing that, we would like to comment on the difference between cycleStamp and acquisitionStamp.
Fundamentally, CALS logs series in a 2-column format (times, values).
The time can be the timestamp of the cycle related to that acquisition (cycleStamp) or the acquisitionStamp.
For the Injector Complex is much more convenient to log in cycleStamp because this allows to compare quantities related to the same cycle. In general is not interesting to compare observation related to different cycles even if their acquisition stamp is very close.
In machine with very long cycle (LHC) this cycleStamp concept is not interesting anymore. In other words, we can say that the cycleStamp is useful only if the machine is intrinsecally PPM. In PPM machines one can extract data from CALS by fundamentals filters. This feature is not very attractive for LHC.
As we will see, these observations will have a strong impact on how the pandas dataframe will be organized. For instance, if we want to extend an existing dataframe with an additional variable, this is somehow trivial for LHC, but for the other machines we should maintain syncronization to the same cycleStamps. It is important to observe that the cyclestamps between different cycle in teh same machine or between machines have fixed time offset. One could use a sort of arithmetic of the cycleStamp to follow the same beams in the different machine of the injectors complex or to investigate the effect of SC composition on the beam performance of machine.
JAPC and pyJAPC
This is the JAVA API for Parameters Control.
See the presentation W. Sliwinski on https://indico.cern.ch/event/404646/.
BE-CO chose to have a JAVA API for control the machine parameters. On the other hand JAVA is not very well suited for scientific computing without a major effort (at least in our opinion). Indeed MATLAB is based on JAVA but is not open source. A first succesful attempt to GET/SET/SUBSCRIBE in MATLAB was in the past within the CTF3 community (main contributor is D. Gamba). More recently a similar approach was adopted for python (pyJAPC by T. Levens and M. Betz). In parallel R. De Maria developped pyTIMBER (to access CALS) and pyLSA (together with M. Hostettler). In addition pyLogbook was developped by S. Gessner. These tools naturally complete the JMAD package and all the python code developped in BE-ABP (pyHEADTAIL, pyECOUD,...).
We will describe in future how to use pyJAPC and pyLSA respectively to GET the CALS data, to GET/SET/SUBSCRIBE data from/to the HW (or the last settings of the LSA database), to GET the historical trims in LSA.
<hr style="border-top-width: 4px; border-top-color: #34609b;">
Let us start
End of explanation
# Heavily using pyTimber we get the variable
varSet1=log.search('%TPS15%') #recorded by acqStamp since is not PPM
varSet2=log.search('CPS.TGM%')#recorded by cyclestamp
print(varSet1) # just to show what is inside
print(varSet2) # just to show what is inside
extractFromTime=myToolbox.time_1_hour_ago(hours=2)
extractToTime=myToolbox.time_1_hour_ago(hours=1)
# Heavily using PANDAS
# we cannot use the fundamental (recorded by acqStamp)!
myDataFrame1=myToolbox.cals2pnd(varSet1,extractFromTime,extractToTime)
# we can use the fundamental
myDataFrame2=myToolbox.cals2pnd(varSet2,extractFromTime,extractToTime,fundamental='%TOF')
# Now I can merge and create a postprocessing DF
rawDF=myToolbox.mergeDF(myDataFrame1,myDataFrame2)
#eventually I can define a postprocessing
def postprocess(df):
aux=pnd.DataFrame()
aux['PE.TPS15.359.CONTROLLER:ANGLE filled']= df['PE.TPS15.359.CONTROLLER:ANGLE'].ffill()
aux['PE.TPS15.359.CONTROLLER:ANGLE filled, doubled']= aux['PE.TPS15.359.CONTROLLER:ANGLE filled']*2+1
return aux;
postDF=postprocess(rawDF)
#I suggest not to merge the rawDF with the postDF, this will allow to extend the raw.
# starting from the original DF is trivial now to extend them.
# It is important to note that the we have somehow to remember that the second DF needs a fundamental filter
myDataFrame1=myToolbox.cals2pnd(myDataFrame1.columns,rawDF.index[-1],
rawDF.index[-1]+datetime.timedelta(hours=1))
myDataFrame2=myToolbox.cals2pnd(myDataFrame2.columns,rawDF.index[-1],
rawDF.index[-1]+datetime.timedelta(hours=1),
fundamental='%TOF')
# and now we can iterate with the merging
aux=myToolbox.mergeDF(myDataFrame1,myDataFrame2)
# and with the concatenation
rawDF=myToolbox.concatDF(rawDF,aux)
postDF=myToolbox.concatDF(postDF,postprocess(aux))
# we suggest to maintain well separated the raw data, postprocessing functions and postprocessed data.
# print the dataFrame head
rawDF.head()
# describe the dataFrame
postDF.describe()
# extract one column as a series
rawDF[['CPS.TGM:USER','CPS.TGM:DEST']].head()
# extract the fourth and fifth rows
rawDF.iloc[3:5]
# extract between time
rawDF.between_time('14:02','14:03')
Explanation: <hr style="border-top-width: 4px; border-top-color: #34609b;">
Example 1
Let us assume that you start to download some raw data from CALS. Here I decided to download some variables recorded by acqStamp and some others by cycleStamp from the S. The fundamental filtering makes sense only on the variable recorded by cycleStamp.
End of explanation
CPSDF=myToolbox.cals2pnd(['CPS.TGM:USER'],
myToolbox.time_1_hour_ago(hours=1./6.),
myToolbox.time_now(),fundamental='%TOF')
CPSDF.head()
# the important method to use is indexesFromCALS
cycleAfterDF=myToolbox.indexesFromCALS(CPSDF.index+datetime.timedelta(seconds=1.2),CPSDF.columns)
cycleAfterDF.head()
Explanation: Summary
The pandas dateframe (DF) are a very flexible data type for postprocessing CALS data. They comes with a rich asset of methods and a very active community.
Think the DF as LEGO: play with them, merge and contatenate them. Keep well separated the raw DF (extracted from CALS), the postprocessing methods and the postrprocessed DF.
NB: numpy is of course more performant of pandas DF, but I tend to buy flexibility with performace.
<hr style="border-top-width: 4px; border-top-color: #34609b;">
Example 2
Let us consider now another user case. We would like to extract the list of the cycles following TOF in the PS in the last 10 min. We will start extracting the TOF cycle and from that DF create a new dataframe. This is an example of cycleStamp arithmetic.
End of explanation
CPSDF=myToolbox.cals2pnd(['CPS.TGM:USER','CPS.TGM:BATCH','PR.DCAFTINJ_1:INTENSITY'],myToolbox.time_1_hour_ago(hours=1./6.),myToolbox.time_now(),fundamental='%MTE%')
firstBatch=CPSDF['CPS.TGM:BATCH']==1
SPSDF=myToolbox.indexesFromCALS(CPSDF[firstBatch].index+datetime.timedelta(seconds=0.635), ['SPS.TGM:USER'])
PSBDF=myToolbox.indexesFromCALS(CPSDF.index-datetime.timedelta(seconds=0.635), ['PSB.TGM:USER']+log.search('%_BCT_ACC_INTENSITY'))
print('==================================')
print('PSB')
print(PSBDF['PSB.TGM:USER'].head(4))
print('==================================')
print('CPS')
print(CPSDF['CPS.TGM:USER'].head(4))
print('==================================')
print('SPS')
print(SPSDF['SPS.TGM:USER'].head(2))
postPSBDF=pnd.DataFrame()
# note the use of the filtering with regular expression and the sum done on the rows (axis=1)
postPSBDF['Total Intensity']=PSBDF.filter(regex='BR*').sum(axis=1)
postPSBDF.head()
postCPSDF=pnd.DataFrame()
# we are now building a series using data from different DF and adopting the indexing (cycleStamp) from PS.
postCPSDF['transmission']=pnd.Series(CPSDF['PR.DCAFTINJ_1:INTENSITY'].values/postPSBDF['Total Intensity'].values,index=CPSDF.index)
plt.plot(postCPSDF['transmission'])
myToolbox.setXlabel(ax=plt.gca(), hours=1/30.)
Explanation: Summary
The indexesFromCALS method gives a great flexibility to reshuffle CALS data. For teh moment this method is not very fast: it is cycling on the rows of the DF, in future perhaps could be improved by using a primitive from CALS.
<hr style="border-top-width: 4px; border-top-color: #34609b;">
Example 3
Let us consider now another user case. We would like to extract the list of the cyclestamp and some basic variable from MTE cycles along the injector chains. This is an example of cycleStamp arithmetic. We have to remember that the offeset between teh C0 time of the PSB, PS and SPS is 635 ms.
End of explanation
t1=myToolbox.time_1_hour_ago(hours=.1)
t2=myToolbox.time_now()
CPS=myToolbox.cals2pnd(log.search('CPS.TGM%'),t1,t2)
PSB=myToolbox.cals2pnd(log.search('PSB.TGM%'),t1,t2)
SPS=myToolbox.cals2pnd(log.search('SPS.TGM%'),t1,t2)
SPS.head(10)
SCNUM=1
PSB[PSB['PSB.TGM:SCNUM']==SCNUM].head(1)
CPS[CPS['CPS.TGM:SCNUM']==SCNUM].head(1)
SPS[SPS['SPS.TGM:SCNUM']==SCNUM]
CPS[CPS['CPS.TGM:SCNUM']==SCNUM].index[0]-PSB[PSB['PSB.TGM:SCNUM']==SCNUM].index[0]
SPS[SPS['SPS.TGM:SCNUM']==SCNUM].index[0]-CPS[CPS['CPS.TGM:SCNUM']==SCNUM].index[0]
Explanation: Summary
We have now the basic tools to monitor the beam performance across the CERN accelerator complex.
<hr style="border-top-width: 4px; border-top-color: #34609b;">
APPENDIX A: Offset between the machines
There are 635 ms of delay between the C0 of PS and the C0 of PSB.
There are 635 ms of delay between the C0 of SPS and the C0 of PS.
See below for a verification. This allows to make an simple aritmetic before the different cycles.
End of explanation
# load all files
myFiles=sorted(glob.glob('/eos/user/s/sterbini/MD_ANALYSIS/2016/MD1949/2016.11.17/Monitor1/I250/*mat'))
myFiles
# check the content of a single file
myFileFormat=myToolbox.japcMatlabImport(myFiles[0])
plt.plot(myFileFormat.RPPBK_BA5_BBLR5177M.LOG_OASIS_I_MEAS.value.DATA)
plt.plot(myFileFormat.RPPBK_BA5_BBLR5177M.LOG_OASIS_I_REF.value.DATA)
plt.xlabel('time [ms]')
plt.ylabel('I [A]')
plt.axis('tight')
myFileFormat.parameters
# import all the selected file and variables in a dataFrame
MD1949=myToolbox.fromMatlabToDataFrame(myFiles,['RPPBK_BA5_BBLR5177M.LOG_OASIS_I_MEAS.value.DATA'])
MD1949.head()
# we can add in second time additional variable from Matlab files
myToolbox.addToDataFrameFromMatlab(MD1949,['SPS_BCTDC_41435.Acquisition.value.totalIntensity'])
MD1949.head()
Explanation: <hr style="border-top-width: 4px; border-top-color: #34609b;">
Appendix B: Retrieving data from Matlab
Not all the machine data are recorded in CALS. For getting and recording the data not present in CALS one can automatically subscribe the data with JAPC (https://wikis.cern.ch/display/ST/Libraries+Available).
I am using a lot Matlab/JAPC interface but I would like to migrate to the Python/JAPC solution.
In the following we assume you have some matlab files in a folder and you want to import them. As you will see the approach is very similar to the CALS dataframe.
End of explanation |
9,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author
Step1: First let's check if there are new or deleted files (only matching by file names).
Step2: Cool, no new nor deleted files.
Now let's set up a dataset that, for each table, links both the old and the new file together.
Step3: Let's make sure the structure hasn't changed
Step4: OK no columns have changed.
Now let's see for each file if there are more or less rows.
Step5: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation and liens_rome_referentiels, so let's see more precisely.
Step6: Alright, so the only change seems to be 3 new jobs added. Let's take a look (only showing interesting fields)
Step7: Those are indeed new jobs.
OK, let's check at the changes in items
Step8: No changes at all, so it's merely existing items that have been newly assigned to existing groups.
The changes in liens_rome_referentiels might help reveal those changes
Step9: So there are few fixes. Let's have a look at some of them | Python Code:
import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas as pd
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '345'
NEW_VERSION = '346'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
Explanation: Author: Émilie, [email protected]
Date: 2021-03-31
ROME update from v345 to v346
In March 2021 a new version of the ROME was released. I want to investigate what changed and whether we need to do anything about it.
You might not be able to reproduce this notebook, mostly because it requires to have the two versions of the ROME in your data/rome/csv folder which happens only just before we switch to v345. You will have to trust me on the results ;-)
Skip the run test because it requires older versions of the ROME.
End of explanation
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
Explanation: First let's check if there are new or deleted files (only matching by file names).
End of explanation
# Load all ROME datasets for the two versions we compare.
VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])
def read_csv(filename):
try:
return pd.read_csv(filename)
except pd.errors.ParserError:
display(f'While parsing: {filename}')
raise
rome_data = [VersionedDataset(
basename=path.basename(f),
old=read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=read_csv(f))
for f in sorted(new_version_files)]
def find_rome_dataset_by_name(data, partial_name):
for dataset in data:
if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
Explanation: Cool, no new nor deleted files.
Now let's set up a dataset that, for each table, links both the old and the new file together.
End of explanation
for dataset in rome_data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
Explanation: Let's make sure the structure hasn't changed:
End of explanation
same_row_count_files = 0
for dataset in rome_data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d}/{:d} values added in {}'.format(
diff, len(dataset.new.index), dataset.basename))
elif diff < 0:
print('{:d}/{:d} values removed in {}'.format(
-diff, len(dataset.old.index), dataset.basename))
else:
same_row_count_files += 1
print('{:d}/{:d} files with the same number of rows'.format(
same_row_count_files, len(rome_data)))
Explanation: OK no columns have changed.
Now let's see for each file if there are more or less rows.
End of explanation
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr)
stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION));
Explanation: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation and liens_rome_referentiels, so let's see more precisely.
End of explanation
pd.options.display.max_colwidth = 2000
jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']]
Explanation: Alright, so the only change seems to be 3 new jobs added. Let's take a look (only showing interesting fields):
End of explanation
items = find_rome_dataset_by_name(rome_data, 'item')
new_items = set(items.new.code_ogr) - set(items.old.code_ogr)
obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr)
stable_items = set(items.new.code_ogr) & set(items.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION));
Explanation: Those are indeed new jobs.
OK, let's check at the changes in items:
End of explanation
links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')
old = links.old[['code_rome', 'code_ogr']]
new = links.new[['code_rome', 'code_ogr']]
links_merged = old.merge(new, how='outer', indicator=True)
links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'})
links_merged._diff.value_counts()
Explanation: No changes at all, so it's merely existing items that have been newly assigned to existing groups.
The changes in liens_rome_referentiels might help reveal those changes:
End of explanation
job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').new.set_index('code_rome').libelle_rome
item_names = items.new.set_index('code_ogr').libelle.drop_duplicates()
links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names)
links_merged['item_name'] = links_merged.code_ogr.map(item_names)
display(links_merged[links_merged._diff == 'removed'].dropna().head(5))
links_merged[links_merged._diff == 'added'].dropna().head(10)
Explanation: So there are few fixes. Let's have a look at some of them:
End of explanation |
9,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: JAX에서 TensorFlow 확률(TFP on JAX)
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: TFP의 최신 야간 빌드를 사용하여 TFP on JAX를 설치할 수 있습니다.
Step3: 몇 가지 유용한 Python 라이브러리를 가져옵니다.
Step4: 또한 몇 가지 기본 JAX 기능을 가져옵니다.
Step5: TFP on JAX 가져오기
TFP on JAX를 사용하려면 jax "기판"을 가져온 후 평소의 tfp처럼 사용하면 됩니다.
Step6: 데모
Step7: tfd.JointDistributionCoroutine를 사용하여 모델을 정의할 수 있습니다. 가중치와 바이어스 항 모두에 표준 정규 사전 분포를 넣은 후 샘플링된 레이블을 데이터에 고정하는 target_log_prob 함수를 작성합니다.
Step8: dist에서 샘플링하여 MCMC의 초기 상태를 생성합니다. 그런 다음 무작위 키와 초기 상태를 취하는 함수를 정의하고 No-U-Turn-Sampler(NUTS)에서 500개의 샘플을 생성할 수 있습니다. jit과 같은 JAX 변환을 사용하여 XLA로 NUTS 샘플러를 컴파일할 수 있습니다.
Step9: 샘플을 사용하여 각 가중치 세트의 예측 확률을 평균화하는 방식으로 베이지안 모델 평균화(BMA)를 수행하겠습니다.
먼저 주어진 매개변수 세트에 대해 각 클래스에 대한 확률을 생성하는 함수를 작성합니다. dist.sample_distributions를 사용하여 모델의 최종 분포를 얻을 수 있습니다.
Step10: 샘플 세트에 vmap(classifier_probs)를 수행하여 각 샘플의 예측된 클래스 확률을 얻을 수 있습니다. 그런 다음 각 샘플의 평균 정확성과 베이지안 모델 평균화의 정확성을 계산합니다.
Step11: BMA가 오류율을 거의 3분의 1로 줄이는 것으로 보입니다!
기본 사항
TFP on JAX에는 tf.Tensor와 같은 TF 객체를 수락하는 대신 JAX 아날로그를 수락하는 TF와 동일한 API가 있습니다. 예를 들어 tf.Tensor가 이전에 입력으로 사용된 곳이라면 어디에서나 API가 AX DeviceArray를 예상합니다. TFP 메서드는 tf.Tensor를 반환하는 대신에 DeviceArray를 반환합니다. TFP on JAX는 DeviceArray 목록이나 사전과 같은 JAX 객체의 중첩 구조에서도 작동합니다.
분포
대부분의 TFP 분포는 TF 대응 부분과 매우 유사한 의미 체계로 JAX에서 지원됩니다. 또한, JAX Pytrees로 등록되어 JAX 변환 함수의 입력 및 출력이 될 수도 있습니다.
기본 분포
이 분포의 log_prob 메서드는 동일하게 작동합니다.
Step12: 한 분포로부터 샘플링하려면 PRNGKey(또는 정수 목록)에서 seed 키워드 인수로 명시적으로 전달해야 합니다. 명시적으로 시드를 전달하지 못하면 오류가 발생합니다.
Step13: 분포의 형상 의미 체계는 JAX에서 동일하게 유지됩니다. 여기서 분포는 각각 event_shape와 batch_shape를 갖게 되며 많은 샘플을 이끌어낼수록 sample_shape 차원이 더 많이 추가됩니다.
예를 들어 벡터 매개변수를 가진 tfd.MultivariateNormalDiag는 벡터 이벤트 형상과 빈 배치 형상을 갖게 됩니다.
Step14: 반면에 벡터로 매개변수화된 tfd.Normal는 스칼라 이벤트 형상과 벡터 배치 형상을 갖게 됩니다.
Step15: 샘플의 log_prob를 취하는 의미 체계는 JAX에서도 동일하게 작동합니다.
Step16: JAX DeviceArray는 NumPy 및 Matplotlib와 같은 라이브러리와 호환되므로 샘플을 직접 플로팅 함수에 공급할 수 있습니다.
Step17: Distribution 메서드는 JAX 변환과 호환됩니다.
Step18: TFP 분포는 JAX pytree 노드로 등록되어 있기 때문에 분포를 입력 또는 출력으로 사용하여 함수를 작성하고 이러한 함수를 jit를 사용하여 변환할 수는 있지만 아직 vmap를 적용한 함수에 대한 인수로는 지원되지 않습니다.
Step19: 변환된 분포
샘플이 Bijector를 통해 전달된 분포와 같이 변환된 분포도 기본적으로 작동합니다(bijectors도 작동합니다! 아래 참조).
Step20: 결합 분포
TFP는 구성 요소 분포를 여러 무작위 변수에 대한 단일 분포로 결합할 수 있도록 JointDistribution를 제공합니다. 현재 TFP는 JAX에서 지원되는 3가지 핵심 변화형(JointDistributionSequential, JointDistributionNamed 및 JointDistributionCoroutine)을 제공합니다. AutoBatched 변화형도 모두 지원합니다.
Step21: 기타 분포
가우시안 프로세스도 JAX 모드에서 작동합니다!
Step22: Hidden Markov 모델도 지원합니다.
Step23: TensorFlow 또는 XLA 비호환성에 대한 엄격한 종속성으로 인해 PixelCNN와 같은 몇몇 분포는 아직 지원하지 않습니다.
Bijectors
현재 JAX는 대부분의 TFP Bijector를 지원하고 있습니다!
Step24: Bijector는 jit, grad 및 vmap과 같은 JAX 변환과 호환됩니다.
Step25: RealNVP 및 FFJORD와 같은 일부 Bijector는 아직 지원하지 않습니다.
MCMC
tfp.mcmc도 JAX로 이식했으므로 JAX에서 Hamiltonian Monte Carlo(HMC) 및 No-U-Turn-Sampler(NUTS)와 같은 알고리즘도 실행할 수 있습니다.
Step26: TFP on TF와 달리 seed 키워드 인수를 사용하여 PRNGKey를 sample_chain로 전달해야 합니다.
Step27: 여러 체인을 실행하기 위해 상태 배치를 sample_chain에 전달하거나 vmap를 사용할 수 있습니다(두 접근법의 성능 차이는 아직 조사하지 않음).
Step28: 옵티마이저
TFP on JAX는 BFGS 및 L-BFGS와 같은 일부 중요한 옵티마이저를 지원합니다. 조정된 간단한 제곱 손실 함수를 설정해보도록 하겠습니다.
Step29: BFGS는 이러한 손실의 최소값을 찾을 수 있습니다.
Step30: L-BFGS도 마찬가지입니다.
Step31: vmap L-BFGS를 하기 위해 단일 시작점에 대한 손실을 최적화하는 함수를 설정해보겠습니다.
Step32: 주의 사항
TF와 JAX 사이에는 몇 가지 근본적인 차이점이 있으며, 일부 TFP 동작은 두 기판 사이에서 다르게 되고 일부 기능을 지원하지 않게 됩니다. 예를 들면, 다음과 같습니다.
TFP on JAX는 tf.Variable와 같은 것을 지원하지 않는데, JAX에는 이러한 것이 존재하지 않기 때문입니다. 이는 또한 tfp.util.TransformedVariable와 같은 유틸리티가 지원되지 않음을 의미합니다.
tfp.layers는 Keras 및 tf.Variable에 대한 종속성으로 인해 아직 백엔드에서 지원되지 않습니다.
tfp.math.minimize는 tf.Variable에 대한 종속성 때문에 TFP on JAX에서 작동하지 않습니다.
TFP on JAX를 사용하면 텐서 형상이 항상 구체적인 정수값이며 TFP on TF에서와 같이 알 수 없거나 동적이지 않습니다.
의사 난수는 TF와 JAX에서 다르게 처리됩니다(부록 참조).
tfp.experimental의 라이브러리는 JAX 기판에 존재할 것으로 보장되지 않습니다.
Dtype 프로모션 규칙은 TF와 JAX 사이에서 다릅니다. TFP on JAX는 일관성을 위해 내부적으로 TF의 dtype 의미 체계를 지키려고 합니다.
Bijectors는 아직 JAX pytrees로 등록되지 않았습니다.
TFP on JAX에서 지원되는 내용의 전체 목록을 보려면 API 설명서를 참조하세요.
결론
우리는 다수의 TFP 기능을 JAX로 이식했으며 모두가 무엇을 구축하게 될지 기대하고 있습니다. 일부 기능은 아직 지원되지 않습니다. 여러분을 위해 우리가 놓친 부분이 있다면(또는 버그를 발견한다면!) 연락해 주시기 바랍니다. [email protected]로 이메일을 보내거나 Github 리포지토리에서 문제를 보고할 수 있습니다.
부록
Step33: JAX에서 난수 함수는 키를 사용하여 결정론적으로 난수 변량을 생성하므로 다시 사용해서는 안 됩니다. 예를 들어 key를 사용하여 정규 분포 값을 샘플링할 수는 있지만 다른 곳에서 key를 다시 사용해서는 안 됩니다. 또한, random.normal에 동일한 값을 전달하면 동일한 값이 생성됩니다.
Step34: 그렇다면 어떻게 해야 단일 키로 여러 샘플을 이끌어 낼 수 있을까요? 정답은 키 분할입니다. 기본 개념은 PRNGKey를 여러 개로 분할할 수 있으며, 각각의 새 키를 독립적인 난수 소스로 취급할 수 있다는 것입니다.
Step35: 키 분할은 결정론적이지만 혼란스럽기 때문에 이제 각각의 새 키를 사용하여 고유한 난수 샘플을 이끌어낼 수 있습니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip uninstall tensorflow -y -q
Explanation: JAX에서 TensorFlow 확률(TFP on JAX)
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/TensorFlow_Probability_on_JAX"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/TensorFlow_Probability_on_JAX.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/TensorFlow_Probability_on_JAX.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/TensorFlow_Probability_on_JAX.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
TensorFlow 확률(TFP)은 이제 JAX에서도 작동하는 확률적 추론 및 통계 분석을 위한 라이브러리입니다! 익숙하지 않은 분들을 위해 말하자면, JAX는 구성 가능한 함수 변환을 기반으로 가속화된 수치 계산을 수행하기 위한 라이브러리입니다.
TFP on JAX는 많은 TFP 사용자들이 현재 편리하게 사용하는 추상화 및 API를 유지하는 한편 정규 TFP의 가장 유용한 기능을 다수 지원합니다.
설정
TFP on JAX는 TensorFlow에 종속적이지 않습니다. 이 Colab에서 TensorFlow를 완전히 제거하도록 합니다.
End of explanation
!pip install -Uq tfp-nightly[jax] > /dev/null
Explanation: TFP의 최신 야간 빌드를 사용하여 TFP on JAX를 설치할 수 있습니다.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn import datasets
sns.set(style='white')
Explanation: 몇 가지 유용한 Python 라이브러리를 가져옵니다.
End of explanation
import jax.numpy as jnp
from jax import grad
from jax import jit
from jax import random
from jax import value_and_grad
from jax import vmap
Explanation: 또한 몇 가지 기본 JAX 기능을 가져옵니다.
End of explanation
from tensorflow_probability.substrates import jax as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
tfpk = tfp.math.psd_kernels
Explanation: TFP on JAX 가져오기
TFP on JAX를 사용하려면 jax "기판"을 가져온 후 평소의 tfp처럼 사용하면 됩니다.
End of explanation
iris = datasets.load_iris()
features, labels = iris['data'], iris['target']
num_features = features.shape[-1]
num_classes = len(iris.target_names)
Explanation: 데모: 베이지안 로지스틱 회귀
JAX 백엔드로 무엇을 할 수 있는지 보여주기 위해 클래식 Iris 데이터세트에 적용된 베이지안 로지스틱 회귀를 구현하겠습니다.
먼저 Iris 데이터세트를 가져온 후 일부 메타데이터를 추출합니다.
End of explanation
Root = tfd.JointDistributionCoroutine.Root
def model():
w = yield Root(tfd.Sample(tfd.Normal(0., 1.),
sample_shape=(num_features, num_classes)))
b = yield Root(
tfd.Sample(tfd.Normal(0., 1.), sample_shape=(num_classes,)))
logits = jnp.dot(features, w) + b
yield tfd.Independent(tfd.Categorical(logits=logits),
reinterpreted_batch_ndims=1)
dist = tfd.JointDistributionCoroutine(model)
def target_log_prob(*params):
return dist.log_prob(params + (labels,))
Explanation: tfd.JointDistributionCoroutine를 사용하여 모델을 정의할 수 있습니다. 가중치와 바이어스 항 모두에 표준 정규 사전 분포를 넣은 후 샘플링된 레이블을 데이터에 고정하는 target_log_prob 함수를 작성합니다.
End of explanation
init_key, sample_key = random.split(random.PRNGKey(0))
init_params = tuple(dist.sample(seed=init_key)[:-1])
@jit
def run_chain(key, state):
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, 1e-3)
return tfp.mcmc.sample_chain(500,
current_state=state,
kernel=kernel,
trace_fn=lambda _, results: results.target_log_prob,
num_burnin_steps=500,
seed=key)
states, log_probs = run_chain(sample_key, init_params)
plt.figure()
plt.plot(log_probs)
plt.ylabel('Target Log Prob')
plt.xlabel('Iterations of NUTS')
plt.show()
Explanation: dist에서 샘플링하여 MCMC의 초기 상태를 생성합니다. 그런 다음 무작위 키와 초기 상태를 취하는 함수를 정의하고 No-U-Turn-Sampler(NUTS)에서 500개의 샘플을 생성할 수 있습니다. jit과 같은 JAX 변환을 사용하여 XLA로 NUTS 샘플러를 컴파일할 수 있습니다.
End of explanation
def classifier_probs(params):
dists, _ = dist.sample_distributions(seed=random.PRNGKey(0),
value=params + (None,))
return dists[-1].distribution.probs_parameter()
Explanation: 샘플을 사용하여 각 가중치 세트의 예측 확률을 평균화하는 방식으로 베이지안 모델 평균화(BMA)를 수행하겠습니다.
먼저 주어진 매개변수 세트에 대해 각 클래스에 대한 확률을 생성하는 함수를 작성합니다. dist.sample_distributions를 사용하여 모델의 최종 분포를 얻을 수 있습니다.
End of explanation
all_probs = jit(vmap(classifier_probs))(states)
print('Average accuracy:', jnp.mean(all_probs.argmax(axis=-1) == labels))
print('BMA accuracy:', jnp.mean(all_probs.mean(axis=0).argmax(axis=-1) == labels))
Explanation: 샘플 세트에 vmap(classifier_probs)를 수행하여 각 샘플의 예측된 클래스 확률을 얻을 수 있습니다. 그런 다음 각 샘플의 평균 정확성과 베이지안 모델 평균화의 정확성을 계산합니다.
End of explanation
dist = tfd.Normal(0., 1.)
print(dist.log_prob(0.))
Explanation: BMA가 오류율을 거의 3분의 1로 줄이는 것으로 보입니다!
기본 사항
TFP on JAX에는 tf.Tensor와 같은 TF 객체를 수락하는 대신 JAX 아날로그를 수락하는 TF와 동일한 API가 있습니다. 예를 들어 tf.Tensor가 이전에 입력으로 사용된 곳이라면 어디에서나 API가 AX DeviceArray를 예상합니다. TFP 메서드는 tf.Tensor를 반환하는 대신에 DeviceArray를 반환합니다. TFP on JAX는 DeviceArray 목록이나 사전과 같은 JAX 객체의 중첩 구조에서도 작동합니다.
분포
대부분의 TFP 분포는 TF 대응 부분과 매우 유사한 의미 체계로 JAX에서 지원됩니다. 또한, JAX Pytrees로 등록되어 JAX 변환 함수의 입력 및 출력이 될 수도 있습니다.
기본 분포
이 분포의 log_prob 메서드는 동일하게 작동합니다.
End of explanation
tfd.Normal(0., 1.).sample(seed=random.PRNGKey(0))
Explanation: 한 분포로부터 샘플링하려면 PRNGKey(또는 정수 목록)에서 seed 키워드 인수로 명시적으로 전달해야 합니다. 명시적으로 시드를 전달하지 못하면 오류가 발생합니다.
End of explanation
dist = tfd.MultivariateNormalDiag(
loc=jnp.zeros(5),
scale_diag=jnp.ones(5)
)
print('Event shape:', dist.event_shape)
print('Batch shape:', dist.batch_shape)
Explanation: 분포의 형상 의미 체계는 JAX에서 동일하게 유지됩니다. 여기서 분포는 각각 event_shape와 batch_shape를 갖게 되며 많은 샘플을 이끌어낼수록 sample_shape 차원이 더 많이 추가됩니다.
예를 들어 벡터 매개변수를 가진 tfd.MultivariateNormalDiag는 벡터 이벤트 형상과 빈 배치 형상을 갖게 됩니다.
End of explanation
dist = tfd.Normal(
loc=jnp.ones(5),
scale=jnp.ones(5),
)
print('Event shape:', dist.event_shape)
print('Batch shape:', dist.batch_shape)
Explanation: 반면에 벡터로 매개변수화된 tfd.Normal는 스칼라 이벤트 형상과 벡터 배치 형상을 갖게 됩니다.
End of explanation
dist = tfd.Normal(jnp.zeros(5), jnp.ones(5))
s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))
print(dist.log_prob(s).shape)
dist = tfd.Independent(tfd.Normal(jnp.zeros(5), jnp.ones(5)), 1)
s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))
print(dist.log_prob(s).shape)
Explanation: 샘플의 log_prob를 취하는 의미 체계는 JAX에서도 동일하게 작동합니다.
End of explanation
sns.distplot(tfd.Normal(0., 1.).sample(1000, seed=random.PRNGKey(0)))
plt.show()
Explanation: JAX DeviceArray는 NumPy 및 Matplotlib와 같은 라이브러리와 호환되므로 샘플을 직접 플로팅 함수에 공급할 수 있습니다.
End of explanation
sns.distplot(jit(vmap(lambda key: tfd.Normal(0., 1.).sample(seed=key)))(
random.split(random.PRNGKey(0), 2000)))
plt.show()
x = jnp.linspace(-5., 5., 100)
plt.plot(x, jit(vmap(grad(tfd.Normal(0., 1.).prob)))(x))
plt.show()
Explanation: Distribution 메서드는 JAX 변환과 호환됩니다.
End of explanation
@jit
def random_distribution(key):
loc_key, scale_key = random.split(key)
loc, log_scale = random.normal(loc_key), random.normal(scale_key)
return tfd.Normal(loc, jnp.exp(log_scale))
random_dist = random_distribution(random.PRNGKey(0))
print(random_dist.mean(), random_dist.variance())
Explanation: TFP 분포는 JAX pytree 노드로 등록되어 있기 때문에 분포를 입력 또는 출력으로 사용하여 함수를 작성하고 이러한 함수를 jit를 사용하여 변환할 수는 있지만 아직 vmap를 적용한 함수에 대한 인수로는 지원되지 않습니다.
End of explanation
dist = tfd.TransformedDistribution(
tfd.Normal(0., 1.),
tfb.Sigmoid()
)
sns.distplot(dist.sample(1000, seed=random.PRNGKey(0)))
plt.show()
Explanation: 변환된 분포
샘플이 Bijector를 통해 전달된 분포와 같이 변환된 분포도 기본적으로 작동합니다(bijectors도 작동합니다! 아래 참조).
End of explanation
dist = tfd.JointDistributionSequential([
tfd.Normal(0., 1.),
lambda x: tfd.Normal(x, 1e-1)
])
plt.scatter(*dist.sample(1000, seed=random.PRNGKey(0)), alpha=0.5)
plt.show()
joint = tfd.JointDistributionNamed(dict(
e= tfd.Exponential(rate=1.),
n= tfd.Normal(loc=0., scale=2.),
m=lambda n, e: tfd.Normal(loc=n, scale=e),
x=lambda m: tfd.Sample(tfd.Bernoulli(logits=m), 12),
))
joint.sample(seed=random.PRNGKey(0))
Root = tfd.JointDistributionCoroutine.Root
def model():
e = yield Root(tfd.Exponential(rate=1.))
n = yield Root(tfd.Normal(loc=0, scale=2.))
m = yield tfd.Normal(loc=n, scale=e)
x = yield tfd.Sample(tfd.Bernoulli(logits=m), 12)
joint = tfd.JointDistributionCoroutine(model)
joint.sample(seed=random.PRNGKey(0))
Explanation: 결합 분포
TFP는 구성 요소 분포를 여러 무작위 변수에 대한 단일 분포로 결합할 수 있도록 JointDistribution를 제공합니다. 현재 TFP는 JAX에서 지원되는 3가지 핵심 변화형(JointDistributionSequential, JointDistributionNamed 및 JointDistributionCoroutine)을 제공합니다. AutoBatched 변화형도 모두 지원합니다.
End of explanation
k1, k2, k3 = random.split(random.PRNGKey(0), 3)
observation_noise_variance = 0.01
f = lambda x: jnp.sin(10*x[..., 0]) * jnp.exp(-x[..., 0]**2)
observation_index_points = random.uniform(
k1, [50], minval=-1.,maxval= 1.)[..., jnp.newaxis]
observations = f(observation_index_points) + tfd.Normal(
loc=0., scale=jnp.sqrt(observation_noise_variance)).sample(seed=k2)
index_points = jnp.linspace(-1., 1., 100)[..., jnp.newaxis]
kernel = tfpk.ExponentiatedQuadratic(length_scale=0.1)
gprm = tfd.GaussianProcessRegressionModel(
kernel=kernel,
index_points=index_points,
observation_index_points=observation_index_points,
observations=observations,
observation_noise_variance=observation_noise_variance)
samples = gprm.sample(10, seed=k3)
for i in range(10):
plt.plot(index_points, samples[i], alpha=0.5)
plt.plot(observation_index_points, observations, marker='o', linestyle='')
plt.show()
Explanation: 기타 분포
가우시안 프로세스도 JAX 모드에서 작동합니다!
End of explanation
initial_distribution = tfd.Categorical(probs=[0.8, 0.2])
transition_distribution = tfd.Categorical(probs=[[0.7, 0.3],
[0.2, 0.8]])
observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])
model = tfd.HiddenMarkovModel(
initial_distribution=initial_distribution,
transition_distribution=transition_distribution,
observation_distribution=observation_distribution,
num_steps=7)
print(model.mean())
print(model.log_prob(jnp.zeros(7)))
print(model.sample(seed=random.PRNGKey(0)))
Explanation: Hidden Markov 모델도 지원합니다.
End of explanation
tfb.Exp().inverse(1.)
bij = tfb.Shift(1.)(tfb.Scale(3.))
print(bij.forward(jnp.ones(5)))
print(bij.inverse(jnp.ones(5)))
b = tfb.FillScaleTriL(diag_bijector=tfb.Exp(), diag_shift=None)
print(b.forward(x=[0., 0., 0.]))
print(b.inverse(y=[[1., 0], [.5, 2]]))
b = tfb.Chain([tfb.Exp(), tfb.Softplus()])
# or:
# b = tfb.Exp()(tfb.Softplus())
print(b.forward(-jnp.ones(5)))
Explanation: TensorFlow 또는 XLA 비호환성에 대한 엄격한 종속성으로 인해 PixelCNN와 같은 몇몇 분포는 아직 지원하지 않습니다.
Bijectors
현재 JAX는 대부분의 TFP Bijector를 지원하고 있습니다!
End of explanation
jit(vmap(tfb.Exp().inverse))(jnp.arange(4.))
x = jnp.linspace(0., 1., 100)
plt.plot(x, jit(grad(lambda x: vmap(tfb.Sigmoid().inverse)(x).sum()))(x))
plt.show()
Explanation: Bijector는 jit, grad 및 vmap과 같은 JAX 변환과 호환됩니다.
End of explanation
target_log_prob = tfd.MultivariateNormalDiag(jnp.zeros(2), jnp.ones(2)).log_prob
Explanation: RealNVP 및 FFJORD와 같은 일부 Bijector는 아직 지원하지 않습니다.
MCMC
tfp.mcmc도 JAX로 이식했으므로 JAX에서 Hamiltonian Monte Carlo(HMC) 및 No-U-Turn-Sampler(NUTS)와 같은 알고리즘도 실행할 수 있습니다.
End of explanation
def run_chain(key, state):
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, 1e-1)
return tfp.mcmc.sample_chain(1000,
current_state=state,
kernel=kernel,
trace_fn=lambda _, results: results.target_log_prob,
seed=key)
states, log_probs = jit(run_chain)(random.PRNGKey(0), jnp.zeros(2))
plt.figure()
plt.scatter(*states.T, alpha=0.5)
plt.figure()
plt.plot(log_probs)
plt.show()
Explanation: TFP on TF와 달리 seed 키워드 인수를 사용하여 PRNGKey를 sample_chain로 전달해야 합니다.
End of explanation
states, log_probs = jit(run_chain)(random.PRNGKey(0), jnp.zeros([10, 2]))
plt.figure()
for i in range(10):
plt.scatter(*states[:, i].T, alpha=0.5)
plt.figure()
for i in range(10):
plt.plot(log_probs[:, i], alpha=0.5)
plt.show()
Explanation: 여러 체인을 실행하기 위해 상태 배치를 sample_chain에 전달하거나 vmap를 사용할 수 있습니다(두 접근법의 성능 차이는 아직 조사하지 않음).
End of explanation
minimum = jnp.array([1.0, 1.0]) # The center of the quadratic bowl.
scales = jnp.array([2.0, 3.0]) # The scales along the two axes.
# The objective function and the gradient.
def quadratic_loss(x):
return jnp.sum(scales * jnp.square(x - minimum))
start = jnp.array([0.6, 0.8]) # Starting point for the search.
Explanation: 옵티마이저
TFP on JAX는 BFGS 및 L-BFGS와 같은 일부 중요한 옵티마이저를 지원합니다. 조정된 간단한 제곱 손실 함수를 설정해보도록 하겠습니다.
End of explanation
optim_results = tfp.optimizer.bfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
# Check that the search converged
assert(optim_results.converged)
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
# Print out the total number of function evaluations it took. Should be 5.
print("Function evaluations: %d" % optim_results.num_objective_evaluations)
Explanation: BFGS는 이러한 손실의 최소값을 찾을 수 있습니다.
End of explanation
optim_results = tfp.optimizer.lbfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
# Check that the search converged
assert(optim_results.converged)
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
# Print out the total number of function evaluations it took. Should be 5.
print("Function evaluations: %d" % optim_results.num_objective_evaluations)
Explanation: L-BFGS도 마찬가지입니다.
End of explanation
def optimize_single(start):
return tfp.optimizer.lbfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
all_results = jit(vmap(optimize_single))(
random.normal(random.PRNGKey(0), (10, 2)))
assert all(all_results.converged)
for i in range(10):
np.testing.assert_allclose(optim_results.position[i], minimum)
print("Function evaluations: %s" % all_results.num_objective_evaluations)
Explanation: vmap L-BFGS를 하기 위해 단일 시작점에 대한 손실을 최적화하는 함수를 설정해보겠습니다.
End of explanation
key = random.PRNGKey(0) # Creates a key with value [0, 0]
print(key)
Explanation: 주의 사항
TF와 JAX 사이에는 몇 가지 근본적인 차이점이 있으며, 일부 TFP 동작은 두 기판 사이에서 다르게 되고 일부 기능을 지원하지 않게 됩니다. 예를 들면, 다음과 같습니다.
TFP on JAX는 tf.Variable와 같은 것을 지원하지 않는데, JAX에는 이러한 것이 존재하지 않기 때문입니다. 이는 또한 tfp.util.TransformedVariable와 같은 유틸리티가 지원되지 않음을 의미합니다.
tfp.layers는 Keras 및 tf.Variable에 대한 종속성으로 인해 아직 백엔드에서 지원되지 않습니다.
tfp.math.minimize는 tf.Variable에 대한 종속성 때문에 TFP on JAX에서 작동하지 않습니다.
TFP on JAX를 사용하면 텐서 형상이 항상 구체적인 정수값이며 TFP on TF에서와 같이 알 수 없거나 동적이지 않습니다.
의사 난수는 TF와 JAX에서 다르게 처리됩니다(부록 참조).
tfp.experimental의 라이브러리는 JAX 기판에 존재할 것으로 보장되지 않습니다.
Dtype 프로모션 규칙은 TF와 JAX 사이에서 다릅니다. TFP on JAX는 일관성을 위해 내부적으로 TF의 dtype 의미 체계를 지키려고 합니다.
Bijectors는 아직 JAX pytrees로 등록되지 않았습니다.
TFP on JAX에서 지원되는 내용의 전체 목록을 보려면 API 설명서를 참조하세요.
결론
우리는 다수의 TFP 기능을 JAX로 이식했으며 모두가 무엇을 구축하게 될지 기대하고 있습니다. 일부 기능은 아직 지원되지 않습니다. 여러분을 위해 우리가 놓친 부분이 있다면(또는 버그를 발견한다면!) 연락해 주시기 바랍니다. [email protected]로 이메일을 보내거나 Github 리포지토리에서 문제를 보고할 수 있습니다.
부록: JAX의 의사 난수
JAX의 의사 난수 생성(PRNG) 모델은 상태 비저장 모델입니다. 상태 저장 모델과 달리 각 난수 추첨 후에 진화하는 변경 가능한 전역 상태가 없습니다. JAX 모델에서 우리는 PRNG 키로 시작하며, 이는 32비트 정수 쌍으로 작동합니다. jax.random.PRNGKey를 사용하여 이러한 키를 구성할 수 있습니다.
End of explanation
print(random.normal(key))
Explanation: JAX에서 난수 함수는 키를 사용하여 결정론적으로 난수 변량을 생성하므로 다시 사용해서는 안 됩니다. 예를 들어 key를 사용하여 정규 분포 값을 샘플링할 수는 있지만 다른 곳에서 key를 다시 사용해서는 안 됩니다. 또한, random.normal에 동일한 값을 전달하면 동일한 값이 생성됩니다.
End of explanation
key1, key2 = random.split(key, num=2)
print(key1, key2)
Explanation: 그렇다면 어떻게 해야 단일 키로 여러 샘플을 이끌어 낼 수 있을까요? 정답은 키 분할입니다. 기본 개념은 PRNGKey를 여러 개로 분할할 수 있으며, 각각의 새 키를 독립적인 난수 소스로 취급할 수 있다는 것입니다.
End of explanation
print(random.normal(key1), random.normal(key2))
Explanation: 키 분할은 결정론적이지만 혼란스럽기 때문에 이제 각각의 새 키를 사용하여 고유한 난수 샘플을 이끌어낼 수 있습니다.
End of explanation |
9,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Indexing and selecting data
Step1: More on NumPy indexing
Step2: Fancy indexing
Apart from indexing with integers and slices NumPy also supports indexing with arrays of integers (so-called fancy indexing). For example, to get the 2nd and 4th element of a
Step3: Boolean indexing
To select data fulfilling specific criteria, one can use the bolean indexing. This is best illustrated on 1D arrays; for example, lets select only positive elements of a
Step4: Note that the index array has the same size as and type of boolean
Step5: Multiple criteria can be also combine in one query
Step6: <div class="alert alert-success">
<b>EXERCISE</b>
Step7: We can use fancy indexing with the rich index
Step8: Similarly, boolean indexing can be used to filter the Series. Lets select countries with population of more than 20 millions
Step9: You can also do position-based indexing by using integers instead of labels
Step10: Indexing DataFrame
Step11: Some notes on selecting data
Data frames allow for labeling rows and columns, but this makes indexing also a bit more complex compared to 1D NumPy's array and pandas Series. We now have to distuinguish between
Step12: or multiple columns using fancy indexing
Step13: But, slicing accesses the rows
Step14: We can also select rows similarly to the boolean indexing in numpy. The boolean mask should be 1-dimensional and the same length as the thing being indexed. Boolean indexing of DataFrame can be used like the WHERE clause of SQL to select rows matching some criteria
Step15: So as a summary, [] provides the following convenience shortcuts
Step16: But the row or column indexer can also be a list, slice, boolean array, ..
Step17: Selecting by position with iloc works similar as indexing numpy arrays
Step18: The different indexing methods can also be used to assign data
Step19: <div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Explanation: Indexing and selecting data
End of explanation
a = np.array([-2, 3, 4, -5, 5])
print(a)
Explanation: More on NumPy indexing
End of explanation
a[[1, 3]]
Explanation: Fancy indexing
Apart from indexing with integers and slices NumPy also supports indexing with arrays of integers (so-called fancy indexing). For example, to get the 2nd and 4th element of a:
End of explanation
a[a > 0]
Explanation: Boolean indexing
To select data fulfilling specific criteria, one can use the bolean indexing. This is best illustrated on 1D arrays; for example, lets select only positive elements of a:
End of explanation
print(a)
print(a > 0)
Explanation: Note that the index array has the same size as and type of boolean:
End of explanation
a[(a > 0) & (a < 5)]
Explanation: Multiple criteria can be also combine in one query:
End of explanation
pop_dict = {'Germany': 81.3,
'Belgium': 11.3,
'France': 64.3,
'United Kingdom': 64.9,
'Netherlands': 16.9}
population = pd.Series(pop_dict)
print(population)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Select all odd numbers from the array <code>a</code>
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Select <b>negative</b> odd numbers from the array <code>a</code>
</div>
Indexing pandas Series
Series can be indexed similarly to 1D NumPy array.
End of explanation
population[['Netherlands', 'Germany']]
Explanation: We can use fancy indexing with the rich index:
End of explanation
population[population > 20]
Explanation: Similarly, boolean indexing can be used to filter the Series. Lets select countries with population of more than 20 millions:
End of explanation
population[:2]
Explanation: You can also do position-based indexing by using integers instead of labels:
End of explanation
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
countries = countries.set_index('country')
countries
Explanation: Indexing DataFrame
End of explanation
countries['area']
Explanation: Some notes on selecting data
Data frames allow for labeling rows and columns, but this makes indexing also a bit more complex compared to 1D NumPy's array and pandas Series. We now have to distuinguish between:
selection of rows or columns,
selection by label or position.
[] provides some convenience shortcuts
For a DataFrame, basic indexing selects the columns.
Selecting a single column:
End of explanation
countries[['area', 'population']]
Explanation: or multiple columns using fancy indexing:
End of explanation
countries['France':'Netherlands']
Explanation: But, slicing accesses the rows:
End of explanation
countries[countries['area'] > 100000]
Explanation: We can also select rows similarly to the boolean indexing in numpy. The boolean mask should be 1-dimensional and the same length as the thing being indexed. Boolean indexing of DataFrame can be used like the WHERE clause of SQL to select rows matching some criteria:
End of explanation
countries.loc['Germany', 'area']
Explanation: So as a summary, [] provides the following convenience shortcuts:
<table>
<tr>
<td></td>
<td>NumPy/`Series`</td>
<td>`DataFrame`</td>
</tr>
<tr>
<td>Integer index<br>`data[label]`</td>
<td>single element</td>
<td>single **column**</td>
</tr>
<tr>
<td>Slice<br>`data[label1:label2]`</td>
<td>sequence</td>
<td>one or more **rows**</td>
</tr>
<tr>
<td>Fancy indexing<br>`data[[label1,label2]]`</td>
<td>sequence</td>
<td>one or more **columns**</td>
</tr>
<tr>
<td>Boolean indexing<br>`data[mask]`</td>
<td>sequence</td>
<td>one or more **rows**</td>
</tr>
</table>
<div class="alert alert-success">
<b>EXERCISE</b>: Calculate the area of Germany relative to the total area of all other countries in the data frame. *Hint*: you can compare the index of the data frame to any string
</div>
Systematic indexing with loc and iloc
When using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:
loc: selection by label
iloc: selection by position
These methods index the different dimensions of the frame:
df.loc[row_indexer, column_indexer]
df.iloc[row_indexer, column_indexer]
Selecting a single element:
End of explanation
countries.loc['France':'Germany', ['area', 'population']]
Explanation: But the row or column indexer can also be a list, slice, boolean array, ..
End of explanation
countries.iloc[:2,1:3]
Explanation: Selecting by position with iloc works similar as indexing numpy arrays:
End of explanation
countries2 = countries.copy()
countries2.loc['Belgium':'Germany', 'population'] = 10
countries2
Explanation: The different indexing methods can also be used to assign data:
End of explanation
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Add a column `density` with the population density (note: population column is expressed in millions)
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Select the capital and the population column of those countries where the density is larger than 300
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: List names, capitals and population densities of two countries with highest population density.
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Change the capital of the UK to Cambridge
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Select all countries whose population density is between 100 and 300 people/km²
</div>
More exercises!
For the quick ones among you, here are some more exercises with some larger dataframe with film data. These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
End of explanation |
9,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to use
maximum_superleave_length indicates the maximum length of superleaves to consider. Right now, maximum runnable on my local machine is 5.
ev_calculator_max_length indicates the maximum length of superleave that we want to calculate the EV for based on a log file of games (log_file).
To-do
Have creation of duplicates go in reverse so that all necessary columns are present.
Changelog
1/27/19 - Determined that the speed of creation of the rack dataframes is a function of the length of the dataframe. From that, realized that we should organize leaves by least-frequent to most-frequent letter, such that sub-dataframes are created from the shortest racks possible.
Step1: Create a dictionary of all possible 1 to 6-tile leaves. Also, add functionality for sorting by an arbitrary key - allowing us to put rarest letters first
Step2: The bottom creates the full set of leaves for all lengths from 1-5 (6 breaks on my local machine)
Step3: Order rack (originally alphabetical, but now custom key with rarest letters first for maximum efficiency). Note that this is slower than alphabetical organization because it has to use the index function, but should be rewarded with subsequent performance enhancements.
Step4: Set up dataframe for storing EV of all leaves.
Step5: To find all of the racks corresponding to a particular leave, we have added columns to the dataframe of plays df marking each letter (A, B, C...) and also for duplicates (AA, BB, CC...) and triplicates where possible (AAA, DDD, EEE...).
If the letters in a given leave are all different, we can look for rows by using df['A']&df['B']. However, if there are duplicates involved, we have to look for df['AA']. The following function gives the correct dataframe columns to be looked up.
Step6: Benchmark figures
With 2M racks, following amount of time was taken
Step7: Calculate leave "synergy", in other words the difference between the EV of the rack and what we'd expect just from adding the individual values of the tiles | Python Code:
from itertools import combinations
import numpy as np
import pandas as pd
import seaborn as sns
from string import ascii_uppercase
import time as time
%matplotlib inline
maximum_superleave_length = 5
ev_calculator_max_length = 5
log_file = 'log_games.csv'
Explanation: How to use
maximum_superleave_length indicates the maximum length of superleaves to consider. Right now, maximum runnable on my local machine is 5.
ev_calculator_max_length indicates the maximum length of superleave that we want to calculate the EV for based on a log file of games (log_file).
To-do
Have creation of duplicates go in reverse so that all necessary columns are present.
Changelog
1/27/19 - Determined that the speed of creation of the rack dataframes is a function of the length of the dataframe. From that, realized that we should organize leaves by least-frequent to most-frequent letter, such that sub-dataframes are created from the shortest racks possible.
End of explanation
tilebag = ['A']*9+['B']*2+['C']*2+['D']*4+['E']*12+\
['F']*2+['G']*3+['H']*2+['I']*9+['J']*1+\
['K']*1+['L']*4+['M']*2+['N']*6+['O']*8+\
['P']*2+['Q']*1+['R']*6+['S']*4+['T']*6+\
['U']*4+['V']*2+['W']*2+['X']*1+['Y']*2+\
['Z']*1+['?']*2
tiles = [x for x in ascii_uppercase] + ['?']
# potential future improvement: calculate optimal order of letters on the fly
rarity_key = 'ZXKJQ?HYMFPWBCVSGDLURTNAOIE'
# alphabetical_key = '?ABCDEFGHIJKLMNOPQRSTUVWXYZ'
sort_func = lambda x: rarity_key.index(x)
t0 = time.time()
leaves = {i:sorted(list(set(list(combinations(tilebag,i))))) for i in
range(1,maximum_superleave_length+1)}
# turn leaves from lists of letters into strings
# algorithm runs faster if leaves are non-alphabetical!
for i in range(1,maximum_superleave_length+1):
leaves[i] = [''.join(sorted(leave, key=sort_func))
for leave in leaves[i]]
t1 = time.time()
print('Calculated superleaves up to length {} in {} seconds'.format(
maximum_superleave_length,t1-t0))
Explanation: Create a dictionary of all possible 1 to 6-tile leaves. Also, add functionality for sorting by an arbitrary key - allowing us to put rarest letters first
End of explanation
for i in range(1,maximum_superleave_length+1):
print(i,len(leaves[i]))
column_dict = {
0:'rack',
1:'score',
2:'tiles_remaining'
}
df = pd.read_csv(log_file, header=None, keep_default_na=False)
df.rename(columns=column_dict,inplace=True)
tile_limit = 1
df = df.loc[df['tiles_remaining']>=tile_limit]
df = df.iloc[:2000000]
Explanation: The cell below creates the full set of leaves for all lengths from 1-5 (length 6 breaks on my local machine)
End of explanation
t0 = time.time()
df['rack'] = df['rack'].apply(lambda x: ''.join(sorted(x, key=sort_func)))
t1 = time.time()
print(t1-t0)
tb = time.time()
df_dict = {'': df}
for multiple in range(1,maximum_superleave_length+1):
t0 = time.time()
# iterate through all 27 tiles
for c in leaves[1]:
if multiple*c in leaves[multiple]:
condition = df_dict[(multiple-1)*c]['rack'].apply(lambda x: multiple*c in x)
df_dict[multiple*c] = df_dict[(multiple-1)*c].loc[condition]
df[multiple*c] = condition
df[multiple*c].fillna(False, inplace=True)
t1 = time.time()
print('Added columns for all duplicates up to length {} in {} seconds'.format(multiple,t1-t0))
te = time.time()
print('Added all necessary columns in {} seconds'.format(te-tb))
Explanation: Order rack (originally alphabetical, but now custom key with rarest letters first for maximum efficiency). Note that this is slower than alphabetical organization because it has to use the index function, but should be rewarded with subsequent performance enhancements.
End of explanation
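# Quick illustration (not part of the original notebook) of what the rarity-key ordering buys us:
# the rarest letters come first, so the per-letter sub-dataframes built below are derived from
# the smallest possible subsets, e.g.
print(''.join(sorted('AEINRST', key=sort_func)))  # -> 'SRTNAIE' with the rarity_key defined above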
all_leaves = []
for i in range(1,ev_calculator_max_length+1):
all_leaves += leaves[i]
df_dict = {leave: pd.DataFrame() for leave in all_leaves}
df_dict[''] = df
ev_df = pd.DataFrame(columns=['mean','std','count','ev','synergy'],
index=all_leaves)
Explanation: Set up dataframe for storing EV of all leaves.
End of explanation
def get_columns(leave):
letters=list(set(leave))
tags = []
for l in letters:
tags += [sum([l==letter for letter in leave])*l]
return tags
for leave_length in range(3,5):
print(leave_length)
t0 = time.time()
for leave in leaves[leave_length]:
print(leave)
print(len(df_dict[leave[:-1]]))
t2 = time.time()
condition = df_dict[leave[:-1]][get_columns(leave)].all(axis=1)
t3 = time.time()
df_dict[leave] = df_dict[leave[:-1]].loc[condition]
t4 = time.time()
ev_df.loc[leave]['mean'] = df_dict[leave]['score'].mean()
t5 = time.time()
ev_df.loc[leave]['std'] = df_dict[leave]['score'].std()
t6 = time.time()
ev_df.loc[leave]['count'] = len(df_dict[leave])
t7 = time.time()
print('condition calc time (ms): {:.5f} ({})'.format(1000*(t3-t2),100*(t3-t2)/(t7-t2)))
print('dataframe slice time (ms): {:.5f} ({})'.format(1000*(t4-t3),100*(t4-t3)/(t7-t2)))
print('mean calc time (ms): {:.5f} ({})'.format(1000*(t5-t4),100*(t5-t4)/(t7-t2)))
print('std calc time (ms): {:.5f} ({})'.format(1000*(t6-t5),100*(t6-t5)/(t7-t2)))
print('count calc time (ms): {:.5f} ({})'.format(1000*(t7-t6),100*(t7-t6)/(t7-t2)))
t1 = time.time()
print('Calculated mean, std and count in {} seconds'.format(t1-t0))
Explanation: To find all of the racks corresponding to a particular leave, we have added columns to the dataframe of plays df marking each letter (A, B, C...) and also for duplicates (AA, BB, CC...) and triplicates where possible (AAA, DDD, EEE...).
If the letters in a given leave are all different, we can look for rows by using df['A']&df['B']. However, if there are duplicates involved, we have to look for df['AA']. The following function gives the correct dataframe columns to be looked up.
End of explanation
for leave_length in range(1,ev_calculator_max_length+1):
print(leave_length)
t0 = time.time()
for leave in leaves[leave_length]:
condition = df_dict[leave[:-1]][get_columns(leave)].all(axis=1)
df_dict[leave] = df_dict[leave[:-1]].loc[condition]
ev_df.loc[leave]['mean'] = df_dict[leave]['score'].mean()
ev_df.loc[leave]['std'] = df_dict[leave]['score'].std()
ev_df.loc[leave]['count'] = len(df_dict[leave])
t1 = time.time()
print('Calculated mean, std and count in {} seconds'.format(t1-t0))
ev_df['pct'] = 100*ev_df['count']/len(df)
ev_df['ev'] = ev_df['mean']-df['score'].mean()
Explanation: Benchmark figures
With 2M racks, the following amounts of time were taken:
* 1 - 3s (.11s/leave)
* 2 - 15s (.04s/leave)
* 3 - 38s (.011s/leave)
* 4 - 84s (.0033s/leave)
* 5 - 244s (.0016s/leave)
-> 383s total
Using improvement of non-alphabetical leaves, performance was as follows:
* 1 - 3s (.11s/leave)
* 2 - 10s (.027s/leave)
* 3 - 23s (.0066s/leave)
* 4 - 64s (.0025s/leave)
* 5 - 226s (.0015s/leave)
-> 326s total (15% faster than the previous version)
End of explanation
for leave_length in range(2,ev_calculator_max_length+1):
for leave in leaves[leave_length]:
ev_df.loc[leave]['synergy'] = ev_df.loc[leave]['ev']-\
sum([ev_df.loc[c]['ev'] for c in leave])
ev_df
ev_df.to_csv('leave_values_011219_v7.csv')
ev_df.sort_values('synergy')
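# Quick sanity check of the synergy definition (illustration only, not from the original notebook):
# for the two-tile leave 'QU', the stored synergy should equal ev('QU') - (ev('Q') + ev('U')),
# up to floating-point error.
print(ev_df.loc['QU', 'synergy'],
      ev_df.loc['QU', 'ev'] - (ev_df.loc['Q', 'ev'] + ev_df.loc['U', 'ev']))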
Explanation: Calculate leave "synergy", in other words the difference between the EV of the rack and what we'd expect just from adding the individual values of the tiles
End of explanation |
9,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 16 – Reinforcement Learning
This notebook contains all the sample code and solutions to the exercises in chapter 16.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: Note
Step2: Next we will load the MsPacman environment, version 0.
Step3: Let's initialize the environment by calling its reset() method. This returns an observation
Step4: Observations vary depending on the environment. In this case it is an RGB image represented as a 3D NumPy array of shape [width, height, channels] (with 3 channels
Step5: An environment can be visualized by calling its render() method, and you can pick the rendering mode (the rendering options depend on the environment). In this example we will set mode="rgb_array" to get an image of the environment as a NumPy array
Step6: Let's plot this image
Step7: Welcome back to the 1980s!
Step8: Let's create a little helper function to plot an environment
Step9: Let's see how to interact with an environment. Your agent will need to select an action from an "action space" (the set of possible actions). Let's see what this environment's action space looks like
Step10: Discrete(9) means that the possible actions are integers 0 through 8, which represents the 9 possible positions of the joystick (0=center, 1=up, 2=right, 3=left, 4=down, 5=upper-right, 6=upper-left, 7=lower-right, 8=lower-left).
Next we need to tell the environment which action to play, and it will compute the next step of the game. Let's go left for 110 steps, then lower left for 40 steps
Step11: Where are we now?
Step12: The step() function actually returns several important objects
Step13: The observation tells the agent what the environment looks like, as discussed earlier. This is a 210x160 RGB image
Step14: The environment also tells the agent how much reward it got during the last step
Step15: When the game is over, the environment returns done=True
Step16: Finally, info is an environment-specific dictionary that can provide some extra information about the internal state of the environment. This is useful for debugging, but your agent should not use this information for learning (it would be cheating).
Step17: Let's play one full game (with 3 lives), by moving in random directions for 10 steps at a time, recording each frame
Step18: Now show the animation (it's a bit jittery within Jupyter)
Step19: Once you have finished playing with an environment, you should close it to free up resources
Step20: To code our first learning agent, we will be using a simpler environment
Step21: The observation is a 1D NumPy array composed of 4 floats
Step22: Now let's look at the action space
Step23: Yep, just two possible actions
Step24: Notice that the game is over when the pole tilts too much, not when it actually falls. Now let's reset the environment and push the cart to right instead
Step25: Looks like it's doing what we're telling it to do. Now how can we make the pole remain upright? We will need to define a policy for that. This is the strategy that the agent will use to select an action at each step. It can use all the past actions and observations to decide what to do.
A simple hard-coded policy
Let's hard code a simple strategy
Step26: Nope, the system is unstable and after just a few wobbles, the pole ends up too tilted
Step27: In this particular environment, the past actions and observations can safely be ignored, since each observation contains the environment's full state. If there were some hidden state then you may need to consider past actions and observations in order to try to infer the hidden state of the environment. For example, if the environment only revealed the position of the cart but not its velocity, you would have to consider not only the current observation but also the previous observation in order to estimate the current velocity. Another example is if the observations are noisy
Step28: Now let's look at how well this randomly initialized policy network performed
Step29: Yeah... pretty bad. The neural network will have to learn to do better. First let's see if it is capable of learning the basic policy we used earlier
Step30: We can make the same net play in 10 different environments in parallel, and train for 1000 iterations. We also reset environments when they are done.
Step31: Looks like it learned the policy correctly. Now let's see if it can learn a better policy on its own.
Policy Gradients
To train this neural network we will need to define the target probabilities y. If an action is good we should increase its probability, and conversely if it is bad we should reduce it. But how do we know whether an action is good or bad? The problem is that most actions have delayed effects, so when you win or lose points in a game, it is not clear which actions contributed to this result
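For reference, the credit assignment scheme used in the code below (the discount_rewards() and discount_and_normalize_rewards() helpers) can be written as follows: each action taken at step $t$ is credited with the discounted return $R_t = \sum_{k \geq 0} \gamma^k \, r_{t+k}$, and these returns are then normalized (mean subtracted, divided by the standard deviation over all games) before being used to scale the gradients of the chosen actions.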
Step32: Markov Chains
Step33: Markov Decision Process
Step34: Q-Learning
Q-Learning works by watching an agent play (e.g., randomly) and gradually improving its estimates of the Q-Values. Once it has accurate Q-Value estimates (or close enough), then the optimal policy consists in choosing the action that has the highest Q-Value (i.e., the greedy policy).
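For reference, the update performed in the Q-Learning loop further down is the standard temporal-difference rule, with learning rate $\alpha$ and discount rate $\gamma$:
$Q(s, a) \gets (1 - \alpha) \, Q(s, a) + \alpha \left( r + \gamma \, \underset{a'}{\max} \, Q(s', a') \right)$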
Step35: Learning to Play MsPacman Using the DQN Algorithm
Warning
Step36: Preprocessing
Preprocessing the images is optional but greatly speeds up training.
Step37: Build DQN
Note
Step38: Note
Step39: A few variables for tracking progress
Step40: And now the main training loop!
Step41: You can interrupt the cell above at any time to test your agent using the cell below. You can then run the cell above once again, it will load the last parameters saved and resume training.
Step42: Extra material
Preprocessing for Breakout
Here is a preprocessing function you can use to train a DQN for the Breakout-v0 Atari game
Step43: As you can see, a single image does not give you the direction and speed of the ball, which are crucial informations for playing this game. For this reason, it is best to actually combine several consecutive observations to create the environment's state representation. One way to do that is to create a multi-channel image, with one channel per recent observation. Another is to merge all recent observations into a single-channel image, using np.max(). In this case, we need to dim the older images so that the DQN can distinguish the past from the present. | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
import sys
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures and animations
%matplotlib nbagg
import matplotlib
import matplotlib.animation as animation
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rl"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
Explanation: Chapter 16 – Reinforcement Learning
This notebook contains all the sample code and solutions to the exercises in chapter 16.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
import gym
Explanation: Note: there may be minor differences between the output of this notebook and the examples shown in the book. You can safely ignore these differences. They are mainly due to the fact that most of the environments provided by OpenAI gym have some randomness.
Introduction to OpenAI gym
In this notebook we will be using OpenAI gym, a great toolkit for developing and comparing Reinforcement Learning algorithms. It provides many environments for your learning agents to interact with. Let's start by importing gym:
End of explanation
env = gym.make('MsPacman-v0')
Explanation: Next we will load the MsPacman environment, version 0.
End of explanation
obs = env.reset()
Explanation: Let's initialize the environment by calling its reset() method. This returns an observation:
End of explanation
obs.shape
Explanation: Observations vary depending on the environment. In this case it is an RGB image represented as a 3D NumPy array of shape [width, height, channels] (with 3 channels: Red, Green and Blue). In other environments it may return different objects, as we will see later.
End of explanation
img = env.render(mode="rgb_array")
Explanation: An environment can be visualized by calling its render() method, and you can pick the rendering mode (the rendering options depend on the environment). In this example we will set mode="rgb_array" to get an image of the environment as a NumPy array:
End of explanation
plt.figure(figsize=(5,4))
plt.imshow(img)
plt.axis("off")
save_fig("MsPacman")
plt.show()
Explanation: Let's plot this image:
End of explanation
(img == obs).all()
Explanation: Welcome back to the 1980s! :)
In this environment, the rendered image is simply equal to the observation (but in many environments this is not the case):
End of explanation
def plot_environment(env, figsize=(5,4)):
plt.close() # or else nbagg sometimes plots in the previous cell
plt.figure(figsize=figsize)
img = env.render(mode="rgb_array")
plt.imshow(img)
plt.axis("off")
plt.show()
Explanation: Let's create a little helper function to plot an environment:
End of explanation
env.action_space
Explanation: Let's see how to interact with an environment. Your agent will need to select an action from an "action space" (the set of possible actions). Let's see what this environment's action space looks like:
End of explanation
env.reset()
for step in range(110):
env.step(3) #left
for step in range(40):
env.step(8) #lower-left
Explanation: Discrete(9) means that the possible actions are integers 0 through 8, which represents the 9 possible positions of the joystick (0=center, 1=up, 2=right, 3=left, 4=down, 5=upper-right, 6=upper-left, 7=lower-right, 8=lower-left).
Next we need to tell the environment which action to play, and it will compute the next step of the game. Let's go left for 110 steps, then lower left for 40 steps:
End of explanation
plot_environment(env)
Explanation: Where are we now?
End of explanation
obs, reward, done, info = env.step(0)
Explanation: The step() function actually returns several important objects:
End of explanation
obs.shape
Explanation: The observation tells the agent what the environment looks like, as discussed earlier. This is a 210x160 RGB image:
End of explanation
reward
Explanation: The environment also tells the agent how much reward it got during the last step:
End of explanation
done
Explanation: When the game is over, the environment returns done=True:
End of explanation
info
Explanation: Finally, info is an environment-specific dictionary that can provide some extra information about the internal state of the environment. This is useful for debugging, but your agent should not use this information for learning (it would be cheating).
End of explanation
frames = []
n_max_steps = 1000
n_change_steps = 10
obs = env.reset()
for step in range(n_max_steps):
img = env.render(mode="rgb_array")
frames.append(img)
if step % n_change_steps == 0:
action = env.action_space.sample() # play randomly
obs, reward, done, info = env.step(action)
if done:
break
Explanation: Let's play one full game (with 3 lives), by moving in random directions for 10 steps at a time, recording each frame:
End of explanation
def update_scene(num, frames, patch):
patch.set_data(frames[num])
return patch,
def plot_animation(frames, repeat=False, interval=40):
plt.close() # or else nbagg sometimes plots in the previous cell
fig = plt.figure()
patch = plt.imshow(frames[0])
plt.axis('off')
return animation.FuncAnimation(fig, update_scene, fargs=(frames, patch), frames=len(frames), repeat=repeat, interval=interval)
video = plot_animation(frames)
plt.show()
Explanation: Now show the animation (it's a bit jittery within Jupyter):
End of explanation
env.close()
Explanation: Once you have finished playing with an environment, you should close it to free up resources:
End of explanation
env = gym.make("CartPole-v0")
obs = env.reset()
obs
Explanation: To code our first learning agent, we will be using a simpler environment: the Cart-Pole.
A simple environment: the Cart-Pole
The Cart-Pole is a very simple environment composed of a cart that can move left or right, and pole placed vertically on top of it. The agent must move the cart left or right to keep the pole upright.
End of explanation
from PIL import Image, ImageDraw
try:
from pyglet.gl import gl_info
openai_cart_pole_rendering = True # no problem, let's use OpenAI gym's rendering function
except Exception:
openai_cart_pole_rendering = False # probably no X server available, let's use our own rendering function
def render_cart_pole(env, obs):
if openai_cart_pole_rendering:
# use OpenAI gym's rendering function
return env.render(mode="rgb_array")
else:
# rendering for the cart pole environment (in case OpenAI gym can't do it)
img_w = 600
img_h = 400
cart_w = img_w // 12
cart_h = img_h // 15
pole_len = img_h // 3.5
pole_w = img_w // 80 + 1
x_width = 2
max_ang = 0.2
bg_col = (255, 255, 255)
cart_col = 0x000000 # Blue Green Red
pole_col = 0x669acc # Blue Green Red
pos, vel, ang, ang_vel = obs
img = Image.new('RGB', (img_w, img_h), bg_col)
draw = ImageDraw.Draw(img)
cart_x = pos * img_w // x_width + img_w // x_width
cart_y = img_h * 95 // 100
top_pole_x = cart_x + pole_len * np.sin(ang)
top_pole_y = cart_y - cart_h // 2 - pole_len * np.cos(ang)
draw.line((0, cart_y, img_w, cart_y), fill=0)
draw.rectangle((cart_x - cart_w // 2, cart_y - cart_h // 2, cart_x + cart_w // 2, cart_y + cart_h // 2), fill=cart_col) # draw cart
draw.line((cart_x, cart_y - cart_h // 2, top_pole_x, top_pole_y), fill=pole_col, width=pole_w) # draw pole
return np.array(img)
def plot_cart_pole(env, obs):
plt.close() # or else nbagg sometimes plots in the previous cell
img = render_cart_pole(env, obs)
plt.imshow(img)
plt.axis("off")
plt.show()
plot_cart_pole(env, obs)
Explanation: The observation is a 1D NumPy array composed of 4 floats: they represent the cart's horizontal position, its velocity, the angle of the pole (0 = vertical), and the angular velocity. Let's render the environment... unfortunately we need to fix an annoying rendering issue first.
Fixing the rendering issue
Some environments (including the Cart-Pole) require access to your display, which opens up a separate window, even if you specify the rgb_array mode. In general you can safely ignore that window. However, if Jupyter is running on a headless server (ie. without a screen) it will raise an exception. One way to avoid this is to install a fake X server like Xvfb. You can start Jupyter using the xvfb-run command:
$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
If Jupyter is running on a headless server but you don't want to worry about Xvfb, then you can just use the following rendering function for the Cart-Pole:
End of explanation
env.action_space
Explanation: Now let's look at the action space:
End of explanation
obs = env.reset()
while True:
obs, reward, done, info = env.step(0)
if done:
break
plt.close() # or else nbagg sometimes plots in the previous cell
img = render_cart_pole(env, obs)
plt.imshow(img)
plt.axis("off")
save_fig("cart_pole_plot")
img.shape
Explanation: Yep, just two possible actions: accelerate towards the left or towards the right. Let's push the cart left until the pole falls:
End of explanation
obs = env.reset()
while True:
obs, reward, done, info = env.step(1)
if done:
break
plot_cart_pole(env, obs)
Explanation: Notice that the game is over when the pole tilts too much, not when it actually falls. Now let's reset the environment and push the cart to right instead:
End of explanation
frames = []
n_max_steps = 1000
n_change_steps = 10
obs = env.reset()
for step in range(n_max_steps):
img = render_cart_pole(env, obs)
frames.append(img)
# hard-coded policy
position, velocity, angle, angular_velocity = obs
if angle < 0:
action = 0
else:
action = 1
obs, reward, done, info = env.step(action)
if done:
break
video = plot_animation(frames)
plt.show()
Explanation: Looks like it's doing what we're telling it to do. Now how can we make the pole remain upright? We will need to define a policy for that. This is the strategy that the agent will use to select an action at each step. It can use all the past actions and observations to decide what to do.
A simple hard-coded policy
Let's hard code a simple strategy: if the pole is tilting to the left, then push the cart to the left, and vice versa. Let's see if that works:
End of explanation
import tensorflow as tf
# 1. Specify the network architecture
n_inputs = 4 # == env.observation_space.shape[0]
n_hidden = 4 # it's a simple task, we don't need more than this
n_outputs = 1 # only outputs the probability of accelerating left
initializer = tf.contrib.layers.variance_scaling_initializer()
# 2. Build the neural network
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu,
kernel_initializer=initializer)
outputs = tf.layers.dense(hidden, n_outputs, activation=tf.nn.sigmoid,
kernel_initializer=initializer)
# 3. Select a random action based on the estimated probabilities
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)
init = tf.global_variables_initializer()
Explanation: Nope, the system is unstable and after just a few wobbles, the pole ends up too tilted: game over. We will need to be smarter than that!
Neural Network Policies
Let's create a neural network that will take observations as inputs, and output the action to take for each observation. To choose an action, the network will first estimate a probability for each action, then select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability p of the action 0 (left), and of course the probability of action 1 (right) will be 1 - p.
Note: instead of using the fully_connected() function from the tensorflow.contrib.layers module (as in the book), we now use the dense() function from the tf.layers module, which did not exist when this chapter was written. This is preferable because anything in contrib may change or be deleted without notice, while tf.layers is part of the official API. As you will see, the code is mostly the same.
The main differences relevant to this chapter are:
* the _fn suffix was removed in all the parameters that had it (for example the activation_fn parameter was renamed to activation).
* the weights parameter was renamed to kernel,
* the default activation is None instead of tf.nn.relu
End of explanation
n_max_steps = 1000
frames = []
with tf.Session() as sess:
init.run()
obs = env.reset()
for step in range(n_max_steps):
img = render_cart_pole(env, obs)
frames.append(img)
action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)})
obs, reward, done, info = env.step(action_val[0][0])
if done:
break
env.close()
Explanation: In this particular environment, the past actions and observations can safely be ignored, since each observation contains the environment's full state. If there were some hidden state then you may need to consider past actions and observations in order to try to infer the hidden state of the environment. For example, if the environment only revealed the position of the cart but not its velocity, you would have to consider not only the current observation but also the previous observation in order to estimate the current velocity. Another example is if the observations are noisy: you may want to use the past few observations to estimate the most likely current state. Our problem is thus as simple as can be: the current observation is noise-free and contains the environment's full state.
You may wonder why we are picking a random action based on the probability given by the policy network, rather than just picking the action with the highest probability. This approach lets the agent find the right balance between exploring new actions and exploiting the actions that are known to work well. Here's an analogy: suppose you go to a restaurant for the first time, and all the dishes look equally appealing so you randomly pick one. If it turns out to be good, you can increase the probability to order it next time, but you shouldn't increase that probability to 100%, or else you will never try out the other dishes, some of which may be even better than the one you tried.
Let's randomly initialize this policy neural network and use it to play one game:
End of explanation
video = plot_animation(frames)
plt.show()
Explanation: Now let's look at how well this randomly initialized policy network performed:
End of explanation
import tensorflow as tf
reset_graph()
n_inputs = 4
n_hidden = 4
n_outputs = 1
learning_rate = 0.01
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
y = tf.placeholder(tf.float32, shape=[None, n_outputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs)
outputs = tf.nn.sigmoid(logits) # probability of action 0 (left)
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(cross_entropy)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: Yeah... pretty bad. The neural network will have to learn to do better. First let's see if it is capable of learning the basic policy we used earlier: go left if the pole is tilting left, and go right if it is tilting right. The following code defines the same neural network but we add the target probabilities y, and the training operations (cross_entropy, optimizer and training_op):
End of explanation
n_environments = 10
n_iterations = 1000
envs = [gym.make("CartPole-v0") for _ in range(n_environments)]
observations = [env.reset() for env in envs]
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
target_probas = np.array([([1.] if obs[2] < 0 else [0.]) for obs in observations]) # if angle<0 we want proba(left)=1., or else proba(left)=0.
action_val, _ = sess.run([action, training_op], feed_dict={X: np.array(observations), y: target_probas})
for env_index, env in enumerate(envs):
obs, reward, done, info = env.step(action_val[env_index][0])
observations[env_index] = obs if not done else env.reset()
saver.save(sess, "./my_policy_net_basic.ckpt")
for env in envs:
env.close()
def render_policy_net(model_path, action, X, n_max_steps = 1000):
frames = []
env = gym.make("CartPole-v0")
obs = env.reset()
with tf.Session() as sess:
saver.restore(sess, model_path)
for step in range(n_max_steps):
img = render_cart_pole(env, obs)
frames.append(img)
action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)})
obs, reward, done, info = env.step(action_val[0][0])
if done:
break
env.close()
return frames
frames = render_policy_net("./my_policy_net_basic.ckpt", action, X)
video = plot_animation(frames)
plt.show()
Explanation: We can make the same net play in 10 different environments in parallel, and train for 1000 iterations. We also reset environments when they are done.
End of explanation
import tensorflow as tf
reset_graph()
n_inputs = 4
n_hidden = 4
n_outputs = 1
learning_rate = 0.01
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs)
outputs = tf.nn.sigmoid(logits) # probability of action 0 (left)
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)
y = 1. - tf.to_float(action)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cross_entropy)
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape())
gradient_placeholders.append(gradient_placeholder)
grads_and_vars_feed.append((gradient_placeholder, variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
def discount_rewards(rewards, discount_rate):
discounted_rewards = np.zeros(len(rewards))
cumulative_rewards = 0
for step in reversed(range(len(rewards))):
cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate
discounted_rewards[step] = cumulative_rewards
return discounted_rewards
def discount_and_normalize_rewards(all_rewards, discount_rate):
all_discounted_rewards = [discount_rewards(rewards, discount_rate) for rewards in all_rewards]
flat_rewards = np.concatenate(all_discounted_rewards)
reward_mean = flat_rewards.mean()
reward_std = flat_rewards.std()
return [(discounted_rewards - reward_mean)/reward_std for discounted_rewards in all_discounted_rewards]
discount_rewards([10, 0, -50], discount_rate=0.8)
discount_and_normalize_rewards([[10, 0, -50], [10, 20]], discount_rate=0.8)
env = gym.make("CartPole-v0")
n_games_per_update = 10
n_max_steps = 1000
n_iterations = 250
save_iterations = 10
discount_rate = 0.95
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
print("\rIteration: {}".format(iteration), end="")
all_rewards = []
all_gradients = []
for game in range(n_games_per_update):
current_rewards = []
current_gradients = []
obs = env.reset()
for step in range(n_max_steps):
action_val, gradients_val = sess.run([action, gradients], feed_dict={X: obs.reshape(1, n_inputs)})
obs, reward, done, info = env.step(action_val[0][0])
current_rewards.append(reward)
current_gradients.append(gradients_val)
if done:
break
all_rewards.append(current_rewards)
all_gradients.append(current_gradients)
all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate=discount_rate)
feed_dict = {}
for var_index, gradient_placeholder in enumerate(gradient_placeholders):
mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index]
for game_index, rewards in enumerate(all_rewards)
for step, reward in enumerate(rewards)], axis=0)
feed_dict[gradient_placeholder] = mean_gradients
sess.run(training_op, feed_dict=feed_dict)
if iteration % save_iterations == 0:
saver.save(sess, "./my_policy_net_pg.ckpt")
env.close()
frames = render_policy_net("./my_policy_net_pg.ckpt", action, X, n_max_steps=1000)
video = plot_animation(frames)
plt.show()
Explanation: Looks like it learned the policy correctly. Now let's see if it can learn a better policy on its own.
Policy Gradients
To train this neural network we will need to define the target probabilities y. If an action is good we should increase its probability, and conversely if it is bad we should reduce it. But how do we know whether an action is good or bad? The problem is that most actions have delayed effects, so when you win or lose points in a game, it is not clear which actions contributed to this result: was it just the last action? Or the last 10? Or just one action 50 steps earlier? This is called the credit assignment problem.
The Policy Gradients algorithm tackles this problem by first playing multiple games, then making the actions in good games slightly more likely, while actions in bad games are made slightly less likely. First we play, then we go back and think about what we did.
End of explanation
transition_probabilities = [
[0.7, 0.2, 0.0, 0.1], # from s0 to s0, s1, s2, s3
[0.0, 0.0, 0.9, 0.1], # from s1 to ...
[0.0, 1.0, 0.0, 0.0], # from s2 to ...
[0.0, 0.0, 0.0, 1.0], # from s3 to ...
]
n_max_steps = 50
def print_sequence(start_state=0):
current_state = start_state
print("States:", end=" ")
for step in range(n_max_steps):
print(current_state, end=" ")
if current_state == 3:
break
current_state = np.random.choice(range(4), p=transition_probabilities[current_state])
else:
print("...", end="")
print()
for _ in range(10):
print_sequence()
Explanation: Markov Chains
End of explanation
transition_probabilities = [
[[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]], # in s0, if action a0 then proba 0.7 to state s0 and 0.3 to state s1, etc.
[[0.0, 1.0, 0.0], None, [0.0, 0.0, 1.0]],
[None, [0.8, 0.1, 0.1], None],
]
rewards = [
[[+10, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, -50]],
[[0, 0, 0], [+40, 0, 0], [0, 0, 0]],
]
possible_actions = [[0, 1, 2], [0, 2], [1]]
def policy_fire(state):
return [0, 2, 1][state]
def policy_random(state):
return np.random.choice(possible_actions[state])
def policy_safe(state):
return [0, 0, 1][state]
class MDPEnvironment(object):
def __init__(self, start_state=0):
self.start_state=start_state
self.reset()
def reset(self):
self.total_rewards = 0
self.state = self.start_state
def step(self, action):
next_state = np.random.choice(range(3), p=transition_probabilities[self.state][action])
reward = rewards[self.state][action][next_state]
self.state = next_state
self.total_rewards += reward
return self.state, reward
def run_episode(policy, n_steps, start_state=0, display=True):
env = MDPEnvironment()
if display:
print("States (+rewards):", end=" ")
for step in range(n_steps):
if display:
if step == 10:
print("...", end=" ")
elif step < 10:
print(env.state, end=" ")
action = policy(env.state)
state, reward = env.step(action)
if display and step < 10:
if reward:
print("({})".format(reward), end=" ")
if display:
print("Total rewards =", env.total_rewards)
return env.total_rewards
for policy in (policy_fire, policy_random, policy_safe):
all_totals = []
print(policy.__name__)
for episode in range(1000):
all_totals.append(run_episode(policy, n_steps=100, display=(episode<5)))
print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals)))
print()
Explanation: Markov Decision Process
End of explanation
n_states = 3
n_actions = 3
n_steps = 20000
alpha = 0.01
gamma = 0.99
exploration_policy = policy_random
q_values = np.full((n_states, n_actions), -np.inf)
for state, actions in enumerate(possible_actions):
q_values[state][actions]=0
env = MDPEnvironment()
for step in range(n_steps):
action = exploration_policy(env.state)
state = env.state
next_state, reward = env.step(action)
next_value = np.max(q_values[next_state]) # greedy policy
q_values[state, action] = (1-alpha)*q_values[state, action] + alpha*(reward + gamma * next_value)
def optimal_policy(state):
return np.argmax(q_values[state])
q_values
all_totals = []
for episode in range(1000):
all_totals.append(run_episode(optimal_policy, n_steps=100, display=(episode<5)))
print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals)))
print()
Explanation: Q-Learning
Q-Learning works by watching an agent play (e.g., randomly) and gradually improving its estimates of the Q-Values. Once it has accurate Q-Value estimates (or close enough), then the optimal policy consists in choosing the action that has the highest Q-Value (i.e., the greedy policy).
End of explanation
env = gym.make("MsPacman-v0")
obs = env.reset()
obs.shape
env.action_space
Explanation: Learning to Play MsPacman Using the DQN Algorithm
Warning: Unfortunately, the first version of the book contained two important errors in this section.
The actor DQN and critic DQN should have been named online DQN and target DQN respectively. Actor-critic algorithms are a distinct class of algorithms.
The online DQN is the one that learns and is copied to the target DQN at regular intervals. The target DQN's only role is to estimate the next state's Q-Values for each possible action. This is needed to compute the target Q-Values for training the online DQN, as shown in this equation:
$y(s,a) = r + \gamma \cdot \underset{a'}{\max} \, Q_\text{target}(s', a')$
$y(s,a)$ is the target Q-Value to train the online DQN for the state-action pair $(s, a)$.
$r$ is the reward actually collected after playing action $a$ in state $s$.
$\gamma$ is the discount rate.
$s'$ is the state actually reached after played action $a$ in state $s$.
$a'$ is one of the possible actions in state $s'$.
$Q_\text{target}(s', a')$ is the target DQN's estimate of the Q-Value of playing action $a'$ while in state $s'$.
I hope these errors did not affect you, and if they did, I sincerely apologize.
Creating the MsPacman environment
End of explanation
mspacman_color = np.array([210, 164, 74]).mean()
def preprocess_observation(obs):
img = obs[1:176:2, ::2] # crop and downsize
img = img.mean(axis=2) # to greyscale
img[img==mspacman_color] = 0 # Improve contrast
img = (img - 128) / 128 - 1 # normalize from -1. to 1.
return img.reshape(88, 80, 1)
img = preprocess_observation(obs)
plt.figure(figsize=(11, 7))
plt.subplot(121)
plt.title("Original observation (160×210 RGB)")
plt.imshow(obs)
plt.axis("off")
plt.subplot(122)
plt.title("Preprocessed observation (88×80 greyscale)")
plt.imshow(img.reshape(88, 80), interpolation="nearest", cmap="gray")
plt.axis("off")
save_fig("preprocessing_plot")
plt.show()
Explanation: Preprocessing
Preprocessing the images is optional but greatly speeds up training.
End of explanation
reset_graph()
input_height = 88
input_width = 80
input_channels = 1
conv_n_maps = [32, 64, 64]
conv_kernel_sizes = [(8,8), (4,4), (3,3)]
conv_strides = [4, 2, 1]
conv_paddings = ["SAME"] * 3
conv_activation = [tf.nn.relu] * 3
n_hidden_in = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each
n_hidden = 512
hidden_activation = tf.nn.relu
n_outputs = env.action_space.n # 9 discrete actions are available
initializer = tf.contrib.layers.variance_scaling_initializer()
def q_network(X_state, name):
prev_layer = X_state
with tf.variable_scope(name) as scope:
for n_maps, kernel_size, strides, padding, activation in zip(
conv_n_maps, conv_kernel_sizes, conv_strides,
conv_paddings, conv_activation):
prev_layer = tf.layers.conv2d(
prev_layer, filters=n_maps, kernel_size=kernel_size,
strides=strides, padding=padding, activation=activation,
kernel_initializer=initializer)
last_conv_layer_flat = tf.reshape(prev_layer, shape=[-1, n_hidden_in])
hidden = tf.layers.dense(last_conv_layer_flat, n_hidden,
activation=hidden_activation,
kernel_initializer=initializer)
outputs = tf.layers.dense(hidden, n_outputs,
kernel_initializer=initializer)
trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope=scope.name)
trainable_vars_by_name = {var.name[len(scope.name):]: var
for var in trainable_vars}
return outputs, trainable_vars_by_name
X_state = tf.placeholder(tf.float32, shape=[None, input_height, input_width,
input_channels])
online_q_values, online_vars = q_network(X_state, name="q_networks/online")
target_q_values, target_vars = q_network(X_state, name="q_networks/target")
copy_ops = [target_var.assign(online_vars[var_name])
for var_name, target_var in target_vars.items()]
copy_online_to_target = tf.group(*copy_ops)
online_vars
learning_rate = 0.001
momentum = 0.95
with tf.variable_scope("train"):
X_action = tf.placeholder(tf.int32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None, 1])
q_value = tf.reduce_sum(online_q_values * tf.one_hot(X_action, n_outputs),
axis=1, keep_dims=True)
error = tf.abs(y - q_value)
clipped_error = tf.clip_by_value(error, 0.0, 1.0)
linear_error = 2 * (error - clipped_error)
loss = tf.reduce_mean(tf.square(clipped_error) + linear_error)
global_step = tf.Variable(0, trainable=False, name='global_step')
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum, use_nesterov=True)
training_op = optimizer.minimize(loss, global_step=global_step)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: Build DQN
Note: instead of using tf.contrib.layers.convolution2d() or tf.contrib.layers.conv2d() (as in the first version of the book), we now use the tf.layers.conv2d(), which did not exist when this chapter was written. This is preferable because anything in contrib may change or be deleted without notice, while tf.layers is part of the official API. As you will see, the code is mostly the same, except that the parameter names have changed slightly:
* the num_outputs parameter was renamed to filters,
* the stride parameter was renamed to strides,
* the _fn suffix was removed from parameter names that had it (e.g., activation_fn was renamed to activation),
* the weights_initializer parameter was renamed to kernel_initializer,
* the weights variable was renamed to "kernel" (instead of "weights"), and the biases variable was renamed from "biases" to "bias",
* and the default activation is now None instead of tf.nn.relu.
End of explanation
from collections import deque
replay_memory_size = 500000
replay_memory = deque([], maxlen=replay_memory_size)
def sample_memories(batch_size):
indices = np.random.permutation(len(replay_memory))[:batch_size]
cols = [[], [], [], [], []] # state, action, reward, next_state, continue
for idx in indices:
memory = replay_memory[idx]
for col, value in zip(cols, memory):
col.append(value)
cols = [np.array(col) for col in cols]
return cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], cols[4].reshape(-1, 1)
eps_min = 0.1
eps_max = 1.0
eps_decay_steps = 2000000
def epsilon_greedy(q_values, step):
epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps)
if np.random.rand() < epsilon:
return np.random.randint(n_outputs) # random action
else:
return np.argmax(q_values) # optimal action
n_steps = 4000000 # total number of training steps
training_start = 10000 # start training after 10,000 game iterations
training_interval = 4 # run a training step every 4 game iterations
save_steps = 1000 # save the model every 1,000 training steps
copy_steps = 10000 # copy online DQN to target DQN every 10,000 training steps
discount_rate = 0.99
skip_start = 90 # Skip the start of every game (it's just waiting time).
batch_size = 50
iteration = 0 # game iterations
checkpoint_path = "./my_dqn.ckpt"
done = True # env needs to be reset
Explanation: Note: in the first version of the book, the loss function was simply the squared error between the target Q-Values (y) and the estimated Q-Values (q_value). However, because the experiences are very noisy, it is better to use a quadratic loss only for small errors (below 1.0) and a linear loss (twice the absolute error) for larger errors, which is what the code above computes. This way large errors don't push the model parameters around as much. Note that we also tweaked some hyperparameters (using a smaller learning rate, and using Nesterov Accelerated Gradients rather than Adam optimization, since adaptive gradient algorithms may sometimes be bad, according to this paper). We also tweaked a few other hyperparameters below (a larger replay memory, longer decay for the $\epsilon$-greedy policy, larger discount rate, less frequent copies of the online DQN to the target DQN, etc.).
End of explanation
loss_val = np.infty
game_length = 0
total_max_q = 0
mean_max_q = 0.0
Explanation: A few variables for tracking progress:
End of explanation
with tf.Session() as sess:
if os.path.isfile(checkpoint_path + ".index"):
saver.restore(sess, checkpoint_path)
else:
init.run()
copy_online_to_target.run()
while True:
step = global_step.eval()
if step >= n_steps:
break
iteration += 1
print("\rIteration {}\tTraining step {}/{} ({:.1f})%\tLoss {:5f}\tMean Max-Q {:5f} ".format(
iteration, step, n_steps, step * 100 / n_steps, loss_val, mean_max_q), end="")
if done: # game over, start again
obs = env.reset()
for skip in range(skip_start): # skip the start of each game
obs, reward, done, info = env.step(0)
state = preprocess_observation(obs)
# Online DQN evaluates what to do
q_values = online_q_values.eval(feed_dict={X_state: [state]})
action = epsilon_greedy(q_values, step)
# Online DQN plays
obs, reward, done, info = env.step(action)
next_state = preprocess_observation(obs)
# Let's memorize what happened
replay_memory.append((state, action, reward, next_state, 1.0 - done))
state = next_state
# Compute statistics for tracking progress (not shown in the book)
total_max_q += q_values.max()
game_length += 1
if done:
mean_max_q = total_max_q / game_length
total_max_q = 0.0
game_length = 0
if iteration < training_start or iteration % training_interval != 0:
continue # only train after warmup period and at regular intervals
# Sample memories and use the target DQN to produce the target Q-Value
X_state_val, X_action_val, rewards, X_next_state_val, continues = (
sample_memories(batch_size))
next_q_values = target_q_values.eval(
feed_dict={X_state: X_next_state_val})
max_next_q_values = np.max(next_q_values, axis=1, keepdims=True)
y_val = rewards + continues * discount_rate * max_next_q_values
# Train the online DQN
_, loss_val = sess.run([training_op, loss], feed_dict={
X_state: X_state_val, X_action: X_action_val, y: y_val})
# Regularly copy the online DQN to the target DQN
if step % copy_steps == 0:
copy_online_to_target.run()
# And save regularly
if step % save_steps == 0:
saver.save(sess, checkpoint_path)
Explanation: And now the main training loop!
End of explanation
frames = []
n_max_steps = 10000
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
obs = env.reset()
for step in range(n_max_steps):
state = preprocess_observation(obs)
# Online DQN evaluates what to do
q_values = online_q_values.eval(feed_dict={X_state: [state]})
action = np.argmax(q_values)
# Online DQN plays
obs, reward, done, info = env.step(action)
img = env.render(mode="rgb_array")
frames.append(img)
if done:
break
plot_animation(frames)
Explanation: You can interrupt the cell above at any time to test your agent using the cell below. You can then run the cell above once again, it will load the last parameters saved and resume training.
End of explanation
def preprocess_observation(obs):
img = obs[34:194:2, ::2] # crop and downsize
return np.mean(img, axis=2).reshape(80, 80) / 255.0
env = gym.make("Breakout-v0")
obs = env.reset()
for step in range(10):
obs, _, _, _ = env.step(1)
img = preprocess_observation(obs)
plt.figure(figsize=(11, 7))
plt.subplot(121)
plt.title("Original observation (160×210 RGB)")
plt.imshow(obs)
plt.axis("off")
plt.subplot(122)
plt.title("Preprocessed observation (80×80 grayscale)")
plt.imshow(img, interpolation="nearest", cmap="gray")
plt.axis("off")
plt.show()
Explanation: Extra material
Preprocessing for Breakout
Here is a preprocessing function you can use to train a DQN for the Breakout-v0 Atari game:
End of explanation
def combine_observations_multichannel(preprocessed_observations):
return np.array(preprocessed_observations).transpose([1, 2, 0])
def combine_observations_singlechannel(preprocessed_observations, dim_factor=0.5):
dimmed_observations = [obs * dim_factor**index
for index, obs in enumerate(reversed(preprocessed_observations))]
return np.max(np.array(dimmed_observations), axis=0)
n_observations_per_state = 3
preprocessed_observations = deque([], maxlen=n_observations_per_state)
obs = env.reset()
for step in range(10):
obs, _, _, _ = env.step(1)
preprocessed_observations.append(preprocess_observation(obs))
img1 = combine_observations_multichannel(preprocessed_observations)
img2 = combine_observations_singlechannel(preprocessed_observations)
plt.figure(figsize=(11, 7))
plt.subplot(121)
plt.title("Multichannel state")
plt.imshow(img1, interpolation="nearest")
plt.axis("off")
plt.subplot(122)
plt.title("Singlechannel state")
plt.imshow(img2, interpolation="nearest", cmap="gray")
plt.axis("off")
plt.show()
Explanation: As you can see, a single image does not give you the direction and speed of the ball, which are crucial informations for playing this game. For this reason, it is best to actually combine several consecutive observations to create the environment's state representation. One way to do that is to create a multi-channel image, with one channel per recent observation. Another is to merge all recent observations into a single-channel image, using np.max(). In this case, we need to dim the older images so that the DQN can distinguish the past from the present.
End of explanation |
9,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualisation using
Table of Contents
Notebook Setup
Simple Line Plots
Using different styles for plots
Setting x and y limits
Labeling plots
Label formatting
LaTeX labels
Legends
Grids
Axis scales
Ticks
Multiple lines in the same plot
Multiple plots in the same figure
Shared axes
Tight layout
Inset plots
Error bars
Asymmetrical errors
Upper and lower limits
Polar plots
Histograms
1D Histograms
2D Histograms
Scatter Plots
Object-Oriented Syntax
MJD Date axis
Plots for Publication
Plot styles
Saving figures
<a id=setup></a>
Notebook Setup (run me first!)
First, we apply a "magic command" to make plots appear "inline" inside the notebook. Alternatively, we could allow plots to appear in a separate window.
Step1: In order to work with Matplotlib, the library must be imported first. So we do not have to type so much, we give it a shorter name
Step2: Matplotlib works best with numpy arrays, so we import numpy as well
Step3: <a id=line_plots></a>
Simple Line Plots
Step4: <a id=different_styles></a>
Using different styles for plots
Step5: All styles and colors
Step6: <a id=plot_labels></a>
We are still lacking something important
Step7: A side note on units
Contrary to some customs in HEP or Astronomy, the BiPM has strict rules on how to typeset quantities, numbers and units in table headers and axis labels
Step8: <a id=latex_labels></a>
Matplotlib can handle a rather complete subset of LaTeX in any text
Step9: <a id=legends></a>
Legends
Matplotlib can create legends automatically for plot objects that have a label.
Step10: <a id=grids></a>
Grids
Step11: <a id=axis_scales></a>
Axis-Scales
Step12: <a id=ticks></a>
Ticks
Step13: <a id=multiple_lines></a>
Multiple lines in the same plot
Step14: Remember
Step15: <a id=shared_axes></a>
Shared Axes
Step16: <a id=tight_layout></a>
You should almost always call plt.tight_layout()
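A minimal sketch of where it helps (made-up data, names chosen for illustration):
x = np.linspace(0, 2 * np.pi)
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(x, np.sin(x))
ax1.set_ylabel('sin(x)')
ax2.plot(x, np.cos(x))
ax2.set_ylabel('cos(x)')
plt.tight_layout()  # keeps the tick and axis labels of the two subplots from overlapping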
Step17: <a id=inset_plots></a>
Inset Plots (plot inside a plot)
Step18: <a id=error_bars></a>
Error bars
Step19: <a id=asym_errors></a>
Asymmetrical errors
Give 2 arrays to the xerr or yerr kwargs
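For instance (a minimal sketch with made-up numbers):
x = np.arange(5)
y = x**2
yerr = [0.5 * np.ones(5), 2.0 * np.ones(5)]  # first row: errors downwards, second row: errors upwards
plt.errorbar(x, y, yerr=yerr, fmt='o')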
Step20: <a id=upper_limits></a>
Upper and lower limits
Often, we want to give uncertainties for some values, but upper or lower limits for others.
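A minimal sketch (made-up values; the uplims/lolims keywords of plt.errorbar mark which points are limits rather than measurements):
x = np.arange(6)
y = np.array([1.0, 2.1, 2.9, 4.2, 5.1, 6.3])
yerr = 0.3 * np.ones_like(y)
uplims = np.array([False, False, False, False, True, True])  # last two points are upper limits
plt.errorbar(x, y, yerr=yerr, uplims=uplims, fmt='o')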
Step21: <a id=polar_plots></a>
Polar Plots
Step22: <a id=histograms></a>
Histograms
<a id=hist1d></a>
1D Histograms
Step23: <a id=hist2d></a>
2D Histograms
Step24: Colormaps
Can influence perception greatly
Physicists' most-loved colormaps (rainbow, jet) are objectively bad
Do not work when printed black/white
Not colorblind friendly
Not perceptually uniform
Use the modern colormaps in matplotlib (available since 1.5)
viridis (default in 2.0)
inferno
magma
plasma
Use fitting colormaps
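A minimal sketch using one of the modern, perceptually uniform colormaps (made-up data):
x, y = np.random.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], 10000).T
plt.hist2d(x, y, bins=50, cmap='viridis')
plt.colorbar(label='counts')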
Step25: <a id=oo_syntax></a>
Using the object-oriented syntax
Matplotlib has two APIs (yes, it's strange).
The matlab-like syntax we used until now
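The other is the object-oriented interface, where you call methods on explicit Figure and Axes objects. A minimal sketch (equivalent to the pyplot calls used so far):
t = np.linspace(0, 2 * np.pi)
fig, ax = plt.subplots()
ax.plot(t, np.sin(t))
ax.set_xlabel('t / s')
ax.set_ylabel('U / V')
ax.set_xlim(0, 2 * np.pi)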
Step26: <a id="mjd"></a>
Providing both MJD and human readable date axis
I am not able to convert MJD to normal date in my head
Your audience probably cannot do it either
Solution
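A hedged sketch of one way to do it with a second x-axis (the data and the MJD range are made up; MJD 0 corresponds to 1858-11-17):
from datetime import datetime, timedelta
mjd_to_date = lambda mjd: datetime(1858, 11, 17) + timedelta(days=float(mjd))
mjd = np.linspace(58000, 58300, 100)   # made-up observation times
flux = np.random.normal(1, 0.1, 100)   # made-up measurements
fig, ax = plt.subplots()
ax.plot(mjd, flux, '.')
ax.set_xlabel('MJD')
ax2 = ax.twiny()                       # second x-axis on top, sharing the y-axis
ticks = ax.get_xticks()
ax2.set_xticks(ticks)
ax2.set_xticklabels([mjd_to_date(t).strftime('%Y-%m-%d') for t in ticks], rotation=30, ha='left')
ax2.set_xlim(ax.get_xlim())            # keep both scales aligned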
Step27: <a id=publication_plots></a>
Plots for Publication
Use a full-blown LaTeX installation via the pgf backend
Same font and font sizes as in your publication
Really high quality, publication ready plots
<a id=plot_styles></a>
Plot styles
List available styles
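For example (the exact list depends on your matplotlib version):
print(plt.style.available)   # e.g. ['bmh', 'ggplot', 'grayscale', ...]
plt.style.use('ggplot')      # applies to all figures created afterwards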
Step28: <a id=save_figures></a>
Saving figures
Use plt.savefig to save your figure.
You can either give a path relative to your working directory or an absolute path.
Not sure what the current working directory is? | Python Code:
# only for the notebook
%matplotlib inline
# only in the ipython shell
# %matplotlib
Explanation: Visualisation using
Table of Contents
Notebook Setup
Simple Line Plots
Using different styles for plots
Setting x and y limits
Labeling plots
Label formatting
LaTeX labels
Legends
Grids
Axis scales
Ticks
Multiple lines in the same plot
Multiple plots in the same figure
Shared axes
Tight layout
Inset plots
Error bars
Asymmetrical errors
Upper and lower limits
Polar plots
Histograms
1D Histograms
2D Histograms
Scatter Plots
Object-Oriented Syntax
MJD Date axis
Plots for Publication
Plot styles
Saving figures
<a id=setup></a>
Notebook Setup (run me first!)
First, we apply a "magic command" to make plots appear "inline" inside the notebook. Alternatively, we could allow plots to appear in a separate window.
End of explanation
import matplotlib.pyplot as plt
# Make the size and fonts larger for this presentation
plt.rcParams['figure.figsize'] = (10, 8)
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2
Explanation: In order to work with Matplotlib, the library must be imported first. So we do not have to type so much, we give it a shorter name:
End of explanation
import numpy as np
Explanation: Matplotlib works best with numpy arrays, so we import numpy as well
End of explanation
x = np.linspace(0, 1, 100) # 100 numbers from 0 to 1
plt.plot(x, x**2)
# If not interactive, e.g. in a script:
# plt.show()
Explanation: <a id=line_plots></a>
Simple Line Plots
End of explanation
t = np.linspace(0, 2 * np.pi) # 50 points between 0 and 2π
plt.plot(t, np.sin(t));
plt.plot(t, np.sin(t), '--');
plt.plot(t, np.sin(t), 'go')
# plt.plot(t, np.sin(t), color='green', marker='o', linestyle=''); # same thing!
# new in matplotlib 2.0, all colors of the color rotation available as C<N>
x = np.linspace(0, 1, 100)
for n in range(9):
plt.plot(x**(n + 1), color='C{}'.format(n))
Explanation: <a id=different_styles></a>
Using different styles for plots
End of explanation
plt.plot(t, np.sin(t))
plt.xlim(0, 2*np.pi)
plt.ylim(-1.2, 1.2);
Explanation: All styles and colors: matplotlib.axes.Axes.plot
<a id=setting_limits></a>
Setting x and y limits
End of explanation
with plt.xkcd():
plt.title('Axes with labels')
plt.plot(t, np.sin(t))
plt.xlabel('t / s')
plt.ylabel('U / V')
plt.ylim(-1.1, 1.1)
plt.xlim(0, 2*np.pi)
Explanation: <a id=plot_labels></a>
We are still lacking something important
End of explanation
plt.plot(t, np.sin(t))
title_font = {'fontsize': 24, 'fontweight': 'bold', 'family': 'serif'}
axes_font = {'fontsize': 18, 'fontstyle': 'italic'}
plt.xlabel('t / s', axes_font)
plt.ylabel('U / V', axes_font)
plt.title('Always label your plots!', title_font);
Explanation: A side note on units
Contrary to some customs in HEP or Astronomy, the BiPM has strict rules on how to typeset quantities, numbers and units in table headers and axis labels:
Symbols for units are treated as mathematical entities. In expressing the value of a
quantity as the product of a numerical value and a unit, both the numerical value and
the unit may be treated by the ordinary rules of algebra.
It is often convenient to
write the quotient of a quantity and a unit in this way for the heading of a column in a
table, so that the entries in the table are all simply numbers. [...]
The axes of a graph may also be labelled in this way, so that the tick marks are
labelled only with numbers
Bureau International des Poids et Mesures, The International System of Units, Chapter 5, Section 3
A physical quantity is always a product of a number and a unit, so what is shown is the quantity divided by the unit.
Especially square brackets have a totally different meaning and are highly problematic when it comes to mathematical operations on quantities like $\log(E / 1\,\mathrm{GeV})$
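As a small illustration of labels following this convention (the quantities used here are only placeholders, not taken from this notebook):
# quantity divided by its unit, so the tick labels are pure numbers
plt.xlabel(r'$E \,/\, \mathrm{GeV}$')
# for logarithms, divide by an explicit reference value first
plt.ylabel(r'$\log_{10}(E \,/\, 1\,\mathrm{GeV})$')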
<a id=label_formatting></a>
Label formatting
These options can be set globally in a matplotlibrc file,
see https://matplotlib.org/users/customizing.html
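For instance, the three rcParams set at the top of this notebook could equally live in such a matplotlibrc file as plain key : value lines (a sketch of the file syntax, reusing the values from above):
figure.figsize : 10, 8
font.size : 16
lines.linewidth : 2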
End of explanation
plt.plot(t, np.sin(t))
plt.xlabel(r'$t / \mathrm{s}$') # leading r means "raw", so that '\' is handled correctly
plt.ylabel(r"$\int_0^t \cos(t') \, \mathrm{d}t'$");
Explanation: <a id=latex_labels></a>
Matplotlib can handle a rather complete subset of LaTeX in any text
End of explanation
plt.plot(t, np.sin(t), label=r'$\sin(t)$')
plt.plot(t, np.cos(t), label=r'$\cos(t)$')
plt.legend()
#plt.legend(loc='upper center')
None # only to avoid cluttering the notebook
Explanation: <a id=legends></a>
Legends
Matplotlib can create legends automatically for plot objects that have a label.
End of explanation
plt.plot(t, np.sin(t))
plt.grid()
Explanation: <a id=grids></a>
Grids
End of explanation
x = np.linspace(0, 10)
# x = np.logspace(-1, 2, 100)
plt.plot(x, np.exp(-x))
plt.yscale('log')
# plt.xscale('log')
Explanation: <a id=axis_scales></a>
Axis-Scales
End of explanation
x = np.linspace(0, 2*np.pi)
plt.plot(x, np.sin(x))
plt.xlim(0, 2*np.pi)
# First argument: position, second argument: labels
plt.xticks(
np.arange(0, 2*np.pi + 0.1, np.pi/2),
[r"$0$", r"$\frac{1}{4}\tau$", r"$\frac{1}{2}\tau$", r"$\frac{3}{4}\tau$", r"$\tau$"]
)
plt.title(r"$\tau$ FTW!") # https://tauday.com/tau-manifesto
None
months = ['January',
'February',
'March',
'April',
'May',
'June',
'July',
'August',
'September',
'October',
'November',
'December']
plt.bar(np.arange(12), np.random.rand(12))
plt.xticks(
np.arange(12),
months,
rotation=45,
rotation_mode='anchor',
horizontalalignment='right', # or ha
verticalalignment='top', # or va
);
Explanation: <a id=ticks></a>
Ticks
End of explanation
x = np.linspace(0, 1)
plt.plot(x, x**2, label=r'$x^2$')
plt.plot(x, x**4)
plt.plot(x, x**6, 'o', label=r'$x^6$')
plt.legend(loc='best');
Explanation: <a id=multiple_lines></a>
Multiple lines in the same plot
End of explanation
x = np.linspace(0, 2*np.pi)
# subplot arguments: # of rows, # of columns, plot index (row * (#cols) + col)
plt.subplot(2, 1, 1)
plt.plot(x, x**2)
plt.xlim(0, 2*np.pi)
plt.subplot(2, 1, 2)
plt.plot(x, np.sin(x))
plt.xlim(0, 2*np.pi);
Explanation: Remember: Legend entries are only generated for plot objects that have a label (note x⁴ is missing)!
<a id=multiple_plots></a>
Multiple plots in the same figure
End of explanation
def poisson(x, k):
return np.exp(-x)*x**k / np.math.factorial(k)
x = np.linspace(0, 12, 40)
y = poisson(x, 2)
y_noise = y + np.random.normal(0, 0.01, len(y))
z = np.linspace(0, 12, 100)
gridspec = {'height_ratios': [2, 1]}
fig, (ax1, ax2) = plt.subplots(2, sharex=True, gridspec_kw=gridspec)
ax1.plot(x, y_noise, 'ko')
ax1.plot(z, poisson(z, 2))
ax1.set_ylim(-0.05, 0.30)
ax1.set_ylabel('Flux')
ax1.set_yticks(ax1.get_yticks()[1:]) # remove bottom y-tick
ax2.plot(x, y_noise - y, 'ko')
ax2.axhline(y=0, color='black', linestyle='--', linewidth=1)
ax2.set_xlabel('Energy')
ax2.set_ylim(-0.03, 0.04)
ax2.set_ylabel('Residuals')
ax2.set_yticks(ax2.get_yticks()[:-2]) # remove top y-tick
fig.subplots_adjust(hspace=0)
fig.suptitle('\nFake Spectrum', fontweight='bold');
Explanation: <a id=shared_axes></a>
Shared Axes
End of explanation
x = np.linspace(0, 2*np.pi)
plt.subplot(2, 1, 1)
plt.plot(x, x**2)
plt.xlim(0, 2*np.pi)
plt.title(r"$f(x)=x^2$")
plt.subplot(2, 1, 2)
plt.plot(x, np.sin(x))
plt.xlim(0, 2*np.pi)
plt.title(r"$f(x)=\sin(x)$")
plt.tight_layout() # try commenting this line out!
Explanation: <a id=tight_layout></a>
You should almost always call plt.tight_layout()
End of explanation
plt.plot(x, x**2)
plt.title("Outer Plot")
# axes coordinates: (0,0) is lower left, (1,1) upper right
plt.axes([0.2, 0.45, 0.3, 0.3])
plt.plot(x, x**3)
plt.title("Inner Plot");
Explanation: <a id=inset_plots></a>
Inset Plots (plot inside a plot)
End of explanation
x = np.linspace(0, 2*np.pi, 10)
errX = np.random.normal(0, 0.4, 10)
errY = np.random.normal(0, 0.4, 10)
plt.errorbar(x + errX, x + errY, xerr=0.4, yerr=errY, fmt='o');
Explanation: <a id=error_bars></a>
Error bars
End of explanation
x = np.linspace(0, 1, 10)
plt.errorbar(
x,
np.sin(2 * np.pi * x),
yerr=[np.full_like(x, 0.5), np.full_like(x, 0.1)],
linestyle='',
marker='o',
)
Explanation: <a id=asym_errors></a>
Asymmetrical errors
Give 2 arrays to the xerr or yerr kwargs:
End of explanation
bins = np.logspace(2, 4, 15)
x = (bins[:-1] + bins[1:]) / 2
y = x**(-2.7)
yerr = y * 0.3
y += np.random.normal(0, yerr)
# mask for which points are upper limits
uplims = np.full_like(x, False, dtype=bool)
# last points are only upper limits
y[-3:] += 3 * y[-3:]
yerr[-3:] = 0.3 * y[-3:] # yerr determines length of limit arrow
uplims[-3:] = True
plt.errorbar(
x,
y,
xerr=np.diff(bins/2),
yerr=yerr,
uplims=uplims,
ls='none',
)
plt.xlabel(r'$E \ / \ \mathrm{GeV}$')
plt.ylabel(r'$Flux \ / \ \mathrm{GeV}^{-1} \mathrm{s}^{-1} \mathrm{m}^{-2} \mathrm{sr}^{-1}$')
plt.xscale('log')
plt.yscale('log')
Explanation: <a id=upper_limits></a>
Upper and lower limits
Often, we want to give uncertainties for some values, but upper or lower limits for others.
End of explanation
r = np.linspace(0, 10, 50)
# r = np.linspace(0, 10, 1000)
theta = 2*np.pi*r
plt.polar(theta, r);
Explanation: <a id=polar_plots></a>
Polar Plots
End of explanation
# Generate random data:
x = np.random.normal(0, 1, 1000)
plt.hist(x, bins=25);
x1 = np.random.normal(-1, 1, 1000)
x2 = np.random.normal(1, 1, 1000)
bin_edges = np.linspace(-6, 6, 51) # 50 bins between -6 and 6
plt.hist(x1, bins=bin_edges, histtype='step', label='x1')
plt.hist(x2, bins=bin_edges, histtype='step', label='x2')
plt.legend();
Explanation: <a id=histograms></a>
Histograms
<a id=hist1d></a>
1D Histograms
End of explanation
mean = [2, 1]
cov = [[9, 2],
[2, 4]]
x, y = np.random.multivariate_normal(mean, cov, size=10000).T
plt.hist2d(x, y)
# plt.hist2d(x, y, bins=50)
# plt.hist2d(x, y, bins=[25, 50], range=[[-10, 14], [-5, 7]])
plt.colorbar(label='Counts');
from matplotlib.colors import LogNorm
plt.hist2d(x, y, bins=50, norm=LogNorm())
plt.colorbar();
Explanation: <a id=hist2d></a>
2D Histograms
End of explanation
x1, y1 = np.random.multivariate_normal([1, 1], [[1, 0], [0, 1]], 1000).T
x2, y2 = np.random.multivariate_normal([-1, -1], [[1, 0], [0, 1]], 1000).T
plt.scatter(x1, y1)
plt.scatter(x2, y2);
x = np.append(x1, x2)
y = np.append(y1, y2)
s = np.random.uniform(5, 50, 2000)
label = np.append(np.ones_like(x1), np.zeros_like(x2))
plt.scatter(x, y, c=label, s=s);
Explanation: Colormaps
Can influence perception greatly
Physicists' most-loved colormaps (rainbow, jet) are objectively bad
Do not work when printed black/white
Not colorblind friendly
Not perceptually uniform
Use the modern colormaps in matplotlib (available since 1.5)
viridis (default in 2.0)
inferno
magma
plasma
Use fitting colormaps: sequential vs. diverging (a short example follows below)
More here:
https://www.youtube.com/watch?v=xAoljeRJ3lU&t=6s
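As a minimal sketch of picking a fitting colormap explicitly (the random data here is only an assumption for illustration):
xc = np.random.normal(0, 1, 10000)
yc = np.random.normal(0, 1, 10000)
plt.hist2d(xc, yc, bins=50, cmap='viridis')  # sequential map: good for counts / intensities
plt.colorbar(label='Counts')
plt.figure()
plt.imshow(np.random.normal(0, 1, (20, 20)), cmap='RdBu_r', vmin=-3, vmax=3)  # diverging map: data around a midpoint
plt.colorbar()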
<a id="scatter"></a>
Scatter Plots
End of explanation
import matplotlib.pyplot as plt
import numpy as np
t = np.linspace(0, 2*np.pi, 1000)
fig, (ax1, ax2) = plt.subplots(2, 1)
# note that plot is now a method of ax1, not the global plt object
ax1.plot(t, np.sin(t), 'r-')
ax1.set_title(r"$f(t)=\sin(t)$") # use object-oriented get/set syntax
ax1.set_xlabel("$t$")
ax1.set_xlim(0, 2*np.pi)
ax1.set_ylim(-1.1, 1.1)
ax2.plot(t, np.cos(t), 'b-')
ax2.set_title(r"$f(t)=\cos(t)$")
ax2.set_xlabel("$t$")
ax2.set_xlim(0, 2*np.pi)
ax2.set_ylim(-1.1, 1.1)
fig.tight_layout()
Explanation: <a id=oo_syntax></a>
Using the object orientated syntax
Matplotlib has two APIs (yes, it's strange).
The matlab-like syntax we used until now:
Easier to write
Familiar for matlab users
Frequently uses global states
Object-oriented syntax:
More powerful
More control over the plots
Preferable for library code
No (or at least very few) global states
End of explanation
from datetime import datetime, timedelta
# constants for ordinal and mjd date representation
MJD_EPOCH = datetime(1858, 11, 17)
ORDINAL_EPOCH = datetime(1, 1, 1)
def ordinal_to_mjd(ordinal):
''' Converts ordinal date (days since 0001-01-01T00:00) to MJD (days since 1858-11-17T00:00)'''
return ordinal - (MJD_EPOCH - ORDINAL_EPOCH).total_seconds() / 86400
# create some random "Crab nebula" data
n_on = np.random.poisson(60, 25)
n_off = np.random.poisson(30, 25)
n_signal = n_on - 0.2 * n_off
n_signal_err = np.sqrt(n_on + 0.2**2 * n_off)
# create some dates
dates = [datetime(2017, 1, 1) + timedelta(days=i) for i in range(25)]
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.errorbar(dates, n_signal, yerr=n_signal_err, ls='')
ax.axhline(n_signal.mean(), color='C1')
ax.set_ylim(0, 80)
ax.set_ylabel(r'Signal Rate / $\mathrm{h}^{-1}$')
fig.autofmt_xdate()
# create a second axis, using the same y-axis
ax_mjd = ax.twiny()
# set its xlims to the same values of the date axis, but convert to mjd
ax_mjd.set_xlim(*map(ordinal_to_mjd, ax.get_xlim()))
ax_mjd.set_xlabel('MJD')
fig.tight_layout()
Explanation: <a id="mjd"></a>
Providing both MJD and human readable date axis
I am not able to convert MJD to normal date in my head
Your audience probably is also not able to do it
Solution: provide both a human readable and a MJD axis
Matplotlib uses the ordinal date (days since 1.1.1) for internal datetime representation
End of explanation
print(plt.style.available)
from scipy import stats
def plot_stuff():
plt.subplot(2, 2, 1)
x = np.linspace(-1, 1, 1000)
plt.plot(x, np.sin(50*x**3)/(x))
plt.grid()
plt.subplot(2, 2, 2)
x = np.linspace(-1, 1, 10)
y = np.exp(-2.2*x) + np.random.normal(0, 0.1, 10)
yerr = np.random.normal(0, 0.2, 10)
plt.errorbar(x, y, yerr, fmt='o', capsize=3)
plt.yscale('log')
plt.subplot(2, 2, 3)
x = stats.skewnorm.rvs(10, size=1000)
plt.hist(x, bins=50)
plt.subplot(2, 2, 4)
x, y = np.mgrid[-1:1:.01, -1:1:.01]
pos = np.dstack((x, y))
z = stats.multivariate_normal([0.1, 0.3], [[0.2, 0.3], [0.1, 0.4]])
plt.contourf(x, y, z.pdf(pos))
for plot_style in ['classic', 'bmh', 'fivethirtyeight', 'ggplot', 'seaborn']:
plt.figure()
with plt.style.context(plot_style): # use context manager so that changes are temporary
plot_stuff()
plt.suptitle('Plot Style: ' + plot_style, fontweight='bold')
Explanation: <a id=publication_plots></a>
Plots for Publication
Use a fully blown LaTeX installation via the pgf backend (see the sketch below)
Same font and font sizes as in your publication
Really high quality, publication ready plots
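A minimal sketch of what such a pgf setup might look like (the texsystem and font values below are assumptions, not settings taken from this notebook):
import matplotlib
matplotlib.use('pgf')  # select the pgf backend before importing pyplot
import matplotlib.pyplot as plt
plt.rcParams.update({
    'pgf.texsystem': 'lualatex',  # or 'pdflatex' / 'xelatex', matching your document
    'font.family': 'serif',       # use the same font family as the publication
    'font.size': 11,              # match the publication's font size
    'text.usetex': True,          # let LaTeX render all text
    'pgf.rcfonts': False,         # keep the fonts configured via the LaTeX preamble
})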
<a id=plot_styles></a>
Plot styles
List available styles:
End of explanation
pwd()
x = np.linspace(-5, 5)
plt.plot(x, x**3, marker='s')
plt.title("My Awesome Plot")
# save in current directory; extension determines file type
plt.savefig('awesome_plot.pdf')
plt.savefig('awesome_plot.eps')
plt.savefig('awesome_plot.png', dpi=300) # bitmap graphics; don't use me for publications!
plt.savefig('awesome_plot.jpg', dpi=300) # bitmap graphics; don't use me either!
# relative path with subdirectory
# plt.savefig('build/awesome_plot.pdf')
# absolute path
# plt.savefig('/path/to/output/directory/awesome_plot.pdf')
Explanation: <a id=save_figures></a>
Saving figures
Use plt.savefig to save your figure.
You can either give path relative to your working directory or an absolute path.
Not sure what the current working directory is?
End of explanation |
9,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise in analysing, exploring and visualising databases.
For the following exercise we will use the database of the weather stations of the State of Aguascalientes, with a 15-minute recording interval.
The database contains the following fields
Step1: How many stations are in the database?
What is the accumulated precipitation of the database?
What are the 5 years with the highest precipitation in the database?
Step2: Which station has the highest accumulated precipitation in the database?
In which year and month does the highest accumulated precipitation in the database occur?
Step3: Bonus
Display the information in a heatmap | Python Code:
# import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use("ggplot")
# read the csv file
df = pd.read_csv("/Users/jorgemauricio/Documents/Research/INIFAP_Course/data/ags_ejercicio_curso.csv")
# structure of the database
Explanation: Exercise in analysing, exploring and visualising databases.
For the following exercise we will use the database of the weather stations of the State of Aguascalientes, with a 15-minute recording interval.
The database contains the following fields:
nombre: name of the station
fecha: date on which the record was taken
prec: precipitation
The database is located in the data folder
You have at most 15 minutes to answer the following questions:
How many stations are in the database?
What is the accumulated precipitation of the database?
What are the 5 years with the highest precipitation in the database?
Which station has the highest accumulated precipitation in the database?
In which year and month does the highest accumulated precipitation in the database occur?
End of explanation
# we need to create the year column
# group the data by year
Explanation: How many stations are in the database?
What is the accumulated precipitation of the database?
What are the 5 years with the highest precipitation in the database?
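A possible sketch of how these questions could be answered with pandas (the exact date format of the fecha column is an assumption here):
# number of distinct stations
df['nombre'].nunique()
# accumulated precipitation of the whole database
df['prec'].sum()
# the 5 years with the highest accumulated precipitation
df['fecha'] = pd.to_datetime(df['fecha'])  # assumes fecha is parseable by pandas
df['year'] = df['fecha'].dt.year
df.groupby('year')['prec'].sum().nlargest(5)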
End of explanation
# we need to create the month column
# group the data by year and month
Explanation: Which station has the highest accumulated precipitation in the database?
In which year and month does the highest accumulated precipitation in the database occur?
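A hedged sketch of one way to answer these two questions (it reuses the year column assumed in the previous sketch):
# station with the highest accumulated precipitation
df.groupby('nombre')['prec'].sum().idxmax()
# year and month with the highest accumulated precipitation
df['month'] = df['fecha'].dt.month
df.groupby(['year', 'month'])['prec'].sum().idxmax()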
End of explanation
# arrange the data as a table so it can be shown in a heatmap
# display the table of data
# show the information in a heatmap
# change the colours of the heatmap
# add dividing lines
# add the value of each cell
# reduce the font size of the value of each cell
Explanation: Bonus
Display the information in a heatmap
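A minimal sketch of what the bonus heatmap could look like, following the hints in the comments above (the year/month pivot and the year and month columns from the earlier sketches are assumptions):
table = df.pivot_table(index='month', columns='year', values='prec', aggfunc='sum')
sns.heatmap(table, cmap='viridis', linewidths=0.5, annot=True, annot_kws={'size': 7})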
End of explanation |
9,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This script finds the optimal band gaps of mechanically stacked III-V-Si solar cells. It uses a detailed balance approach to calculate the I-V of the individual subcells. For calculating efficiency, it adds up the maximum power of the individual subcells and divides it by the total illumination power.
Details of how the I-V is calculated can be found in this paper.
Step1: Assume that the top cell has 100% EQE
Step2: The maximum efficiency is then 42%, and the optimal band gap is 1.81 eV. For two-terminal, 2J devices, the maximum efficiency is 41% with a 1.74-eV top cell on silicon. As we can see, using a mechanical stack does not fundamentally improve the efficiency.
Try if different EQE values shift the peak
Step3: Different top cell's EQEs do not change the optimal band gap of the top cell, as expected.
Assume that the top cell has very low EQE
Step4: The maximum efficiency in this case is around 30%. Which should be very close the limiting efficiency of 1J GaAs. We can check | Python Code:
%matplotlib inline
import numpy as np
from scipy.interpolate import interp2d
import matplotlib.pyplot as plt
from scipy.io import savemat
from iii_v_si import calc_2j_si_eta, calc_2j_si_eta_direct
from detail_balanced_MJ import calc_1j_eta
def vary_top_eg(top_cell_qe,n_s=1):
topcell_eg = np.linspace(0.9, 3, num=100)
eta = np.zeros(topcell_eg.shape)
for p in range(topcell_eg.shape[0]):
eta[p] = calc_2j_si_eta_direct(top_eg=topcell_eg[p], top_rad_eta=1,
top_qe=top_cell_qe, bot_rad_eta=1,
bot_qe=1, n_s=n_s, mj="MS")
print("At AM1.5g, direct band gap assumption of silicon")
print("max eta %s:" % eta.max())
print("optimal Eg: %s" % topcell_eg[eta.argmax()])
return topcell_eg,eta
Explanation: Introduction
This script finds the optimal band gaps of mechanically stacked III-V-Si solar cells. It uses a detailed balance approach to calculate the I-V of the individual subcells. For calculating efficiency, it adds up the maximum power of the individual subcells and divides it by the total illumination power.
Details of how the I-V is calculated can be found in this paper.
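Restating the efficiency definition above as a formula (this is just the text rewritten, not an equation taken from the paper):
\begin{align}
\eta = \frac{P_{max}^{top} + P_{max}^{bottom}}{P_{illumination}}
\end{align}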
End of explanation
eg1,eta1=vary_top_eg(1)
plt.plot(eg1,eta1)
plt.xlabel("top cell's band gap")
plt.ylabel("efficiency")
plt.savefig("mstopeg.pdf")
Explanation: Assume that the top cell has 100% EQE
End of explanation
qe_range=np.linspace(0.5,1,num=3)
for q in qe_range:
eg,eta = vary_top_eg(q)
    plt.plot(eg,eta,label="QE=%s"%q)  # hold=True removed: repeated plot calls draw on the same axes
plt.legend(loc="best")
plt.xlabel("top cell's band gap")
plt.ylabel("efficiency")
Explanation: The maximum efficiency is then 42%, and the optimal band gap is 1.81 eV. For two-terminal, 2J devices, the maximum efficiency is 41% with a 1.74-eV top cell on silicon. As we can see, using a mechanical stack does not fundamentally improve the efficiency.
Try if different EQE values shift the peak
End of explanation
eg1,eta1=vary_top_eg(0.001)
plt.plot(eg1,eta1)
plt.xlabel("top cell's band gap")
plt.ylabel("efficiency")
Explanation: Different top cell's EQEs do not change the optimal band gap of the top cell, as expected.
Assume that the top cell has very low EQE
End of explanation
# calculate the SQ-limit efficiency of silicon
eta = calc_1j_eta(eg=1.12, qe=1, r_eta=1, cell_temperature=300)
print(eta)
Explanation: The maximum efficiency in this case is around 30%, which should be very close to the limiting efficiency of 1J GaAs. We can check:
End of explanation |
9,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading and interpreting plane wave files
These files are the output of Quantum Espresso using pw2qmcpack, which is contained in some patches to QE shipped with QMCPACK (in the external_files/quantum_espresso directory). These files are in HDF5 format.
Step1: Get some basic information from the file like the version number, number of kpoints (twists), atomic positions, primitive lattice, etc.
Step2: Information about first kpoint
Step3: Conversion to splines
Use FFT to convert to a grid in real-space
Step4: Further processing
For the conversion to splines, there is a further orbital rotation that keeps the overall real and imaginary parts away from zero (why?).
Also the twist (kpoint) factor of $\exp(ikr)$ is applied.
See fix_phase_rotate_c2r in QMCWavefunctions/einspline_helper.h | Python Code:
import h5py  # plus the modules used in the rest of this notebook
import numpy as np, math, scipy.fftpack, matplotlib.pyplot as plt
#f = h5py.File("../LiH-gamma.pwscf.h5","r")
#f = h5py.File("../LiH-arb.pwscf.h5","r")
f = h5py.File("../../bccH/pwscf.pwscf.h5","r")
Explanation: Reading and interpreting plane wave files
These files are the output of Quantum Espresso using pw2qmcpack, which is contained in some patches to QE shipped with QMCPACK (in the external_files/quantum_espresso directory). These files are in HDF5 format.
End of explanation
version = f.get('application/version')
print 'version = ',version[:]
number_of_kpoints = f.get('electrons/number_of_kpoints')
print 'number of kpoints = ',number_of_kpoints[0]
number_of_electrons = f.get('electrons/number_of_electrons')
print 'number of electrons = ',number_of_electrons[0]
atom_pos = f.get('atoms/positions')
print atom_pos[:]
prim_vectors = f.get('supercell/primitive_vectors')
print prim_vectors[:]
# Reciprocal lattice vectors
def get_kspace_basis(basis):
# Volume factor for reciprocal lattice
a1, a2, a3 = basis
vol = a1.dot(np.cross(a2, a3))
pre = 2*math.pi
#pre = 1.0
b1 = pre*np.cross(a2, a3)/vol
b2 = pre*np.cross(a3, a1)/vol
b3 = pre*np.cross(a1, a2)/vol
return [b1, b2, b3]
kbasis = get_kspace_basis(prim_vectors)
print kbasis
Explanation: Get some basic information from the file like the version number, number of kpoints (twists), atomic positions, primitive lattice, etc.
End of explanation
kpoint = f.get('electrons/kpoint_0/reduced_k')
print kpoint[:]
gvectors = f.get('electrons/kpoint_0/gvectors')
print gvectors[0:10,:]
pw_coeffs = f.get('electrons/kpoint_0/spin_0/state_0/psi_g')
print pw_coeffs.shape
print pw_coeffs[0:10,:]
# Compute the orbital value at one point in real-space
def compute_psi(gvectors, kbasis, coeff, twist, r):
kp = kbasis[0]*twist[0] + kbasis[1]*twist[1] + kbasis[2]*twist[2]
total_r = 0.0
total_i = 0.0
for idx in range(len(gvectors)):
G = gvectors[idx]
c = coeff[idx]
q = kbasis[0]*G[0] + kbasis[1]*G[1] + kbasis[2]*G[2] + kp
qr = np.dot(q,r)
cosqr = math.cos(qr)
sinqr = math.sin(qr)
total_r += c[0] * cosqr - c[1] * sinqr
total_i += c[0] * sinqr + c[1] * cosqr
#print 'total = ',total_r, total_i
return complex(total_r, total_i)
# Test it out at one point.
r = np.array([0.0, 0.0, 0.0])
compute_psi(gvectors, kbasis, pw_coeffs, kpoint, r)
# Compute a range of values
psi_vals = []
rvals = []
nstep = 10
cell_width = prim_vectors[0,0]
step = cell_width/nstep
for i in range(nstep+1):
r1 = step*i
rvals.append(r1)
r = np.array([r1, 0.0, 0.0])
pv = compute_psi(gvectors, kbasis, pw_coeffs, kpoint, r)
print r1, pv
psi_vals.append(pv)
plt.plot(rvals, [p.real for p in psi_vals])
Explanation: Information about first kpoint
End of explanation
# Find the mesh size
# See EinsplineSetBuilder::ReadGvectors_ESHDF in QMCWavefunctions/EinsplineSetBuilderReadBands_ESHDF.cpp
# Mesh sizes taken from QMCPACK output.
# BCC H
#meshsize = (52, 52, 52)
# LiH
#meshsize = (68, 68, 68)
MeshFactor = 1.0
max_g = np.zeros(3)
for g in gvectors:
max_g = np.maximum(max_g, np.abs(g))
print 'Maximum G = ',max_g
meshsize = np.ceil(4*max_g*MeshFactor).astype(np.int)
print 'mesh size = ',meshsize
# Plus some more code for mesh sizes larger than 128 than restricts
# sizes to certain allowed values (more efficient FFT?)
# Place points in the box at the right G-vector
# see unpack4fftw in QMCWavefunctions/einspline_helper.h
fftbox = np.zeros(meshsize, dtype=np.complex_)
for c, g in zip(pw_coeffs, gvectors):
idxs = [(g[i] + meshsize[i])%meshsize[i] for i in range(3)]
fftbox[idxs[0], idxs[1], idxs[2]] = complex(c[0], c[1])
realbox = scipy.fftpack.fftn(fftbox)
fftvals = np.array([a.real for a in realbox[0:meshsize[0],0,0]])
fftvals
xstep = prim_vectors[0][0]/meshsize[0]
xvals = [xstep * i for i in range(meshsize[0])]
# Compare results of FFT and the compute_psi function
# They don't line up completely because they are on different real-space grids
line1 = plt.plot(rvals, [p.real for p in psi_vals], label="compute_psi")
line2 = plt.plot(xvals, fftvals, label="FFT")
plt.legend()
Explanation: Conversion to splines
Use FFT to convert to a grid in real-space
End of explanation
realbox_kr = np.empty_like(realbox)
for ix in range(meshsize[0]):
for iy in range(meshsize[1]):
for iz in range(meshsize[2]):
tx = kpoint[0]*ix/meshsize[0]
ty = kpoint[1]*iy/meshsize[1]
tz = kpoint[2]*iz/meshsize[2]
tt = -2*np.pi*(tx+ty+tz)
cos_tt = math.cos(tt)
sin_tt = math.sin(tt)
r = realbox[ix, iy, iz]
realbox_kr[ix,iy,iz] = r*complex(cos_tt, sin_tt)
rNorm = 0.0
iNorm = 0.0
ii = 0
for val in np.nditer(realbox_kr):
#for val in psi_vals:
rNorm += val.real*val.real
iNorm += val.imag*val.imag
ii += 1
print 'real norm, imaginary norm',rNorm,iNorm
arg = math.atan2(iNorm, rNorm)
print 'angle (degrees)',math.degrees(arg)
ang = np.pi/8 - 0.5*arg
sin_ang = math.sin(ang)
cos_ang = math.cos(ang)
rot_psi_vals = []
for val in psi_vals:
rot = val.real*cos_ang - val.imag*sin_ang
rot_psi_vals.append(rot)
# These values should be comparable to the output of the spline orbitals
rot_psi_vals
# These are on a different grid than the values above
fft_rot_vals = []
for val in realbox_kr[:,0,0]:
rot = val.real*cos_ang - val.imag*sin_ang
fft_rot_vals.append(rot)
fft_rot_vals[0:10]
Explanation: Further processing
For the conversion to splines, there is a further orbital rotation that keeps the overall real and imaginary parts away from zero (why?).
Also the twist (kpoint) factor of $\exp(ikr)$ is applied.
See fix_phase_rotate_c2r in QMCWavefunctions/einspline_helper.h
End of explanation |
9,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First Steps
Now that you have installed Marvin, it's time to take your first steps. If you want to learn more about how Marvin works, then go see General Info to learn about Marvin Modes, Versions, or Downloading. If you just want to play, then read on.
First let's run some boilerplate code for Python 2/3 compatibility and plotting in the notebook
Step1: Now, let’s import Marvin
Step2: Let's see what release we're using. Releases can be either MPLs (e.g. MPL-5) or DRs (e.g. DR13), however DRs are currently disabled in Marvin.
Step3: On initial import, Marvin will set the default data release to use the latest MPL available, currently MPL-6. You can change the version of MaNGA data using the Marvin Config.
Step4: But let's work with MPL-6
Step5: My First Cube
Now let’s play with a Marvin Cube!
Import the Marvin-Tools Cube class
Step6: Let's load a cube from a local file. Start by specifying the full path and name of the file, such as
Step7: Create a Cube object
Step8: Now we have a Cube object
Step9: How about we look at some meta-data
Step10: ...and the quality and target bits
Step11: Get a Spaxel
Cubes have several functions currently available
Step12: Spaxels have a spectrum associated with it. It has the wavelengths and fluxes of each spectral channel
Step13: Plot the spectrum!
Step14: Save plot to Downloads directory | Python Code:
from __future__ import print_function, division, absolute_import
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: First Steps
Now that you have installed Marvin, it's time to take your first steps. If you want to learn more about how Marvin works, then go see General Info to learn about Marvin Modes, Versions, or Downloading. If you just want to play, then read on.
First let's run some boilerplate code for Python 2/3 compatibility and plotting in the notebook:
End of explanation
import marvin
Explanation: Now, let’s import Marvin:
End of explanation
marvin.config.release
Explanation: Let's see what release we're using. Releases can be either MPLs (e.g. MPL-5) or DRs (e.g. DR13), however DRs are currently disabled in Marvin.
End of explanation
from marvin import config
config.setRelease('MPL-5')
print('MPL:', config.release)
Explanation: On initial import, Marvin will set the default data release to use the latest MPL available, currently MPL-6. You can change the version of MaNGA data using the Marvin Config.
End of explanation
config.setRelease('MPL-6')
# check designated version
config.release
Explanation: But let's work with MPL-6:
End of explanation
from marvin.tools.cube import Cube
Explanation: My First Cube
Now let’s play with a Marvin Cube!
Import the Marvin-Tools Cube class:
End of explanation
#----- EDIT THIS CELL -----#
# filename = '/Users/Brian/Work/Manga/redux/v1_5_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
filename = 'path/to/manga/cube/manga-8485-1901-LOGCUBE.fits.gz'
filename = '/Users/andrews/manga/spectro/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
filename = '/Users/Brian/Work/Manga/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
Explanation: Let's load a cube from a local file. Start by specifying the full path and name of the file, such as:
/Users/Brian/Work/Manga/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz
EDIT Next Cell
End of explanation
cc = Cube(filename=filename)
Explanation: Create a Cube object:
End of explanation
print(cc)
Explanation: Now we have a Cube object:
End of explanation
cc.ra, cc.dec, cc.header['SRVYMODE']
Explanation: How about we look at some meta-data
End of explanation
cc.target_flags
cc.quality_flag
Explanation: ...and the quality and target bits
End of explanation
spax = cc[10,10]
# print the spaxel to see the x,y coord from the lower left, and the coords relative to the cube center, x_cen/y_cen
spax
Explanation: Get a Spaxel
Cubes have several functions currently available: getSpaxel, getMaps, getAperture. Let's look at spaxels. We can retrieve spaxels from a cube easily via indexing. In this manner, spaxels are 0-indexed from the lower left corner. Let's get spaxel (x=10, y=10):
End of explanation
# let's grab the central spaxel
spax = cc.getSpaxel(x=0, y=0)
spax
spax.flux.wavelength
spax.flux
Explanation: Spaxels have a spectrum associated with it. It has the wavelengths and fluxes of each spectral channel:
Alternatively grab a spaxel with getSpaxel. Use the xyorig keyword to set the coordinate origin point: 'lower' or 'center'. The default is "center"
End of explanation
# turn on interactive plotting
%matplotlib notebook
spax.flux.plot()
Explanation: Plot the spectrum!
End of explanation
# To save the plot, we need to draw it in the same cell as the save command.
spax.flux.plot()
import os
plt.savefig(os.getenv('HOME') + '/Downloads/my-first-spectrum.png')
# NOTE - if you are using the latest version of iPython and Jupyter notebooks, then interactive matplotlib plots
# should be enabled. You can save the figure with the save icon in the interactive toolbar.
Explanation: Save plot to Downloads directory:
End of explanation |
9,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
12.850 - Assignment 3 | Bryce Corlett
Exploring Convergence Properties
This assignment was motivated by examining the different convergence properties of Jacobi, Gauss-Seidel, and SOR iteration schemes as applied to a toy 1-D elliptic problem
Step1: Here I create the code that will perform a specified iteration scheme, with the option of specifying a value for $\omega$.
Step2: The following code will initialize the $[A]$ and $[b]$ matrices according to the inputs, and depending on which code I call, as they use Neumann + Dirichlet and Dirichlet-only boundary conditions, respectively.
Step3: 1. Sensitivity of the System to Boundary Conditions
Note that the spectral radius changes as the boundary conditions change, but not when the values change. The spectral radius is not influenced by the boundary condition values per se, but it is influenced by the type of boundary conditions present within the system. This is because the values are not introduced to the matrices $[B]$ or $[C]$, but the boundary conditions will influence the values on the diagonals of the matrix $[A]$, which will either end up in matrices $[B]$ or $[C]$ depending on the iteration scheme.
See below for comparison values
Step4: Variations in boundary condition values
Step5: Variations in the boundary condition itself
Step6: 2. Sensitivity of the solution to $\nu$
The convergence of the system is sensitive to the value of $\nu$, as $\nu$ will change the values on the diagonal of the matrix $[A]$, changing the determinant of $[B]$ in the process of calculating its inverse, and thus the spectral radius ($\rho$).
Step7: Reviewing the effect of $\nu$
After plotting these values, it became apparent that further investigation of the SOR factor was needed. After testing the matrices with varying values for this factor, it appears that the factor can be chosen specifically for given conditions to maximize the code's efficiency. The downside of this is that the SOR method will become inaccurate if the SOR factor is not tailored to matrix $[A]$, i.e. adjusted relative to the magnitude of the terms $C_{k,k}$, as seen in the flat-lining of the SOR spectral radius for given values of $\nu$, or effectively values of $\frac{\nu}{C_{k,k}}$.
Questioning the effectiveness of the SOR code
As the SOR code appears to yield highly-consistent values for the number of iterations required, I am unsure as to whether the issue is with my SOR code, or if the issue is with the SOR procedure. I am not sure if there is a way to pre-calibrate the SOR factor to work with the given conditions, but I hope to look into this in the next week or so.
3. Convergence of the system on N
The system takes longer time to converge for matrices with larger N's, as the computations involve matrices of size N x N; however, it appears that the system requires the same number of iterations in spite of the spectral radius increasing (which should indicate that the procedure takes more iterations to reach the specified tolerance). This may be an artifact of my SOR code; further investigation is needed, which will take place this weekend and next week to iron out any bugs that I might be able to find in the SOR portion of my code. Fortunately, the Gauss-Seidel and Jacobi schemes appear to be working fine. | Python Code:
#Import toolboxes
from scipy import sparse #Allows me to create sparse matrices (i.e. not store all of the zeros in the 'A' matrix)
from scipy.sparse import linalg as linal
from numpy import * #To make matrices and do matrix manipulation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: 12.850 - Assignment 3 | Bryce Corlett
Exploring Convergence Properties
This assignment was motivated by examining the different convergence properties of Jacobi, Gauss-Seidel, and SOR iteration schemes as applied to a toy 1-D elliptic problem:
\begin{align}
\frac{\partial}{\partial x} \kappa \frac{\partial}{\partial x} u = \nu u
\end{align}
The problem is similar to those observed in the previous two assignments, but with the additional forced advection term ($\nu u$), which appears in the $C_{k,k}$ term in the discretized equation:
\begin{align}
u_{k+1}\underbrace{\left[\frac{K_{k}}{\Delta z^{c}_{k+1} \Delta z^{f}_{k}}\right]}_{C_{k,k+1}} - u_{k}\underbrace{\left[\frac{K_{k}}{\Delta z^{c}_{k+1} \Delta z^{f}_{k}} + \frac{K_{k-1}}{\Delta z^{c}_{k} \Delta z^{f}_{k}} - \nu \right]}_{C_{k,k}} + u_{k-1}\underbrace{\left[\frac{K_{k-1}}{\Delta z^{c}_{k} \Delta z^{f}_{k}}\right]}_{C_{k,k-1}} = 0
\end{align}
When solving, the equation can be written as $[A]x = b$, where $[A]$ is the matrix of discrete values ($C_{k,k-1}$,$C_{k,k}$, and $C_{k,k+1}$), $x$ is a vector of the values you are solving for, and $b$ is a vector of the boundary conditions. The boundary conditions will affect the values within $[A]$, as the top and bottom cells are either forced with known values in the cells beneath (Dirichlet), or one known boundary value and a forced boundary flux (Dirichlet for the known boundary, and Neumann for the known flux boundary). In the first case, the known values move the respective portion of the $C_{k,k}$ term to the boundary conditions vector; in the second case, the flux is forced within the boundary conditions vector.
Once the discrete matrix $[A]$ is formed, the iteration processes break down the procedure in different manners according to whether they are wholly dependent on past iterations (Jacobi), use current-iteration estimates where available (Gauss-Seidel), or use a combination of current-iteration and past-iteration estimates (relaxed by some factor) to reach the solution more quickly (Succesive Over-Relaxation, or SOR).
To further grasp the differences, and what the following code is doing, we write the original equation as:
\begin{align}
Ax = b
\end{align}
where the matrix $A$ can be broken into three discrete matrices: the lower triangle ($\triangleright$), the diagonal ($\diagdown$), and the upper triangle ($\triangleleft$), where $[A] = [\triangleright] + [\diagdown] + [\triangleleft]$.
Using this notation, we can write the Jacobi and Gauss-Seidel methods as $B x_{n+1} = b - C x_{n}$, and SOR as $B x_{n+1} = \omega b - C x_{n}$, where $B$ and $C$ are formed from the original matrix $A$. Thus:
\begin{align}
Jacobi: & B = [\diagdown] & C = [\triangleright] + [\triangleleft] \\
Gauss-Seidel: & B = [\triangleright] + [\diagdown] & C = [\triangleleft] \\
SOR: & B = [\diagdown] + \omega[\triangleright] & C = (\omega - 1)[\diagdown] + \omega[\triangleleft] \\
\end{align}
The convergence of a system can be quantified as its spectral radius ($\rho$), where $\rho = max(~|\lambda_{i}|~)$, and $\lambda_{i}$ are the eigenvalues of $\left[ B^{-1}C \right]$; thus, how quickly a system converges is intimately related to the iterative method and the type of boundary conditions.
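As a rough rule of thumb (a standard result, stated here for orientation rather than derived), the error after $n$ iterations scales like $\rho^{n}$, so reducing the error by a factor $\epsilon$ takes roughly
\begin{align}
n \approx \frac{\ln(\epsilon)}{\ln(\rho)}
\end{align}
iterations, which is why a spectral radius close to 1 means slow convergence.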
Now, we get to the code.
End of explanation
def space_iterate(method,A,b,resid,**optional):
'''Uses sparse matrices'''
if ('w' in optional):
#print 'w found, it is ', optional['w']
w = optional['w']
else:
#print 'no w found, assumed to be 1.17 if needed.'
w = 1.17
#w = 1.02
n = max(A.get_shape())
t = 0;
Q = b[:,0]
Rold = 100 #initialize value
Rnew = 1
while (absolute(Rnew - Rold)/float((absolute([Rold]))))*100.0 > resid:
t = t+1
Rold = Rnew
Q = append(Q,Q[:,0].dot(0.),axis=1)
if method == 'jacobi': #Jacobi iteration scheme
B = sparse.tril(A,0) - sparse.tril(A,-1) #only the diagonal
C = sparse.triu(A,1) + sparse.tril(A,-1) #only off-diagonal
Q[:,t] = linal.inv(B).dot(b - C.dot(Q[:,t-1]))
elif method == 'gaussseidel': #Gauss-Seidel iteration scheme
B = sparse.tril(A,0)
C = sparse.triu(A,1)
Q[:,t] = linal.inv(B).dot(b - C.dot(Q[:,t-1]))
elif method == 'sor': #SOR method
#Q[i,t] = w*Q[i,t] + (1-w)*Q[i,t-1]
B = sparse.tril(A,-1)*(float(w)) + (sparse.tril(A,0)-sparse.tril(A,-1))
C = ( (sparse.tril(A,0)-sparse.tril(A,-1)).dot(float(w-1))) + (sparse.triu(A,1).dot(float(w)) )
#B = - sparse.tril(A,-1)*(float(w)) + (sparse.tril(A,0)-sparse.tril(A,-1))
#C = ( (sparse.tril(A,0)-sparse.tril(A,-1)).dot(float(1-w))) + (sparse.triu(A,1).dot(float(w)) )
Q[:,t] = linal.inv(B).dot((float(w)*b) - C.dot(Q[:,t-1]))
else:
print('Improper Option - program closing.')
return
Rnew = mean(Q[:,t])
#print (absolute(Rnew - Rold)/float((absolute(Rold))))*100.0
B = B.tocsc() #convert sparse matrices to csc format
C = C.tocsc()
print('Iterations = '+str(t)+'; Spectral Radius = '+str(absolute(linalg.eigvals(linal.inv(B).dot(C).todense())).max()))
#(absolute(linal.eigs(linal.inv(B.tocsc()).dot(C.tocsc()),k=ndim(b)-1,return_eigenvectors=False)).max())
I = t
S = absolute(linalg.eigvals(linal.inv(B).dot(C).todense())).max()
#print('The spectral radius of the problem is '+str(absolute(linal.eigs(linal.inv(B).dot(C),k=ndim(b)-1,return_eigenvectors=False)).max()) )
return[Q,I,S]
Explanation: Here I create the code that will perform a specified iteration scheme, with the option of specifying a value for $\omega$.
End of explanation
def neumann_stable(n,v,u0,Z_f,Z_c,K):
#Create Neumann + Dirichlet boundary conditions, yielding matrices A + b
A=zeros((3,n)) # For solving for 'n+1' solution
for item in range(1,n+1): #Start from bed and work to surface
#j-1
if item>1:
A[0,item-2]=+(K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1])) )
#j
A[1,item-1]=-( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) + (K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item]))) + v)
if item == n: #Sets free-slip boundary condition at the surface
A[1,item-1]=-( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) + v)
#j+1
if item != n:
A[2,item]=+(K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item])) )
A = sparse.spdiags(A,array([-1,0,1]),n,n)
# Construct Boundary Condition Matrix
b=zeros(size(A,1))
b[0]=b[0] + (u0* (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) ) #Because u0 is zero, this line does nothing.
# Define + Apply guess + boundary conditions
b=matrix(b).T
return[A,b]
def dirichlet(n,v,u0,u1,Z_f,Z_c,K):
#Create Dirichlet boundary conditions, yielding matrices A + b
A=zeros((3,n)) # For solving for 'n+1' solution
for item in range(1,n+1): #Start from bed and work to surface
#j-1
if item>1:
A[0,item-2]=+(K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1])) )
#j
A[1,item-1]=-( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) + (K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item]))) + v )
#j+1
if item != n:
A[2,item]=+(K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item])) )
A = sparse.spdiags(A,array([-1,0,1]),n,n)
# Construct Boundary Condition Matrix
b=zeros(size(A,1))
b[0]=b[0] + (u0* (K[1-1]/((Z_f[1]-Z_f[1-1])*(Z_c[1]-Z_c[1-1]))) )
b[n-1]=b[n-1] + (u1* (K[n-1]/((Z_f[n]-Z_f[n-1])*(Z_c[n]-Z_c[n-1]))) )
# Define + Apply guess + boundary conditions
b=matrix(b).T
return [A,b]
#Initialize a comparison
n=20 #n must be greater than 6 for my SOR code to work - the issue lies with calculating eigenvalues.
K=1
v=0.3
u0=0.
u1=1.
[A,b]=dirichlet(n,v,u0,1,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]));
[Q,I,S]=space_iterate('sor',A,b,1,w=1.25);
Explanation: The following code will initialize the $[A]$ and $[b]$ matrices according to the inputs, and depending on which code I call, as they use Neumann + Dirichlet and Dirichlet-only boundary conditions, respectively.
End of explanation
#Initialize a comparison
n=20 #n must be greater than 6 for my SOR code to work - the issue lies with calculating eigenvalues.
K=0.3
v=0.5
u0=0.
u1=1.
#[A,b]=dirichlet(n,v,u0,u1,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]))
#[A,b]=neumann_stable(n,v,u1,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]))
Explanation: 1. Sensitivity of the System to Boundary Conditions
Note that the spectral radius changes as the boundary conditions change, but not when the values change. The spectral radius is not influenced by the boundary condition values per se, but it is influenced by the type of boundary conditions present within the system. This is because the values are not introduced to the matrices $[B]$ or $[C]$, but the boundary conditions will influence the values on the diagonals of the matrix $[A]$, which will either end up in matrices $[B]$ or $[C]$ depending on the iteration scheme.
See below for comparison values:
End of explanation
print 'Variations under changing boundary values, using Dirichlet boundary conditions:'
print '\n Conditions with u_surface = 1: '
[A,b]=dirichlet(n,v,u0,1,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]));
[Q,I,S]=space_iterate('sor',A,b,1);
print '\n\n Conditions with u_surface = 2: '
[A,b]=dirichlet(n,v,u0,2,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]));
[Q,I,S]=space_iterate('sor',A,b,1);
print '\n\n Conditions with u_surface = 3: '
[A,b]=dirichlet(n,v,u0,3,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]));
[Q,I,S]=space_iterate('sor',A,b,1);
Explanation: Variations in boundary condition values:
End of explanation
print 'Variations under changing boundary conditions | Dirichlet v Neumann + Dirichlet:'
print '\n\n Conditions with Neumann + Dirichlet: '
[A,b]=neumann_stable(n,v,1,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]));
[Q,I,S]=space_iterate('sor',A,b,1);
print '\n\n Conditions with Dirichlet: '
[A,b]=dirichlet(n,v,1,0,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]));
[Q,I,S]=space_iterate('sor',A,b,1);
Explanation: Variations in the boundary condition itself:
End of explanation
[A,b]=dirichlet(n,0.5,1,0,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]))
[Q,I,S]=space_iterate('jacobi',A,b,1)
[Q,I,S]=space_iterate('gaussseidel',A,b,1)
[Q,I,S]=space_iterate('sor',A,b,1)
nu=[1.0e3,1.0e2,1.0e1,9.0,8.0,7.0,6.0,5.0,4.0,3.0,2.0,1.0e0,.9,.8,.7,.6,.5,.4,.3,.2,1.0e-1,1.0e-2,1.0e-3]
#fig, axs = plt.subplots(1,2)
#,ax=axs[1]
plt.figure(figsize=(20,5))
plt.subplot(121)
for R in range(0,size(nu)): #Varies the value of nu
[A,b]=dirichlet(n,nu[R],1,0,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]))
[Q,Is,Ss]=space_iterate('sor',A,b,1)
[Q,Ig,Sg]=space_iterate('gaussseidel',A,b,1)
[Q,Ij,Sj]=space_iterate('jacobi',A,b,1)
plt.semilogx(nu[R],Ss,'ok')
plt.semilogx(nu[R],Sg,'sk')
plt.semilogx(nu[R],Sj,'+k')
if int == 1.0:
plt.legend(['SOR','Gauss-Seidel','Jacobi'])
#plt.title('Surface Velocity')
plt.ylabel(r'$\rho$', fontsize=20)
plt.xlabel(r'$\nu$', fontsize=20)
plt.subplot(122)
for R in range(0,size(nu)): #Varies the value of nu
[A,b]=dirichlet(n,nu[R],1,0,matrix(arange(0,n+2)).T,matrix(arange(0,n+2)).T,K*ones([n+1,1]))
[Q,Is,Ss]=space_iterate('sor',A,b,1)
[Q,Ig,Sg]=space_iterate('gaussseidel',A,b,1)
[Q,Ij,Sj]=space_iterate('jacobi',A,b,1)
plt.semilogx(nu[R],Is,'ok')
plt.semilogx(nu[R],Ig,'sk')
plt.semilogx(nu[R],Ij,'+k')
if R == 1.0:
plt.legend(['SOR','Gauss-Seidel','Jacobi'])
#plt.title('Surface Velocity')
plt.ylabel(r'$Iterations$', fontsize=20)
plt.xlabel(r'$\nu$', fontsize=20)
Explanation: 2. Sensitivity of the solution to $\nu$
The convergence of the system is sensitive to the value of $\nu$, as $\nu$ will change the values on the diagonal of the matrix $[A]$, changing the determinant of $[B]$ in the process of calculating its inverse, and thus the spectral radius ($\rho$).
End of explanation
K=1.
v=1.
u0=0.
u1=1.
N=(arange(0,20)+1)*10
#fig, axs = plt.subplots(1,2)
#,ax=axs[1]
plt.figure(figsize=(20,5))
plt.subplot(121)
for R in range(0,size(N)): #Varies the value of n
[A,b]=dirichlet(N[R],v,1,0,matrix(arange(0,N[R]+2)).T,matrix(arange(0,N[R]+2)).T,K*ones([N[R]+1,1]))
[Q,Is,Ss]=space_iterate('sor',A,b,1)
[Q,Ig,Sg]=space_iterate('gaussseidel',A,b,1)
[Q,Ij,Sj]=space_iterate('jacobi',A,b,1)
plt.plot(N[R],Ss,'ok')
plt.plot(N[R],Sg,'sk')
plt.plot(N[R],Sj,'+k')
if R == 1.0:
plt.legend(['SOR','Gauss-Seidel','Jacobi'])
#plt.title('Surface Velocity')
plt.ylabel(r'$\rho$', fontsize=20)
plt.xlabel(r'$n$', fontsize=20)
plt.subplot(122)
for R in range(0,size(N)): #Varies the value of n
[A,b]=dirichlet(N[R],v,1,0,matrix(arange(0,N[R]+2)).T,matrix(arange(0,N[R]+2)).T,K*ones([N[R]+1,1]))
[Q,Is,Ss]=space_iterate('sor',A,b,1)
[Q,Ig,Sg]=space_iterate('gaussseidel',A,b,1)
[Q,Ij,Sj]=space_iterate('jacobi',A,b,1)
plt.plot(N[R],Is,'ok')
plt.plot(N[R],Ig,'sk')
plt.plot(N[R],Ij,'+k')
if R == 1.0:
plt.legend(['SOR','Gauss-Seidel','Jacobi'])
#plt.title('Surface Velocity')
plt.ylabel(r'$Iterations$', fontsize=20)
plt.xlabel(r'$n$', fontsize=20)
Explanation: Reviewing the effect of $\nu$
After plotting these values, it became apparent that further investigation of the SOR factor was needed. After testing the matrices with varying values for this factor, it appears that the factor can be chosen specifically for given conditions to maximize the code's efficiency. The downside of this is that the SOR method will become inaccurate if the SOR factor is not tailored to matrix $[A]$, i.e. adjusted relative to the magnitude of the terms $C_{k,k}$, as seen in the flat-lining of the SOR spectral radius for given values of $\nu$, or effectively values of $\frac{\nu}{C_{k,k}}$.
Questioning the effectiveness of the SOR code
As the SOR code appears to yield highly-consistent values for the number of iterations required, I am unsure as to whether the issue is with my SOR code, or if the issue is with the SOR procedure. I am not sure if there is a way to pre-calibrate the SOR factor to work with the given conditions, but I hope to look into this in the next week or so.
3. Convergence of the system on N
The system takes longer time to converge for matrices with larger N's, as the computations involve matrices of size N x N; however, it appears that the system requires the same number of iterations in spite of the spectral radius increasing (which should indicate that the procedure takes more iterations to reach the specified tolerance). This may be an artifact of my SOR code; further investigation is needed, which will take place this weekend and next week to iron out any bugs that I might be able to find in the SOR portion of my code. Fortunately, the Gauss-Seidel and Jacobi schemes appear to be working fine.
End of explanation |
9,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using machine learning techniques
The ideas of
<a href="https://home.deib.polimi.it/bestagini/">Paolo Bestagini's</a> "Try 2", <a href="https://github.com/ar4">Alan Richardson's</a> "Try 2", and <a href="https://github.com/dalide">Dalide's</a> "Try 6", augmented by Dimitrios Oikonomou and Eirik Larsen (ESA AS) by adding the gradient of gradient of features as augmented features, with an ML estimator for PE using both training and blind well data, and removing the NM_M from augmented features.
Step1: Parameters
Step2: Load data
Let's load the data
Step3: Let's store features, labels and other data into numpy arrays.
Step4: Data inspection
Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that
Step5: Feature distribution
plot_feature_stats(X, y, feature_names, facies_colors, facies_names)
mpl.rcParams.update(inline_rc)
Step6: Feature imputation
Let us fill missing PE values. Currently no feature engineering is used, but this should be explored in the future.
Step7: Augment features
Step8: Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that
Step9: Classification parameters optimization
Let us perform the following steps for each set of parameters | Python Code:
# Import
from __future__ import division
get_ipython().magic(u'matplotlib inline')
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (20.0, 10.0)
inline_rc = dict(mpl.rcParams)
from classification_utilities import make_facies_log_plot
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, GradientBoostingClassifier
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from scipy.signal import medfilt
import sys, scipy, sklearn
print('Python: ' + sys.version.split('\n')[0])
print(' ' + sys.version.split('\n')[0])
print('Pandas: ' + pd.__version__)
print('Numpy: ' + np.__version__)
print('Scipy: ' + scipy.__version__)
print('Sklearn: ' + sklearn.__version__)
print('Xgboost: ' + xgb.__version__)
Explanation: Facies classification using machine learning techniques
The ideas of
<a href="https://home.deib.polimi.it/bestagini/">Paolo Bestagini's</a> "Try 2", <a href="https://github.com/ar4">Alan Richardson's</a> "Try 2",
<a href="https://github.com/dalide">Dalide's</a> "Try 6", augmented, by Dimitrios Oikonomou and Eirik Larsen (ESA AS) by
adding the gradient of gradient of features as augmented features.
with an ML estimator for PE using both training and blind well data.
removing the NM_M from augmented features.
In the following, we provide a possible solution to the facies classification problem described at https://github.com/seg/2016-ml-contest.
The proposed algorithm is based on the use of random forests, xgboost or gradient boost combined in one-vs-one multiclass strategy. In particular, we would like to study the effect of:
- Robust feature normalization.
- Feature imputation for missing feature values.
- Well-based cross-validation routines.
- Feature augmentation strategies.
- Test multiple classifiers
Script initialization
Let's import the used packages and define some parameters (e.g., colors, labels, etc.).
End of explanation
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
#Select classifier type
#clfType='GB' #Gradient Boosting Classifier
clfType='XBA' #XGB Clasifier
# Define window length
N_neig=2
#Seed
seed = 24
np.random.seed(seed)
Explanation: Parameters
End of explanation
# Load data from file
data = pd.read_csv('../facies_vectors.csv')
# Load Test data from file
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data.insert(0,'Facies',np.ones(test_data.shape[0])*(-1))
#Create Dataset for PE prediction from both dasets
all_data=pd.concat([data,test_data])
Explanation: Load data
Let's load the data
End of explanation
# Store features and labels
X = data[feature_names].values # features
y = data['Facies'].values # labels
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
Explanation: Let's store features, labels and other data into numpy arrays.
End of explanation
# Define function for plotting feature statistics
def plot_feature_stats(X, y, feature_names, facies_colors, facies_names):
# Remove NaN
nan_idx = np.any(np.isnan(X), axis=1)
X = X[np.logical_not(nan_idx), :]
y = y[np.logical_not(nan_idx)]
# Merge features and labels into a single DataFrame
features = pd.DataFrame(X, columns=feature_names)
labels = pd.DataFrame(y, columns=['Facies'])
for f_idx, facies in enumerate(facies_names):
labels[labels[:] == f_idx] = facies
data = pd.concat((labels, features), axis=1)
# Plot features statistics
facies_color_map = {}
for ind, label in enumerate(facies_names):
facies_color_map[label] = facies_colors[ind]
sns.pairplot(data, hue='Facies', palette=facies_color_map, hue_order=list(reversed(facies_names)))
Explanation: Data inspection
Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that:
- Some features seem to be affected by a few outlier measurements.
- Only a few wells contain samples from all classes.
- PE measurements are available only for some wells.
End of explanation
# Facies per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5)
plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist[0])))
ax.set_xticklabels(facies_names)
ax.set_title(w)
# Features per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0))
plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist)))
ax.set_xticklabels(feature_names)
ax.set_yticks([0, 1])
ax.set_yticklabels(['miss', 'hit'])
ax.set_title(w)
Explanation: Feature distribution
plot_feature_stats(X, y, feature_names, facies_colors, facies_names)
mpl.rcParams.update(inline_rc)
End of explanation
def make_pe(X, seed):
reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=seed)
DataImpAll = all_data[feature_names].copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(data.PE.isnull()),feature_names.index('PE')] = reg.predict(data.loc[data.PE.isnull(),:][['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']])
return X
Explanation: Feature imputation
Let us fill missing PE values. Currently no feature engineering is used, but this should be explored in the future.
End of explanation
# ## Feature augmentation
# Our guess is that facies do not abrutly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somehow correlated. To possibly exploit this fact, let us perform feature augmentation by:
# - Select features to augment.
# - Aggregating aug_features at neighboring depths.
# - Computing aug_features spatial gradient.
# - Computing aug_features spatial gradient of gradient.
# Feature windows concatenation function
def augment_features_window(X, N_neig, features=-1):
# Parameters
N_row = X.shape[0]
if features==-1:
N_feat = X.shape[1]
features=np.arange(0,X.shape[1])
else:
N_feat = len(features)
# Zero padding
X = np.vstack((np.zeros((N_neig, X.shape[1])), X, (np.zeros((N_neig, X.shape[1])))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig)+X.shape[1]))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
if (c==0):
this_row = np.hstack((this_row, X[r+c,:]))
else:
this_row = np.hstack((this_row, X[r+c,features]))
X_aug[r-N_neig] = this_row
return X_aug
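# Added sanity check (illustrative only, not from the original workflow): with N_neig=1
# each row gets the selected features of its two neighbours appended to its own features.
X_demo = np.arange(12, dtype=float).reshape(4, 3)
print(augment_features_window(X_demo, N_neig=1).shape)  # expected (4, 9)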
# Feature gradient computation function
def augment_features_gradient(X, depth, features=-1):
if features==-1:
features=np.arange(0,X.shape[1])
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X[:,features], axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
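# Added sanity check for the gradient helper (illustrative values, not part of the original run):
# the output keeps the input row count, with a zero-padded last row.
Xg_demo = np.ones((4, 3))
dg_demo = np.array([1.0, 1.5, 2.0, 2.5])
print(augment_features_gradient(Xg_demo, dg_demo).shape)  # expected (4, 3)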
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1, features=-1, seed=None, pe=True):
seed = seed or 0
if pe:
X = make_pe(X, seed)
if (features==-1):
N_Feat=X.shape[1]
else:
N_Feat=len(features)
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1] + N_Feat*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig,features)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx],features)
X_aug_grad_grad = augment_features_gradient(X_aug_grad, depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad,X_aug_grad_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v, clf):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=3)
return y_v_hat
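# Note (added): medfilt with kernel_size=3 only removes isolated one-sample facies spikes
# within each well. A small illustration with made-up labels (assumes medfilt was imported
# from scipy.signal together with the other imports):
print(medfilt(np.array([2, 2, 5, 2, 2, 3, 3]), kernel_size=3))  # the isolated 5 is filtered out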
# Define which features to augment by introducing window and gradients.
augm_Features=['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'RELPOS']
# Get the columns of features to be augmented
feature_indices=[feature_names.index(log) for log in augm_Features]
# # Augment features
# X_aug, padded_rows = augment_features(X, well, depth, N_neig=N_neig, features=feature_indices)
# # Remove padded rows
# data_no_pad = np.setdiff1d(np.arange(0,X_aug.shape[0]), padded_rows)
# X=X[data_no_pad ,:]
# depth=depth[data_no_pad]
# X_aug=X_aug[data_no_pad ,:]
# y=y[data_no_pad]
# data=data.iloc[data_no_pad ,:]
# well=well[data_no_pad]
Explanation: Augment features
End of explanation
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
# Print splits
for s, split in enumerate(split_list):
print('Split %d' % s)
print(' training: %s' % (data.iloc[split['train']]['Well Name'].unique()))
print(' validation: %s' % (data.iloc[split['val']]['Well Name'].unique()))
Explanation: Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and to find a solution that generalizes well to new data. For this reason, we generate a set of training-validation splits so that:
- Samples from each well belong either to the training set or to the validation set, never to both.
- Training and validation sets contain at least one sample from each class.
Initialize model selection methods
End of explanation
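A quick optional check that no well leaks between training and validation in a given split could look like this (assuming split_list and data are defined as above):
s0 = split_list[0]
train_wells = set(data.iloc[s0['train']]['Well Name'])
val_wells = set(data.iloc[s0['val']]['Well Name'])
print(train_wells.intersection(val_wells))  # expected: an empty set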
# if clfType=='XB':
# md_grid = [2,3]
# # mcw_grid = [1]
# gamma_grid = [0.2, 0.3, 0.4]
# ss_grid = [0.7, 0.9, 0.5]
# csb_grid = [0.6,0.8,0.9]
# alpha_grid =[0.2, 0.4, 0.3]
# lr_grid = [0.04, 0.06, 0.05]
# ne_grid = [100,200,300]
# param_grid = []
# for N in md_grid:
# # for M in mcw_grid:
# for S in gamma_grid:
# for L in ss_grid:
# for K in csb_grid:
# for P in alpha_grid:
# for R in lr_grid:
# for E in ne_grid:
# param_grid.append({'maxdepth':N,
# # 'minchildweight':M,
# 'gamma':S,
# 'subsample':L,
# 'colsamplebytree':K,
# 'alpha':P,
# 'learningrate':R,
# 'n_estimators':E})
# if clfType=='XBA':
# learning_rate_grid=[0.12] #[0.06, 0.10, 0.12]
# max_depth_grid=[3] #[3, 5]
# min_child_weight_grid=[6] #[6, 8, 10]
# colsample_bytree_grid = [0.9] #[0.7, 0.9]
# n_estimators_grid=[120] #[80, 120, 150] #[150]
# param_grid = []
# for max_depth in max_depth_grid:
# for min_child_weight in min_child_weight_grid:
# for colsample_bytree in colsample_bytree_grid:
# for learning_rate in learning_rate_grid:
# for n_estimators in n_estimators_grid:
# param_grid.append({'maxdepth':max_depth,
# 'minchildweight':min_child_weight,
# 'colsamplebytree':colsample_bytree,
# 'learningrate':learning_rate,
# 'n_estimators':n_estimators})
# if clfType=='RF':
# N_grid = [50, 100, 150]
# M_grid = [5, 10, 15]
# S_grid = [10, 25, 50, 75]
# L_grid = [2, 3, 4, 5, 10, 25]
# param_grid = []
# for N in N_grid:
# for M in M_grid:
# for S in S_grid:
# for L in L_grid:
# param_grid.append({'N':N, 'M':M, 'S':S, 'L':L})
# if clfType=='GB':
# N_grid = [80] #[80, 100, 120]
# MD_grid = [5] #[3, 5]
# M_grid = [10]
# LR_grid = [0.12] #[0.1, 0.08, 0.12]
# L_grid = [3] #[3, 5, 7]
# S_grid = [25] #[20, 25, 30]
# param_grid = []
# for N in N_grid:
# for M in MD_grid:
# for M1 in M_grid:
# for S in LR_grid:
# for L in L_grid:
# for S1 in S_grid:
# param_grid.append({'N':N, 'MD':M, 'MF':M1,'LR':S,'L':L,'S1':S1})
def getClf(clfType, param):
if clfType=='RF':
clf = OneVsOneClassifier(RandomForestClassifier(n_estimators=param['N'], criterion='entropy',
max_features=param['M'], min_samples_split=param['S'], min_samples_leaf=param['L'],
class_weight='balanced', random_state=seed), n_jobs=-1)
if clfType=='XB':
clf = OneVsOneClassifier(XGBClassifier(
learning_rate = param['learningrate'],
n_estimators=param['n_estimators'],
max_depth=param['maxdepth'],
# min_child_weight=param['minchildweight'],
gamma = param['gamma'],
subsample=param['subsample'],
colsample_bytree=param['colsamplebytree'],
reg_alpha = param['alpha'],
nthread =4,
seed = seed,
) , n_jobs=4)
if clfType=='XBA':
clf = XGBClassifier(
learning_rate = param['learningrate'],
n_estimators=param['n_estimators'],
max_depth=param['maxdepth'],
min_child_weight=param['minchildweight'],
colsample_bytree=param['colsamplebytree'],
nthread = -1,
seed = param['seed']
)
if clfType=='GB':
clf=OneVsOneClassifier(GradientBoostingClassifier(
loss='exponential',
n_estimators=param['N'],
learning_rate=param['LR'],
max_depth=param['MD'],
max_features= param['MF'],
min_samples_leaf=param['L'],
min_samples_split=param['S1'],
random_state=seed,
max_leaf_nodes=None,)
, n_jobs=-1)
return clf
# # For each set of parameters
# score_param = []
# print('features: %d' % X_aug.shape[1])
# exportScores=[]
# for param in param_grid:
# print('features: %d' % X_aug.shape[1])
# # For each data split
# score_split = []
# split = split_list[5]
# split_train_no_pad = split['train']
# # Select training and validation data from current split
# X_tr = X_aug[split_train_no_pad, :]
# X_v = X_aug[split['val'], :]
# y_tr = y[split_train_no_pad]
# y_v = y[split['val']]
# # Select well labels for validation data
# well_v = well[split['val']]
# # Train and test
# y_v_hat = train_and_test(X_tr, y_tr, X_v, well_v, getClf(clfType,param))
# # Score
# score = f1_score(y_v, y_v_hat, average='micro')
# score_split.append(score)
# #print('Split: {0}, Score = {1:0.3f}'.format(split_list.index(split),score))
# #print('Split: , Score = {0:0.3f}'.format(score))
# # Average score for this param
# score_param.append(np.mean(score_split))
# print('Average F1 score = %.3f %s' % (score_param[-1], param))
# exportScores.append('Average F1 score = %.3f %s' % (score_param[-1], param))
# Best set of parameters
# best_idx = np.argmax(score_param)
# param_best = param_grid[best_idx]
# score_best = score_param[best_idx]
# print('\nBest F1 score = %.3f %s' % (score_best, param_best))
# # Store F1 scores for multiple param grids
# if len(exportScores)>1:
# exportScoresFile=open('results_{0}_{1}_sub01b.txt'.format(clfType,N_neig),'wb')
# exportScoresFile.write('features: %d' % X_aug.shape[1])
# for item in exportScores:
# exportScoresFile.write("%s\n" % item)
# exportScoresFile.write('\nBest F1 score = %.3f %s' % (score_best, param_best))
# exportScoresFile.close()
params = {'minchildweight': 6, 'colsamplebytree': 0.9, 'learningrate': 0.12, 'maxdepth': 3, 'n_estimators': 120}
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
y_pred = []
print('.' * 100)
for seed in range(100):
np.random.seed(seed)
# Make training data.
X_train, padded_rows = augment_features(X, well, depth, seed=seed+100, N_neig=N_neig, features=feature_indices)
y_train = y
X_train = np.delete(X_train, padded_rows, axis=0)
y_train = np.delete(y_train, padded_rows, axis=0)
params['seed'] = seed
# Train classifier
clf = getClf("XBA", params)
# Make blind data.
X_test, _ = augment_features(X_ts, well_ts, depth_ts, seed=seed+100, pe=False, N_neig=N_neig, features=feature_indices)
# Train and test.
y_ts_hat = train_and_test(X_train, y_train, X_test, well_ts, clf)
# Collect result.
y_pred.append(y_ts_hat)
print('|', end='')
np.save('esaTeam_100_realizations.npy', y_pred)
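# One possible way (not used in the original submission) to turn the saved realizations
# into a single prediction is a per-sample majority vote:
from scipy.stats import mode
y_pred_arr = np.array(y_pred)                      # shape: (n_realizations, n_test_samples)
y_majority = mode(y_pred_arr, axis=0)[0].ravel()   # most frequent facies per sample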
# ## Predict labels on test data
# Let us now apply the selected classification technique to the test data.
# Note: this cell relies on X_aug, clfType, param_best and N_neig being defined; in this
# version some of the cells that define them (e.g. the grid search above) are commented
# out, so they need to be run first.
# Training data
X_tr = X_aug
y_tr = y
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment Test data features
X_ts, padded_rows = augment_features(X_ts, well_ts,depth_ts,N_neig=N_neig, features=feature_indices)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts, getClf(clfType,param_best))
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('esa_predicted_facies_{0}_{1}_sub01c.csv'.format(clfType,N_neig))
# Plot predicted labels
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
test_data[test_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
mpl.rcParams.update(inline_rc)
Explanation: Classification parameters optimization
Let us perform the following steps for each set of parameters:
- Select a data split.
- Normalize the features using a robust scaler.
- Train the classifier on the training data.
- Test the trained classifier on the validation data.
- Repeat for all splits and average the F1 scores.
At the end of the loop, we select the parameter set that maximizes the average F1 score on the validation set; the resulting classifier should generalize well to new data.
End of explanation |
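The selection loop described above is commented out in this version of the notebook; a condensed sketch of it, assuming X_aug, y, well, split_list, clfType and param_grid have been defined by the cells above, would be:
score_param = []
for param in param_grid:
    split_scores = []
    for split in split_list:
        y_v_hat = train_and_test(X_aug[split['train']], y[split['train']],
                                 X_aug[split['val']], well[split['val']],
                                 getClf(clfType, param))
        split_scores.append(f1_score(y[split['val']], y_v_hat, average='micro'))
    score_param.append(np.mean(split_scores))
param_best = param_grid[int(np.argmax(score_param))]
print('Best F1 score = %.3f %s' % (max(score_param), param_best))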
9,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MÓDULO NumPy
Los módulos NumPy (Numerical Python) y SciPy proporcionan funciones y rutinas matemáticas para la manipulación de arrays y matrices de datos numéricos de una forma eficiente.
El módulo SciPy extiende la funcionalidad de NumPy con una colección de algoritmos matemáticos (minimización, transformada de Fourier, regresión, ...).
NumPy proporciona
Step1: La instrucción anterior permite utilizar las funciones del módulo NumPy con el nombre abreviado np en lugar de numpy.
Step2: Permite realizar operaciones sobre los arrays multidimensionales como si se tratara de operaciones sobre escalares
Step3: Propiedades de los objetos ndarray
Step4: <BR>
CREACIÓN DE ARRAYS
Existen varias formas para crear un array.
La forma más sencilla de crear un array es utilizando la función array y una lista de objetos, que pueden ser otros arrays
Step5: El módulo NumPy nos permite crear arrays de un cierto tipo, lo que garantizará la eficiencia de las operaciones que se vayan a realizar con los `mismos.
Step6: Acabamos de crear un array de 2 filas y 2 columnas de reales. El argumento dtype sirve para especificar la precisión.
Los elementos de un array suelen ser desconocidos, pero lo normal es que el tamaño de los datos sea conocido.
Las funciones zeros y ones permiten crear un array cuyo contenido son todo ceros y unos, respectivamente. Hay que indicar el tamaño de las dimensiones del array mediante una tupla o una lista.
Step7: Las funciones zeros_like y ones_like son análogas a zeros y ones pero no es necesario indicar las dimensiones, solo hay que indicar el array del cual queremos copiar las dimensiones.
Step8: Crear un array mediante una secuencia de números con cierto criterio. Utilizamos las funciones arange y linspace.
Step10: Como la función arange utiliza argumentos de tipo float, no es posible predecir el número de elementos del array. En ese caso es mejor utilizar la función linspace que genera un array con un número determinado de elementos sin indicar el paso.
Step11: Creación con datos aleatorios mediante la función rand del módulo Random. La función rand devuelve un número aleatorio procedente de una distribución uniforme en el intervalo [0,1).
Step12: <br>
OPERACIONES ENTRE ARRAYS Y ESCALARES
Los operadores aritméticos aplicados a arrays, se aplican elemento a elemento.
Para que la operación tenga éxito, los arrays implicados han de tener la misma dimensión.
El resultado es un nuevo array cuyos datos depende de la operación realizada.
Step13: En el caso de arrays multidimensionales, se sigue manteniendo que las operaciones se realizan elemento a elemento. Por ejemplo en el caso de dos dimensiones, el producto de dos arrays no se corresponde con la multiplicación de matrices según la conocemos.
Step14: El producto matricial se obtiene mediante el uso de la función dot o creando objetos de tipo matrix en lugar de array.
Como recordatorio tenemos esta imagen que nos dice como se calcula el producto de dos matrices
Step15: <br>
FUNCIONES UNIVERSALES
Se trata de funciones que actúan sobre cada uno de los elementos de un array.
Step16: <br>
UPCASTING
Cuando se opera con arrays de distinto tipo, el tipo del array resultante es el tipo con más precisión. Este comportamiento se conoce como upcasting.
Step17: <br>
ELEMENTOS DE UN ARRAY
Step18: Una diferencia importante con las listas, es que las particiones de un ndarray mediante la notación [inicio
Step19: Este comportamiento evita problemas de memoria. Hay que recordar que NumPy ha sido diseñado para manejar grandes cantidades de datos.
El acceso a los elementos de un array bidimensional, se realiza indicando los índices separados por una coma.
Step20: Para recorrer los elementos de un array podemos utilizar un bucle del tipo for. El siguiente ejemplo recorre las filas de la matriz b
Step21: Para acceder a los elementos uno por uno, podemos usar el atributo flat de los arrays
Step22: <br>
USO DE MÁSCARAS EN ARRAYS
Otra forma de acceso a partes de un ndarray es mediante un array de bool que actúa como máscara. Supongamos que queremos seleccionar la primera fila y la cuarta fila
Step23: <br>
CAMBIAR LA FORMA DE UN ARRAY
Métodos reshape, ravel y transpose(T).
A parte de la función reshape que permite redimensionar un array, nos puede interesar aplanar un array mediante la función ravel o transponer un array mediante la función transpose(T).
Step24: <br>
Funciones vstack y hstack
A partir de 2 arrays, es posible concatenarlos por alguna de las dimensiones (por filas o por columnas) mediante las funciones vstack y hstack
Step25: <br>
Copias y vistas de arrays
Cuando se manipula arrays, los datos pueden ser copiados en otro array (y se duplican los datos) o por el contrario, los arrays comparten datos aunque pueden ser accedidos mediante nombres diferentes. Veamos algunos ejemplos
Step26: La función view permite crear un array cuyos datos son compartidos con otro, pero cuya forma y acceso a los datos puede ser diferente
Step27: Si lo que queremos es hacer una copia completa de un array (copia de todos los datos), utilizaremos la función copy | Python Code:
import numpy as np
Explanation: MÓDULO NumPy
Los módulos NumPy (Numerical Python) y SciPy proporcionan funciones y rutinas matemáticas para la manipulación de arrays y matrices de datos numéricos de una forma eficiente.
El módulo SciPy extiende la funcionalidad de NumPy con una colección de algoritmos matemáticos (minimización, transformada de Fourier, regresión, ...).
NumPy proporciona:
El objeto ndarray: Un array multidimensional que permite realizar operaciones aritméticas sobre vectores muy eficientes.
Colección de funciones matemáticas muy eficientes que operan sobre vectores (ndarrays) sin necesidad de escribir bucles (for o while).
Son más eficientes y rápidas que las operaciones sobre listas.
<BR/>
LOS ARRAYS DE NUMPY: NDARRAY
¿Qué es un array?
En programación se denomina matriz, vector (de una sola dimensión) o formación (en inglés array) a una zona de almacenamiento contiguo que contiene una serie de elementos del mismo tipo. Desde el punto de vista lógico, una matriz se puede ver como un conjunto de elementos ordenados en fila (o filas y columnas si tuviera dos dimensiones).
<center>Matriz unidimensional de 10 elementos</center>
<br>
En principio, se puede considerar que todas las matrices son de una dimensión, la dimensión principal, pero los elementos de dicha fila pueden ser a su vez matrices (un proceso que puede ser recursivo), lo que nos permite hablar de la existencia de matrices multidimensionales, aunque las más fáciles de imaginar son los de una, dos y tres dimensiones.
Estas estructuras de datos son adecuadas para situaciones en las que el acceso a los datos se realice de forma aleatoria e impredecible. Por el contrario, si los elementos pueden estar ordenados y se va a utilizar acceso secuencial sería más adecuado utilizar una lista, ya que esta estructura puede cambiar de tamaño fácilmente durante la ejecución de un programa.
En NumPy el tipo fundamental es el array multidimensional: el objeto ndarray.
Los ndarrays son similares a las listas en Python, con la diferencia de que todos los elementos de un ndarray son del mismo tipo.
Para usar NumPy lo primero que debemos hacer es importarlo. Para importar el módulo NumPy debemos la siguiente instruccion:
End of explanation
# Para crear un array escribimos lo siguiente:
a = np.array([[1,2,3],[4,5,6], [7,8,9]])
a
Explanation: La instrucción anterior permite utilizar las funciones del módulo NumPy con el nombre abreviado np en lugar de numpy.
End of explanation
# Se puede, por ejemplo, multiplcar facilmente todos los elementos de un array por un número
a * 10
Explanation: Permite realizar operaciones sobre los arrays multidimensionales como si se tratara de operaciones sobre escalares:
End of explanation
# 'shape' --> Filas, Columnas
a.shape, a.dtype
Explanation: Propiedades de los objetos ndarray:
La propiedad shape que indica las dimensiones del array.
La propiedad dtype indica el tipo de los elementos almacenados en el array.
<br><br>
Tipos para dtype:
int8: Byte (-128 to 127)
int16: Integer (-32768 to 32767)
int32: Integer (-2147483648 to 2147483647)
int64: Integer (-9223372036854775808 to 9223372036854775807)
uint8: Unsigned integer (0 to 255)
uint16: Unsigned integer (0 to 65535)
uint32: Unsigned integer (0 to 4294967295)
uint64: Unsigned integer (0 to 18446744073709551615)
float16: Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
float32: Single precision float: sign bit, 8 bits exponent, 23 bits mantissa
float64: Double precision float: sign bit, 11 bits exponent, 52 bits mantissa
End of explanation
# Definimos el array 'a'
a = np.array( [2,3,4] )
# Definimos el array 'b'
b = np.array( [1.2, 3.5, 5] )
# Nos devuelve el tipo de datos almacenados en cada array
a.dtype,b.dtype
# Recuerda que con la función type() puedes saber el tipo de dato de la variable por la que preguntes
type(a)
Explanation: <BR>
CREACIÓN DE ARRAYS
Existen varias formas para crear un array.
La forma más sencilla de crear un array es utilizando la función array y una lista de objetos, que pueden ser otros arrays:
End of explanation
# Creación de un array de 2f, 2r (2 files, 2 rows) con formato float16
a = np.array([[1,2],[3,4]], dtype='float16')
print(a)
a.shape , a.dtype
Explanation: El módulo NumPy nos permite crear arrays de un cierto tipo, lo que garantizará la eficiencia de las operaciones que se vayan a realizar con los `mismos.
End of explanation
# Array de todo ceros, 1 dimensión y 10 elementos
a1 = np.zeros(10)
print(a1)
a1.dtype
# Array de 3 filas, 4 columnas
a2 = np.zeros((3,4))
print(a2)
a2.dtype
# Array de 3 filas, 4 columnas
# Se puede especificar el tipo de datos al crear el array
a3 = np.zeros((3,4), dtype='int32')
print(a3)
a3.dtype
Explanation: Acabamos de crear un array de 2 filas y 2 columnas de reales. El argumento dtype sirve para especificar la precisión.
Los elementos de un array suelen ser desconocidos, pero lo normal es que el tamaño de los datos sea conocido.
Las funciones zeros y ones permiten crear un array cuyo contenido son todo ceros y unos, respectivamente. Hay que indicar el tamaño de las dimensiones del array mediante una tupla o una lista.
End of explanation
# Creamos el array 'a4' a partir del array 'a'
print(a)
a4 = np.ones_like(a)
a4
Explanation: Las funciones zeros_like y ones_like son análogas a zeros y ones pero no es necesario indicar las dimensiones, solo hay que indicar el array del cual queremos copiar las dimensiones.
End of explanation
# Creamos el array 'a6' con números del 0 al 10(excluído) con pasos 0.5
# La función 'range' crea un objeto iterable de enteros. La función 'arange' crea un objeto de tipo ndarray.
# arange(incio, fin, salto)
a6 = np.arange(0, 10, .5)
print(a6)
a6.dtype, a.shape
c = np.arange(0 , 2 , .3)
print(c)
Explanation: Crear un array mediante una secuencia de números con cierto criterio. Utilizamos las funciones arange y linspace.
End of explanation
# Creamos un array de 10 números del 0 al 2
# linspace(inicio, fin, nº de elementos)
e = np.linspace( 0, 2, 10 )
print(e)
generación de números que se usa habitualmente en la evalución de funciones
# Importamos las librerias
import math as mt
import matplotlib.pyplot as plt
import numpy as np
%pylab inline
# Representación gráfica de la funcion seno(2*pi)
# Creamos un array de 100 elementos entre 0 y 2*pi
x = np.linspace( 0, 2*mt.pi, 100 )
y = np.sin(x)
plt.plot(x, y);
Explanation: Como la función arange utiliza argumentos de tipo float, no es posible predecir el número de elementos del array. En ese caso es mejor utilizar la función linspace que genera un array con un número determinado de elementos sin indicar el paso.
End of explanation
# Genera 10 números aleatorios entre el 0 y el 1
a1 = np.random.rand(10)
print (a1)
a1.dtype
# Representación gráfica de la función rand para 100 elementos
y = np.random.rand(100)
plt.plot(y);
# Genera un array de 3 filas y 4 columnas, con valores aleatorios [0,1)
a2 = np.random.rand(3, 4)
print(a2)
a2.dtype
Explanation: Creación con datos aleatorios mediante la función rand del módulo Random. La función rand devuelve un número aleatorio procedente de una distribución uniforme en el intervalo [0,1).
End of explanation
# Creamos dos arrays 'a' y 'b' de tipo entero(int) y una dimensión
a = np.array([1,2,3], int)
b = np.array([4,5,6], int)
# Operaciones entre arrays
rs = a + b
rr = a - b
rp = a * b
rd = b / a
rm = a % b
re = b ** a
# Operaciones entre arrays y escalares
rp_esc = a * 10
re_esc = b ** 2
print('Operaciones vectoriales')
print('-----------------------')
print("suma: ", rs)
print("producto: ", rp)
print("potencia: ", re)
print()
print('Operaciones escalares')
print('---------------------')
print(rp_esc)
Explanation: <br>
OPERACIONES ENTRE ARRAYS Y ESCALARES
Los operadores aritméticos aplicados a arrays, se aplican elemento a elemento.
Para que la operación tenga éxito, los arrays implicados han de tener la misma dimensión.
El resultado es un nuevo array cuyos datos depende de la operación realizada.
End of explanation
# Creamos dos arrays 'a' y 'b' de tipo real(float) y 2x2 dimensiones
a = np.array([[1,2], [3,4]], float)
b = np.array([[2,0], [1,3]], float)
print(a)
print(b)
# Las operaciones vectoriales en los arrays multidimensionales se ejecutan como en los unidimensionales
p = a * b
print(p)
Explanation: En el caso de arrays multidimensionales, se sigue manteniendo que las operaciones se realizan elemento a elemento. Por ejemplo en el caso de dos dimensiones, el producto de dos arrays no se corresponde con la multiplicación de matrices según la conocemos.
End of explanation
# Creamos dos arrays de 2x2
A = np.array( [[1,7],
[2,4]] )
B = np.array( [[3,3],
[5,2]] )
# Para multiplicar dos matrices (producto matricial), tal y como se hace en matemáticas se usa la función 'dot'
C = np.dot(A,B)
print(C)
Explanation: El producto matricial se obtiene mediante el uso de la función dot o creando objetos de tipo matrix en lugar de array.
Como recordatorio tenemos esta imagen que nos dice como se calcula el producto de dos matrices:
<br>
Para calcular el producto, de una manera mas visual, sería como sigue:<br>
1x3 + 7x5 1x3 + 7x2 = 38 17<br>
2x3 + 4x5 2x3 + 4x2 = 26 14<br>
End of explanation
# Creamos un array de 2x2 y de tipo float
a = np.array([[82.,-25], [12,-4]], float)
a
# Uso de la función 'abs()' que nos devuelve el valor absoluto (sin números negativos)
abs(a)
# Función que muestra los valores máximos al comparar los elementos de un array, columa por columna
np.maximum([2, 3, 4],
[1, 5, 2])
# Función que nos crea una matriz unitaria (donde todos los elementos de su diagonal son 1)
np.eye(4)
# Se pueden concatenar funciones de la siguiente manera:
a = np.random.rand(4)
np.maximum(np.eye(4), a)
# Compara ambas matrices y muestra true cuando se cumpla la condición >=
np.greater_equal(np.eye(4), a)
Explanation: <br>
FUNCIONES UNIVERSALES
Se trata de funciones que actúan sobre cada uno de los elementos de un array.
End of explanation
# Creamos dos arrays, uno con todo ceros de tipo entero (int32)
a = np.ones(3, dtype=int32)
# Creamos un array de 3 elementos entre 0 y pi, tipo float64
b = np.linspace(0, np.pi, 3)
print( "a : ", a)
print( "b : ", b)
print( "Tipo de a: " , a.dtype)
print( "Tipo de b: " , b.dtype)
# El resultado será un array de tipo float64, pues es capaz de representar números con mayor exactitud que int32
# (sin tener en cuenta que además int32 NO es capaz de representar números decimales)
c = a + b
print("c :", c)
print("Tipo de c:", c.dtype)
Explanation: <br>
UPCASTING
Cuando se opera con arrays de distinto tipo, el tipo del array resultante es el tipo con más precisión. Este comportamiento se conoce como upcasting.
End of explanation
# Creamos un array de 10 elementos [0, 10)
a = np.arange(10)**3
a
# Muestra la posición '2' dentro del array
a[2]
# Acceso al rango de elementos [2,5) (excluye la posición 5)
a[2:5]
Explanation: <br>
ELEMENTOS DE UN ARRAY: ACCESO Y RECORRIDO
Cuando trabajamos con arrays de una dimensión, el acceso a los elementos se realiza de forma similar a como se hace en el caso de listas o tuplas de elementos.
End of explanation
# No se define un nuevo array, solo una vista de a
# 'b' son los 5 primeros elementos de 'a'
b = a[0:5]
# Para todos los elementos de 'b' su valor será ahora 0
b[::] = 0
# Al presentar el valor de 'a' comprobamos que se han modificado los 5 primeros elementos
a
Explanation: Una diferencia importante con las listas, es que las particiones de un ndarray mediante la notación [inicio:fin:paso] son vistas del array original. Todos los cambios realizados en las vistas, se reflejan en el array original:
End of explanation
m = np.array([[2,4,6],[1,2,3]], dtype = 'int')
m
# Acceso al elemento de la fila 0, columna 2
m[0, 2]
# Acceso a los elementos de arrays multidimensionales
# array[elementos_fila, elementos_columna]
b = np.array([[ 0, 1, 2, 3],
[10, 11, 12, 13],
[20, 21, 22, 23],
[30, 31, 32, 33],
[40, 41, 42, 43]])
#Acceso a todos los elementos de la segunda columna
b[:, 1]
# Acceso a los todos los elementos de las filas 2 y 3 de todas las columnas
b[1:3, :]
Explanation: Este comportamiento evita problemas de memoria. Hay que recordar que NumPy ha sido diseñado para manejar grandes cantidades de datos.
El acceso a los elementos de un array bidimensional, se realiza indicando los índices separados por una coma.
End of explanation
for row in b:
print("fila: " , row)
Explanation: Para recorrer los elementos de un array podemos utilizar un bucle del tipo for. El siguiente ejemplo recorre las filas de la matriz b:
End of explanation
for elem in b.flat:
print(elem)
Explanation: Para acceder a los elementos uno por uno, podemos usar el atributo flat de los arrays:
End of explanation
b
# Creamos un array/máscara para aplicar al array 'b', el array/máscara es de tipo booleano
# En este caso cuando aparece 'True' muestra los valores de dicha linea del array, cuando es 'False' no los muestra
# Debería mostrar las líneas 0 y 3
mascara = np.array([True, False, False, True, False])
b[mascara]
# Se puede definir la máscara como algo más complejo
# Elementos de una matriz que cumplen una cierta propiedad comparativa
# Creamos un array 'A' de 2x2
A = np.array([[22,0], [1,10]], np.float)
print(A)
# Creamos un array 'B' copia del anterior en el que se mostrará 'True' o 'False' si cumple la condición
B = A < 7.
B
# El resultado es una matriz booleana
# Ahora podemos poner a 0 todos los valores menores que 7.
B[ B < 7 ] = 0
B
Explanation: <br>
USO DE MÁSCARAS EN ARRAYS
Otra forma de acceso a partes de un ndarray es mediante un array de bool que actúa como máscara. Supongamos que queremos seleccionar la primera fila y la cuarta fila:
End of explanation
# Creamos un array de 2x2, de tipo entero
m = np.array([[2,4,6],[1,2,3]], dtype = 'int')
m
# Uso de la función ravel() para aplanar el array
a = m.ravel()
a
# Uso de la función transpose, para transponer un array
t = m.T
t
Explanation: <br>
CAMBIAR LA FORMA DE UN ARRAY
Métodos reshape, ravel y transpose(T).
A parte de la función reshape que permite redimensionar un array, nos puede interesar aplanar un array mediante la función ravel o transponer un array mediante la función transpose(T).
End of explanation
# Creamos dos arrays random de 2x2 mediante la función floor()
# Esta función devuelve el entero más grande no mayor que el parámetro de entrada.
# El suelo de la escalar x es el mayor integer i, tal que i <= x
# Siendo en el ejemplo de abajo el escalar x = 10, todo valor generado por random deberá ser menor o igual a 10
a = np.floor(10*np.random.random((2,2)))
b = np.floor(10*np.random.random((2,2)))
print(a)
print(b)
# Uso de vstack para unir dos arrays por columnas
c = np.vstack((a,b))
c
# Uso de vstack para unir dos arrays por filas
d = np.hstack((a,b))
d
Explanation: <br>
Funciones vstack y hstack
A partir de 2 arrays, es posible concatenarlos por alguna de las dimensiones (por filas o por columnas) mediante las funciones vstack y hstack:
End of explanation
# Creamos un array de 12 elementos, [0,11)
a = np.arange(12)
a
# No se crea un nuevo array. Ambos nombres apuntan al mismo objeto
b = a
a, b
# Si modificamos el array 'b' modificaremos el array 'a'
b.shape = (4,3)
print(b)
print(a)
Explanation: <br>
Copias y vistas de arrays
Cuando se manipula arrays, los datos pueden ser copiados en otro array (y se duplican los datos) o por el contrario, los arrays comparten datos aunque pueden ser accedidos mediante nombres diferentes. Veamos algunos ejemplos:
End of explanation
# Creamos un array 'a' de 12 elementos entre [0,11)
a = np.arange(12)
# Creamos una vista del array 'a' en 'c'
c = a.view()
# Al modificar las propiedades de 'c' no se modifica 'a'
c.shape = (4,3)
print(a)
print(c)
# Sin embargo al modificar el contenido del array 'c', si se modifica el array origen 'a'
c[0:3] = 0
print(c)
print(a)
Explanation: La función view permite crear un array cuyos datos son compartidos con otro, pero cuya forma y acceso a los datos puede ser diferente:
End of explanation
# Creamos un array de 6 elementos
a = np.arange(6)
print('a =',a)
# Creamos una copia de 'a' en 'd'
d = a.copy()
print('d =',d)
# Modificiamos los valores de 'd' y estos no modifican los de 'a' al ser arrays diferentes
d[0] = 9999
print('a =',a)
print('d =',d)
Explanation: Si lo que queremos es hacer una copia completa de un array (copia de todos los datos), utilizaremos la función copy:
End of explanation |
9,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 6</font>
Download
Step1: Retornando Dados no MongoDB com PyMongo | Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 6</font>
Download: http://github.com/dsacademybr
End of explanation
# Importamos o Módulo PyMongo
import pymongo
# Criando a conexão com o MongoDB (neste caso, conexão padrão)
client_con = pymongo.MongoClient()
# Listando os bancos de dados disponíveis
# client_con.database_names()
client_con.list_database_names()
# Definindo o objeto db
db = client_con.cadastrodb
# Listando as coleções disponíveis
# db.collection_names()
db.list_collection_names()
# Criando uma coleção
db.create_collection("mycollection")
# Listando as coleções disponíveis
# db.collection_names()
db.list_collection_names()
# Inserindo um documento na coleção criada
db.mycollection.insert_one({
'titulo': 'MongoDB com Python',
'descricao': 'MongoDB é um Banco de Dados NoSQL',
'by': 'Data Science Academy',
'url': 'http://www.datascienceacademy.com.br',
'tags': ['mongodb', 'database', 'NoSQL'],
'likes': 100
})
# Retornando o documento criado
db.mycollection.find_one()
# Preparando um documento
doc1 = {"Nome":"Donald","sobrenome":"Trump","twitter":"@POTUS"}
# Inserindo um documento
db.mycollection.insert_one(doc1)
# Preparando um documento
doc2 = {"Site":"http://www.datascienceacademy.com.br",
"facebook":"facebook.com/dsacademybr"}
# Inserindo um documento
db.mycollection.insert_one(doc2)
# Retornando os documentos na coleção
for rec in db.mycollection.find():
print(rec)
# Conectando a uma coleção
col = db["mycollection"]
type(col)
# Contando os documentos em uma coleção
# col.count()
col.estimated_document_count()
# Encontrando um único documento
redoc = col.find_one()
redoc
Explanation: Retornando Dados no MongoDB com PyMongo
End of explanation |
9,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examles of error propigation
Examples are taken from http
Step1: This version uses prior distributions to do all the work. H and h are both informative priors that then drive the solution to the right answer.
Step2: Example 2
Example
Step3: Example 3
Example
Step4: Example 4
Example
Step5: Example 5
For example, suppose Ann and Billy both measure the speed of a moving ball. Ann measures 3.6±0.2 m/s
and Billy gets 3.3 ± 0.3 m/s. Do the two measurements agree?
D = 0.3 ± 0.4 m/s so 0 is in the range, they do agree.
Step6: Acceleration due to graving example
$g=\frac{2h}{t^2}$ | Python Code:
import numpy as np
import pymc3 as pm
import seaborn as sns
import arviz as ar
sns.set(font_scale=1.5)
%matplotlib inline
Explanation: Examles of error propigation
Examples are taken from http://ipl.physics.harvard.edu/wp-uploads/2013/03/PS3_Error_Propagation_sp13.pdf and used on MCMC to show how the answers work
Example 1
Example: suppose you measure the height H of a door and get 2.00 ± 0.03 m. This means that
H = 2.00 m and δH = 0.03 m. The door has a knob which is a height h = 0.88 ± 0.04 m from the bottom
of the door. Then the distance from the doorknob to the top of the door is Q = H − h = 1.12 m. What
is the uncertainty in Q?
Q = 1.12 ± 0.05 m
End of explanation
with pm.Model() as model:
H = pm.Normal('H', 2.00, sigma=0.03)
h = pm.Normal('h', 0.88, sigma=0.04)
Q = pm.Deterministic('Q', H-h)
trace = pm.sample(10000)
with model:
print(pm.summary(trace).round(3))
with model:
pm.traceplot(trace, combined=False)
print("MCMC gives {:.2f} +/- {:.2f}, analytic gives {} +/- {}".format(trace['Q'].mean(),
trace['Q'].std(), 1.12, 0.05))
with model:
pm.plot_posterior(trace)
Explanation: This version uses prior distributions to do all the work. H and h are both informative priors that then drive the solution to the right answer.
End of explanation
with pm.Model() as model:
d = pm.Normal('d', 123, tau=(3)**-2)
t = pm.Normal('t', 20.0, tau=(1.2)**-2)
v = pm.Deterministic('v', d/t)
trace = pm.sample(40000, chains=4)
with model:
print(pm.summary(trace).round(3))
with model:
pm.traceplot(trace, combined=False, lines=[('d', {}, 123), ('t', {}, 20), ('v', {}, 6)])
print("MCMC gives {0:.2f}, analytic gives {1}".format(trace['v'].std(), 0.39))
Explanation: Example 2
Example: a bird flies a distance d = 120 ± 3 m during a time t = 20.0 ± 1.2 s. The average speed of
the bird is v = d/t = 6 m/s. What is the uncertainty of v?
0.39 m/s.
End of explanation
with pm.Model() as model:
T = pm.Normal('T', 0.20, tau=(0.01)**-2)
pm.Deterministic('1/T', 1/T)
trace = pm.sample(10000, tune=1000)
pm.traceplot(trace, combined=False)
pm.summary(trace).round(3)
print("MCMC gives {0:.1f} +/- {1:.1f}, analytic gives {2} +/- {3}".format(np.mean(trace['1/T']),
np.std(trace['1/T']),
5.0, 0.3))
Explanation: Example 3
Example: the period of an oscillation is measured to be T = 0.20 ± 0.01 s. Thus the frequency is
f = 1/T = 5 Hz. What is the uncertainty in f? Answer: the percent uncertainty in T was 0.01/0.20 = 5%.
Thus the percent uncertainty in f is also 5%, which means that δf = 0.25 Hz. So f = 5.0 ± 0.3 Hz (after
rounding).
f = 5.0 ± 0.3 Hz
End of explanation
with pm.Model() as model:
g = 9.80
t = pm.Normal('t', 0.60, tau=(0.06)**-2)
v0 = pm.Normal('v0', 4.0, tau=(0.2)**-2)
h = pm.Deterministic('h', v0*t - 0.5*g*t**2)
trace = pm.sample(10000)
pm.traceplot(trace, combined=False)
pm.summary(trace).round(3)
print("MCMC gives {0:.1f} +/- {1:.1f}, analytic gives {2} +/- {3}".format(np.mean(trace['h']),
np.std(trace['h']),
0.6, 0.4))
Explanation: Example 4
Example: a ball is tossed straight up into the air with initial speed v0 = 4.0 ± 0.2 m/s. After a time
t = 0.60±0.06 s, the height of the ball is y = v0t−
1
2
gt2 = 0.636 m. What is the uncertainty of y? Assume
g = 9.80 m/s2
(no uncertainty in g).
Thus y would be properly reported as 0.6 ± 0.4 m.
End of explanation
with pm.Model() as model:
A = pm.Normal('A', 3.6, tau=(0.2)**-2)
B = pm.Normal('B', 3.3, tau=(0.3)**-2)
D = pm.Deterministic('D', A-B)
trace = pm.sample(1000, chains=6)
pm.summary(trace).round(3)
pm.traceplot(trace, combined=False);
print("MCMC gives {0:.1f} +/- {1:.1f}, analytic gives {2} +/- {3}".format(np.mean(trace['D']),
np.std(trace['D']),
0.3, 0.4))
Explanation: Example 5
For example, suppose Ann and Billy both measure the speed of a moving ball. Ann measures 3.6±0.2 m/s
and Billy gets 3.3 ± 0.3 m/s. Do the two measurements agree?
D = 0.3 ± 0.4 m/s so 0 is in the range, they do agree.
End of explanation
data = [.22, .23, .21, .22]
with pm.Model() as model:
h = pm.Normal('h', 1.0, sigma=0.01)
t = pm.Normal('t', 2.2, sigma=1, observed=data)
g = pm.Deterministic('g', 2*h/t**2)
trace = pm.sample(10000)
with model:
pm.plot_posterior(trace, com==)
pm.plot_posterior?
Explanation: Acceleration due to graving example
$g=\frac{2h}{t^2}$
End of explanation |
9,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy and Matplotlib Tutorial
Step1: NumPy Arrays
Creation
Step2: Create arrays with array, ones, zeros, empty.
Create 1D-arrays with arange, linspace, logspace.
Create arrays/matrices with the same shape as another with zeros_like, ones_like and empty_like.
Get the shape of an array with a.shape (not a function!)
Access
Step3: Arrays can be accessed like lists (index, slicing).
Slices can be written to!
Can only contain objects of the same type.
Can not change the shape.
Array Math
Step4: Math operations and functions in NumPy all work elementwise with arrays.
Own functions can be written to support both numbers and numpy arrays!
More NumPy functions
NumPy contains various functions to work with arrays (sorting, FFT, ...)
Matplotlib
Simple Plots
Step5: plot to plot numpy arrays.
Have a look at the matplotlib gallery to see how to style the plots.
Subplots
Step6: Subplots can be created.
The number gives the plot grid and the number of the active subplot.
Log and Linear Scales | Python Code:
from __future__ import print_function
from numpy import *
from matplotlib.pylab import *
%pylab --no-import-all inline
Explanation: NumPy and Matplotlib Tutorial
End of explanation
a1 = array([1.0, 2.0, 3.0])
a2 = arange(1.0, 5.0, 0.5)
a3 = linspace(1.0, 10.0, 17)
print(a1)
print(a2)
print(a3)
m1 = array([[1.0, 2.0],
[3.0, 4.0]])
print(m1)
a4 = ones(10)
m2 = zeros((5,5))
a5 = empty(10)
print(a4)
print(m2)
print(a5)
a7 = ones_like(a1)
m2 = zeros_like(m1)
# zeros_like, empty_like
print(a7)
print(m2)
print(a7.shape)
print(m1.shape)
Explanation: NumPy Arrays
Creation
End of explanation
print(a1[1])
print(a2[1:7])
print(a3[2::3])
print(a2)
a2[3] = 7.0
print(a2)
a2[3:5] = 2.0
print(a2)
m3 = zeros((10,10))
m3[5,:] = arange(0.0, 10.0)
print(m3)
m3[2:4,:] = 42.0
print(m3)
m3[3:9, 5:7] = 23.0
print(m3)
m4 = ones((10,10))
m4[ [2,7],: ] = 42.0
m4[ :, [3,4] ] = 42.0
print(m4)
m4[5,:] = 23.0
print(m4)
# swap lines!
m4[ [5,2],: ] = m4[ [2,5],: ]
print(m4)
Explanation: Create arrays with array, ones, zeros, empty.
Create 1D-arrays with arange, linspace, logspace.
Create arrays/matrices with the same shape as another with zeros_like, ones_like and empty_like.
Get the shape of an array with a.shape (not a function!)
Access
End of explanation
a1 = ones((10,10))
a2 = a1 + a1
print(a2)
a3 = 2*a2 - a1/2.
print(a3)
print(sin(2.0))
xs = linspace(0.0, 2*pi, 10)
print(sin(xs))
def f(x):
return x**2 + 1.0
print(f(2.0))
print(f(2*ones(10)))
Explanation: Arrays can be accessed like lists (index, slicing).
Slices can be written to!
Can only contain objects of the same type.
Can not change the shape.
Array Math
End of explanation
xs = linspace(0.0, 2*pi, 100)
plot(sin(xs))
figure()
plot(xs, sin(xs))
# define size of the figure
#rcParams['figure.figsize'] = 10, 7
plot(xs, sin(xs), 'o-', label="sin(x)")
plot(xs, cos(xs), 'rx--', label="cos(x)")
xlabel("x")
ylabel("f(x)")
legend()
Explanation: Math operations and functions in NumPy all work elementwise with arrays.
Own functions can be written to support both numbers and numpy arrays!
More NumPy functions
NumPy contains various functions to work with arrays (sorting, FFT, ...)
Matplotlib
Simple Plots
End of explanation
subplot(221)
plot(xs, sin(xs))
subplot(222)
plot(xs, cos(xs))
subplot(223)
plot(xs, exp(xs))
subplot(224)
plot(xs, log(xs))
Explanation: plot to plot numpy arrays.
Have a look at the matplotlib gallery to see how to style the plots.
Subplots
End of explanation
xs = linspace(1.0, 10.0, 100)
subplot(221)
plot(xs)
plot(xs**2)
plot(0.001*exp(xs))
subplot(222)
loglog(xs)
loglog(xs**2)
loglog(exp(xs))
subplot(223)
semilogy(xs)
semilogy(xs**2)
semilogy(0.001*exp(xs))
subplot(224)
semilogx(xs)
semilogx(xs**2)
semilogx(0.001*exp(xs))
Explanation: Subplots can be created.
The number gives the plot grid and the number of the active subplot.
Log and Linear Scales
End of explanation |
9,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure 1. Sketch of a cell (top left) with the horizontal (red) and vertical (green) velocity nodes and the cell-centered node (blue). Definition of the normal vector to "surface" (segment) $S_{i+\frac{1}{2},j}$ and $S_{i,j+\frac{1}{2}}$ (top right). Sketch of uniform grid (bottom).
<h1>Derivation of 1D Transport Equation</h1>
<h2>1D Transport Without Diffusion</h2>
Consider a small control surface (cell) of dimensions $\Delta x\times\Delta y$ within which, we know the velocities on the surfaces $u_{i\pm\frac{1}{2},j}$ and $v_{i,j\pm\frac{1}{2}}$ and a quantity $\phi_{i,j}$ at the center of the cell. This quantity may be temperature, or the concentration of chemical specie. The variation in time of $\phi$ within the cell is equal to the amount of $\phi$ that is flowing in and out of the cell through the boundaries of cell. The velocity vector is defined as
$$
\vec{u}=u\vec{e}_x+v\vec{e}_y
$$
The fluxes of $\phi$ across the right-hand-side and left-hand-side vertical boundaries are, respectively
Step1: The first two lines deal with the ability to show your graphs (generated via matplotlib) within this notebook, the remaining two lines import matplotlib's sublibrary pyplot as <FONT FACE="courier" style="color
Step2: <h3 style="color
Step3: A slower but easier to understand version of this function is shown below. The tag slow is explained shortly after.
Step4: <h3>Step 3
Step5: The choice for the interpolation is obvious
Step6: <h3>Step 4
Step7: Although the plot suggests that the interpolation works, a visual proof can be deceptive. It is best to calculate the error between the exact and interpolated solution. Here we use an $l^2$-norm
Step8: For reasons that will become clearer later, we want to consider other interpolation schemes
Step9: <h3 style="color
Step10: <h3>Step 5
Step11: The discretization of the time derivative is crude. A better discretization is the 2<sup>nd</sup>-order Runge-Kutta | Python Code:
%matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # not sure what this does, may be default images to svg format
import matplotlib.pyplot as plt #calls the plotting library hereafter referred as to plt
import numpy as np
Explanation: Figure 1. Sketch of a cell (top left) with the horizontal (red) and vertical (green) velocity nodes and the cell-centered node (blue). Definition of the normal vector to "surface" (segment) $S_{i+\frac{1}{2},j}$ and $S_{i,j+\frac{1}{2}}$ (top right). Sketch of uniform grid (bottom).
<h1>Derivation of 1D Transport Equation</h1>
<h2>1D Transport Without Diffusion</h2>
Consider a small control surface (cell) of dimensions $\Delta x\times\Delta y$ within which, we know the velocities on the surfaces $u_{i\pm\frac{1}{2},j}$ and $v_{i,j\pm\frac{1}{2}}$ and a quantity $\phi_{i,j}$ at the center of the cell. This quantity may be temperature, or the concentration of chemical specie. The variation in time of $\phi$ within the cell is equal to the amount of $\phi$ that is flowing in and out of the cell through the boundaries of cell. The velocity vector is defined as
$$
\vec{u}=u\vec{e}_x+v\vec{e}_y
$$
The fluxes of $\phi$ across the right-hand-side and left-hand-side vertical boundaries are, respectively:
$$
\int_{S_{i+1/2,j}}\phi(\vec{u}{i+\frac{1}{2},j}\cdot\vec{n}{i+\frac{1}{2},j})dy\text{ and }\int_{S_{i-1/2,j}}\phi(\vec{u}{i-\frac{1}{2},j}\cdot\vec{n}{i+\frac{1}{2},j})dy
$$
In the configuration depicted in Figure 1, the mass or heat variation is equal to the flux of $\phi$ entering the cell minus the flux exiting the cell, or:
$$
-\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j}\Delta y + \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}\Delta y \text{, when $\Delta y\rightarrow 0$}
$$
Assuming that there is no vertical velocity ($v=0$), this sum is equal to the variation of $\phi$ within the cell,
$$
\frac{\partial}{\partial t}\iint_{V_{i,j}}\phi dxdy\approx\frac{\partial \phi_{i,j}}{\partial t}\Delta x\Delta y \text{, when $\Delta x\rightarrow 0$ and $\Delta y\rightarrow 0$}
$$
yielding
$$
\frac{\partial \phi_{i,j}}{\partial t}\Delta x\Delta y=-\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j}\Delta y + \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}\Delta y\;,
$$
reducing to
$$
\frac{\partial \phi_{i,j}}{\partial t}=-\frac{\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j} - \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}}{\Delta x}\;.
$$
In the limit of $\Delta x\rightarrow 0$, we obtain the conservative form of the pure advection equation:
<p class='alert alert-danger'>
$$
\frac{\partial \phi}{\partial t}+\frac{\partial u\phi}{\partial x}=0
$$
</p>
<h2>1.2 Coding the Pure Advection Equation</h2>
The following takes you through the steps to solve numerically the pure advection equation with python. The boundary conditions are (all variables are non-dimensional):
<ol>
<li> Length of the domain: $0\leq x\leq L$ and $L=8\pi$ </li>
<li> Constant velocity $u_0=1$
<li> Inlet $x=0$ and outlet $x=L$: zero-flux variation (in space)</li>
<li> Initial condition:
$$\phi(x,t=0)=\begin{cases}
1+\cos\left(x-\frac{L}{2}\right)&,\text{ for }\left\vert x-\frac{L}{2}\right\vert\leq\pi\\
0&,\text{ for }\left\vert x-\frac{L}{2}\right\vert>\pi
\end{cases}
$$
</li>
</ol>
Here you will <b>discretize</b> your domain in $N$ small control volumes, such that the size of each control volume is
<p class='alert alert-danger'>
$$
\Delta x = \frac{L}{N}
$$
</p>
You will simulate the system defined so far of a time $T$, to be decided, discretized by small time-steps
<p class='alert alert-danger'>
$$
\Delta t = \frac{T}{N_t}
$$
</p>
We adopt the following index convention:
<ul>
<li> Each cell is labeled by a unique integer $i$ with $i\in[0,N-1]$. This is a python convention that vector and matrices start with index 0, instead of 1 for matlab.</li>
<li> A variable defined at the center of cell $i$ is noted with the subscript $i$: $\phi_i$.</li>
<li> A variable defined at the surface of cell $i$ is noted with the subscript $i\pm1/2$: $\phi_{i\pm 1/2}$</li>
<li> The solution $\phi(x_i,t_n)$, where
$$
x_i = i\Delta x\text{ with $x\in[0,N-1]$, and }t_n=n\Delta t\text{ with $n\in[0,N_t]$,}
$$</li>
is noted $\phi_i^n$.
</ul>
At first we will try to solve the advection equation with the following discretization:
$$
\frac{\phi_i^{n+1}-\phi_i^n}{\Delta t}=-\frac{\phi_{i+\frac{1}{2}}u_{i+\frac{1}{2}} - \phi_{i-\frac{1}{2}}u_{i-\frac{1}{2}}}{\Delta x}
$$
or
<p class='alert alert-info'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(\phi^n_{i+\frac{1}{2}}u_{i+\frac{1}{2}} - \phi^n_{i-\frac{1}{2}}u_{i-\frac{1}{2}}\right)
$$
</p>
The velocity $u$ is constant, therefore defined anywhere in the system (cell center or cell surfaces), however $\phi$ is defined only at the cell center, requiring an interpolation at the cell surface $i\pm 1/2$. For now you will consider a mid-point interpolation:
<p class='alert alert-info'>
$$
\phi^n_{i+\frac{1}{2}} = \frac{\phi^n_{i+1}+\phi^n_i}{2}
$$
</p>
Lastly, our governing equation can be recast with the flux of $\phi$ across the surface $u$:
<p class='alert alert-info'>
$$
F^n_{i\pm\frac{1}{2}}=\phi^n_{i\pm\frac{1}{2}}u_{i\pm\frac{1}{2}}=\frac{\phi^n_{i\pm 1}+\phi^n_i}{2}u_{i\pm\frac{1}{2}}
$$
</p>
yielding the equation you will attempt to solve:
<p class='alert alert-danger'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}\right)
$$
</p>
<h3> Step 1: Import libraries</h3>
Python has a huge collection of libraries contained functions to plot, build matrices, performed mathematical operations, etc. To avoid overloading the CPU and to allow you to choose the best library for your code, you need to first import the libraries you will need, here:
<ul>
<li> <FONT FACE="courier" style="color:blue">matplotlib </FONT>: <a href="http://matplotlib.org">http://matplotlib.org</a> for examples of plots you can make in python.</li>
<li><FONT FACE="courier" style="color:blue">numpy </FONT>: <a href="http://docs.scipy.org/doc/numpy/user/index.html">http://docs.scipy.org/doc/numpy/user/index.html</a> Library for operations on matrices and vectors.</li>
</ul>
Loading a libray in python is done by the command <FONT FACE="courier" style="color:blue">import</FONT>. The best practice is to take the habit to use
<FONT FACE="courier" style="color:blue">import [library] as [library_nickname]</FONT>
For example, the library <FONT FACE="courier" style="color:blue">numpy</FONT> contains vector and matrices operations such <FONT FACE="courier" style="color:blue">zeros</FONT>, which allocate memory for a vector or a matrix of specified dimensions and set all components of the vector and matrix to zero. If you import numpy as np,
<FONT FACE="courier" style="color:blue">import numpy as np</FONT>
the allocation of memory for matrix A of dimensions n and m becomes
<FONT FACE="courier" style="color:blue">A = np.zeros((n,m))</FONT>
The following is a standard initialization for the python codes you will write in this course:
End of explanation
L = 8*np.pi
N = 200
dx = L/N
u_0 = 1.
phi = np.zeros(N)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
Explanation: The first two lines deal with the ability to show your graphs (generated via matplotlib) within this notebook, the remaining two lines import matplotlib's sublibrary pyplot as <FONT FACE="courier" style="color:blue">plt</FONT> and numpy as <FONT FACE="courier" style="color:blue">np</FONT>.
<h3>Step 2: Initialization of variables and allocations of memory</h3>
The first real coding task is to define your variables, with the exception of the time-related variables (you will understand why). Note that in our equation, we can store $\phi^n$ into one variable providing that we create a flux variable $F$.
<h3 style="color:red"> Q1: Explain why.</h3>
End of explanation
def init_simulation(x_phi,N):
phi = np.zeros(N)
phi = 1.+np.cos(x_phi-L/2.)
xmask = np.where(np.abs(x_phi-L/2.) > np.pi)
phi[xmask] = 0.
return phi
phi = init_simulation(x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: <h3 style="color:red"> Q2: Search numpy function linspace and describe what <FONT FACE="courier">x_phi</FONT> and <FONT FACE="courier">x_u</FONT> define. Why are the dimensions different?</h3>
<h3>Step 3: Initialization</h3>
Now we define a function to initialize our variables. In python, <b>indentation matters!</b> A function is defined by the command <FONT FACE="courier">def</FONT> followed by the name of the function and the argument given to the function. The variables passed as argument in the function are local, meaning they may or may not have the same names as the variables in the core code. Any other variable used within the function needs to be defined in the function or before.
Note that python accepts implicit loops. Here <FONT FACE="courier">phi</FONT> and <FONT FACE="courier">x_phi</FONT> are two vectors of dimension $N$.
End of explanation
def init_simulation_slow(u,phi,x_phi,N):
for i in range(N):
if (np.abs(x_phi[i]-L/2.) > np.pi):
phi[i] = 0.
else:
phi[i] = 1.+np.cos(x_phi[i]-L/2.)
return phi
phi = init_simulation_slow(u,phi,x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: A slower but easier to understand version of this function is shown below. The tag slow is explained shortly after.
End of explanation
%%timeit
flux0 = np.zeros(N+1)
for i in range(1,N):
flux0[i] = 0.5*(phi[i-1]+phi[i])*u[i]
%%timeit
flux1 = np.zeros(N+1)
flux1[1:N] = 0.5*(phi[0:N-1]+phi[1:N])*u[1:N]
Explanation: <h3>Step 3: Code your interpolation/derivativation subroutine</h3>
Before we can simulate our system, we need to write and test our spatial interpolation and derivative procedure. Below we test the speed of two approaches, The first uses a for loop, whereas the second using the rules of indexing in python.
End of explanation
def compute_flux(a,v,N):
f=np.zeros(N+1)
f[1:N] = 0.5*(a[0:N-1]+a[1:N])*v[1:N]
f[0] = f[1]
f[N] = f[N-1]
return f
Explanation: The choice for the interpolation is obvious:
End of explanation
F_exact = np.zeros(N+1)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
plt.plot(x_u,F_exact,lw=2,label="exact")
plt.plot(x_u,F,'r--',lw=2,label="interpolated")
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.show()
Explanation: <h3>Step 4: Verification</h3>
The interpolation and derivation operations are critical components of the simulation that must be verified. Since the velocity is unity, $F_{i\pm1/2}=\phi_{i\pm1/2}$.
End of explanation
N = 200
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error = np.sqrt(np.sum(np.power(F-F_exact,2)))
errorx = np.power(F-F_exact,2)
plt.plot(x_u,errorx)
plt.show()
print('error norm L 2= %1.4e' %error)
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros(Nerror)
order = np.zeros(Nerror)
for ierror in range(Nerror):
N = Narray[ierror]
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error[ierror] = np.linalg.norm(F-F_exact)
#error[ierror] = np.sqrt(np.sum(np.power(F-F_exact,2)))
print('error norm L 2= %1.4e' %error[ierror])
order = 0.1*delta**(2)
plt.loglog(delta,error,lw=2,label='interpolate')
plt.loglog(delta,order,lw=2,label='$\propto\Delta x^2$')
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.show()
Explanation: Although the plot suggests that the interpolation works, a visual proof can be deceptive. It is best to calculate the error between the exact and interpolated solution. Here we use an $l^2$-norm:
$$
\Vert F\Vert_2=\sqrt{\sum_{i=0}^{N}\left(F_i-F_i^e\right)^2}
$$
where $F_e$ is the exact solution for the flux.
End of explanation
Nscheme = 4
Scheme = np.array(['CS','US1','US2','US3'])
g_1 = np.array([1./2.,0.,0.,3./8.])
g_2 = np.array([0.,0.,1./2.,1./8.])
def compute_flux_advanced(a,v,N,num_scheme):
imask = np.where(Scheme == num_scheme)
g1 = g_1[imask]
g2 = g_2[imask]
f=np.zeros(N+1)
f[2:N] = ((1.-g1+g2)*a[1:N-1]+g1*a[2:N]-g2*a[0:N-2])*v[2:N]
if (num_scheme == 'US2') or (num_scheme == 'US3'):
f[1] = ((1.-g1)*a[0]+g1*a[1])*v[1]
f[0] = f[1]
f[N] = f[N-1]
return f
table = ListTable()
table.append(['Scheme', '$g_1$', '$g_2$'])
for i in range(4):
table.append([Scheme[i],g_1[i], g_2[i]])
table
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros((Nerror,Nscheme))
order = np.zeros((Nerror,Nscheme))
for ischeme in range(Nscheme):
num_scheme = Scheme[ischeme]
for ierror in range(Nerror):
N = Narray[ierror]
dx = L/N
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux_advanced(phi,u,N,num_scheme)
error[ierror,ischeme] = np.linalg.norm(F-F_exact)
#print('error norm L 2= %1.4e' %error[ierror,ischeme])
for ischeme in range(Nscheme):
plt.loglog(delta,error[:,ischeme],lw=2,label=Scheme[ischeme])
order = 1.0*(delta/delta[0])
plt.loglog(delta,order,'k:',lw=2,label='$\propto\Delta x$')
order = 1.0*(delta/delta[0])**(2)
plt.loglog(delta,order,'k-',lw=2,label='$\propto\Delta x^2$')
order = 1.0*(delta/delta[0])**(3)
plt.loglog(delta,order,'k--',lw=2,label='$\propto\Delta x^3$')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.xlim(L/300,L/9.)
plt.ylim(1e-5,1e2)
plt.show()
Explanation: For reasons that will become clearer later, we want to consider other interpolation schemes:
$$
\phi_{i+\frac{1}{2}}=g_1\phi_{i+1}-g_2\phi_{i-1}+(1-g_1+g_2)\phi_i
$$
The scheme CS is the interpolation scheme we have used so far. Let us test them all; to do so, however, we have to modify the interpolation function.
End of explanation
def flux_divergence(f,N,dx):
df = np.zeros(N)
df[0:N] = (f[1:N+1]-f[0:N])/dx
return df
Explanation: <h3 style="color:red">Q3: What do you observe? </h3>
<h3 style="color:red">Q4: Write a code to verify the divergence subroutine. </h3>
End of explanation
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'US2'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi -= dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: <h3>Step 5: Writing the simulation code</h3>
The first code solves:
<p class='alert alert-info'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}\right)
$$
</p>
for whatever scheme you choose. Play with the different schemes. Consider that the analytical solution is:
$$
\phi(x,t)=\begin{cases}
1+\cos\left[x-\left(\frac{L}{2}+u_0t\right)\right]&,\text{ for }\left\vert x-\left(\frac{L}{2}+u_0t\right)\right\vert\leq\pi\\
0&,\text{ for }\left\vert x-\left(\frac{L}{2}+u_0t\right)\right\vert>\pi
\end{cases}
$$
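This analytical profile can be coded directly for comparison with the simulated result (a small sketch of mine; it assumes the grid arrays, $L$ and $u_0$ defined in this notebook, and it ignores any periodic wrap-around of the hump):
def phi_exact(x, t):
    # the initial hump advected by u_0*t
    xc = L/2. + u_0*t
    phi_e = 1. + np.cos(x - xc)
    phi_e[np.abs(x - xc) > np.pi] = 0.
    return phi_e
Adding plt.plot(x_phi, phi_exact(x_phi, t)) to the comparison plots makes the numerical diffusion and dispersion of each scheme visible at a glance.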
End of explanation
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'CS'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phiold = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
rk_coef = np.array([0.5,1.])
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
phiold = phi
for irk in range(2):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi = phiold-rk_coef[irk]*dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: The discretization of the time derivative is crude. A better discretization is the 2<sup>nd</sup>-order Runge-Kutta:
<p class='alert alert-info'>
\begin{eqnarray}
\phi_i^{n+1/2}&=&\phi_i^n-\frac{\Delta t}{2}\frac{F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}}{\Delta x}\\
\phi_i^{n+1}&=&\phi_i^n-\Delta t\frac{F^{n+1/2}_{i+\frac{1}{2}} - F^{n+1/2}_{i-\frac{1}{2}}}{\Delta x}
\end{eqnarray}
</p>
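The rk_coef loop in the code above implements exactly these two stages. Written out explicitly (a sketch that reuses compute_flux_advanced and flux_divergence from this notebook), one Runge-Kutta update reads:
def rk2_step(phi, u, N, dx, dt, num_scheme):
    # predictor: half step with the flux evaluated at time n
    flux = compute_flux_advanced(phi, u, N, num_scheme)
    phi_half = phi - 0.5*dt*flux_divergence(flux, N, dx)
    # corrector: full step with the flux evaluated at the midpoint
    flux = compute_flux_advanced(phi_half, u, N, num_scheme)
    return phi - dt*flux_divergence(flux, N, dx)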
End of explanation |
9,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create an example dataframe
Step2: List unique values | Python Code:
# Import modules
import pandas as pd
# Set ipython's max row display
pd.set_option('display.max_row', 1000)
# Set iPython's max column width to 50
pd.set_option('display.max_columns', 50)
Explanation: Title: List Unique Values In A Pandas Column
Slug: pandas_list_unique_values_in_column
Summary: List Unique Values In A Pandas Column
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Special thanks to Bob Haffner for pointing out a better way of doing it.
Preliminaries
End of explanation
# Create an example dataframe
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'])
df
Explanation: Create an example dataframe
End of explanation
#List unique values in the df['name'] column
df.name.unique()
Explanation: List unique values
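A few closely related calls, added here as an aside (they assume the df created above): the bracket spelling also works when a column name clashes with a DataFrame attribute, and nunique/value_counts summarise the distinct values.
# equivalent spelling, plus counts of the distinct values
df['name'].unique()
df['name'].nunique()
df['name'].value_counts()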
End of explanation |
9,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load fake data into a pandas DataFrame. Use the dt column as the index for the DataFrame
Step1: Convert the type column to a category (similar to factor in R)
Step2: Plot the noise readings as a point plot
Step3: Plot the pump state changes as a line plot.
Step4: from notes found here | Python Code:
raw_data = {'dt': ['2017-01-15 00:06:08',
'2017-01-15 01:09:08',
'2017-01-16 02:07:08',
'2017-01-16 02:07:09',
'2017-01-16 03:04:08',
'2017-01-16 03:04:09',
'2017-01-15 01:06:08'],
'type': ['VOLT',
'VOLT',
'PUMP',
'PUMP',
'PUMP',
'PUMP',
'VOLT'],
'value': [22.4,
34.3,
0.,
1.,
1.,
0.,
34.3]}
df = pd.DataFrame(raw_data, index=raw_data['dt'], columns = ['type', 'value'])
df
Explanation: Load fake data into a pandas DataFrame. Use the dt column as the index for the DataFrame
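The dt strings are used as a plain object index here. If a genuine time axis is wanted for the plots below, one option (my addition, not in the original notebook) is to convert the index to a DatetimeIndex and sort it:
# optional: real timestamps make matplotlib scale the x-axis by time
df.index = pd.to_datetime(df.index)
df = df.sort_index()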
End of explanation
df.type = df.type.astype('category')
Explanation: Convert the type column to a category (similar to factor in R)
End of explanation
plt.figure()
df[df.type=='VOLT'].plot(rot=90,title='NoiseReading',style='o')
plt.savefig('DataFramePlotting01.png')  # save before show() so the exported PNG is not blank
plt.show()
Explanation: Plot the noise readings as a point plot
End of explanation
plt.figure()
df[df.type=='PUMP'].plot(rot=90,title='Pump State',style='-')
plt.savefig('DataFramePlotting02.png')
plt.show()
Explanation: Plot the pump state changes as a line plot.
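Because the PUMP rows encode on/off state changes, a step-style line arguably represents them better. A small variant of the call above (drawstyle is simply passed through to matplotlib):
df[df.type=='PUMP'].plot(rot=90, title='Pump State', drawstyle='steps-post')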
End of explanation
group = df.groupby(['type'])
group.plot()
plt.savefig('DataFramePlotting03.png')
plt.show()
fig, axs = plt.subplots(1,2,sharex=False)
group.get_group("PUMP").plot(ax=axs[0], y='value', rot=90,title='Pump State',style='-')
group.get_group("VOLT").plot(ax=axs[1], y='value', rot=90,title='Volt Noise',style='.')
plt.savefig('DataFramePlotting04.png')
plt.show()
Explanation: from notes found here:
http://matplotlib.org/examples/pylab_examples/subplots_demo.html
http://stackoverflow.com/questions/4270301/matplotlib-multiple-datasets-on-the-same-scatter-plot
http://stackoverflow.com/questions/21654635/scatter-plots-in-pandas-pyplot-how-to-plot-by-category
End of explanation |
9,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom observation models
While bayesloop provides a number of observation models like Poisson or AR1, many applications call for different distributions, possibly with some parameters set to fixed values (e.g. with a mean value set to zero). The sympy.stats and the scipy.stats modules include a large number of continuous as well as discrete probability distributions. The observation model classes SciPy and SymPy allow to create observation models to be used in bayesloop studies on-the-fly, just by passing the desired scipy.stats distribution (and setting values for fixed parameters, if necessary), or by providing a sympy.stats random variable, respectively. Note that these classes can only be used to model statistically independent observations.
In cases where neither scipy.stats nor sympy.stats provide the needed model, one can further define a custom observation model by stating a likelihood function in terms of arbitrary NumPy functions, using the NumPy class.
Sympy.stats random variables
The SymPy module introduces symbolic mathematics to Python. Its sub-module sympy.stats covers a wide range of discrete and continuous random variables. In the following, we re-define the observation model of the coal mining study S defined above, but this time use the sympy.stats version of the Poisson distribution
Step1: First, we specify the only parameter of the Poisson distribution (denoted $\lambda$) symbolically as a positive real number. Note that providing the keyword argument positive=True is important for SymPy to define the Poisson distribution correctly (not setting the keyword argument correctly results in an error). Having defined the parameter, a random variable based on the Poisson distribution is defined. This random variable is then passed to the SymPy class of the bayesloop observation models. Just as for the built-in observation models of bayesloop, one has to specify the parameter names and values (in this case, lambda is the only parameter).
Note that upon creating an instance of the observation model, bayesloop automatically determines the correct Jeffreys prior for the Poisson model
Step2: Finally, it is important to note that the SymPy module can also be used to create random variables for which some parameters have user-defined fixed values. The following example creates a normally distributed random variable with a fixed mean value $\mu = 4$, leaving only the standard deviation as a free parameter of the resulting observation model (which is assigned the parameter interval ]0, 3[)
Step3: In scipy.stats, the rate of events in one time interval of the Poisson distribution is called mu. Additionally, as a discrete distribution, stats.poisson has an additional parameter loc (which is not shown by .shapes attribute!). As we do not want to shift the distribution, we have to set this parameter to zero in bayesloop by passing a dictionary for fixed parameters when initializing the class instance. As for the SymPy model, we have to pass the names and values of all free parameters of the model (here only mu)
Step4: Comparing this result with the regime-switching example, we find that the model evidence value obtained using the scipy.stats implementation of the Poisson distribution is different from the value obtained using the built-in implementation or the sympy.stats version. The deviation is explained by a different prior distribution for the parameter $\lambda$. While both the built-in version and the sympy.stats version use the Jeffreys prior of the Poisson model, the scipy.stats implementation uses a flat prior instead. Since the scipy.stats module does not provide symbolic representations of probability distributions, bayesloop cannot determine the correct Jeffreys prior in this case. Custom priors are still possible, using the keyword argument prior.
NumPy likelihood functions
In some cases, the data at hand cannot be described by a common statistical distribution contained in either scipy.stats or sympy.stats. In the following example, we assume normally distributed data points with known standard deviation $\sigma$, but unknown mean $\mu$. Additionally, we suspect that the data points may be serially correlated and that the correlation coefficient $\rho$ possibly changes over time. For this multivariate problem with the known standard deviation as "extra" data points, we need more flexibility than either the SymPy or the SciPy class of bayesloop can offer. Instead, we may define the likelihood function of the observation model directly, with the help of NumPy functions.
First, we simulate $1000$ random variates with $\mu=3$, $\sigma=1$, and a linearly varying correlation coefficient $\rho$
Step5: Before we create an observation model to be used by bayesloop, we define a pure Python function that takes a segment of data as the first argument, and NumPy arrays with parameter grids as further arguments. Here, one data segment includes two subsequent data points x1 and x2, and their known standard deviations s1 and s2. The likelihood function we evaluate states the probability of observing the current data point x2, given the previous data point x1, the known standard deviations s2, s1 and the parameters $\mu$ and $\rho$
Step6: As bayesloop still needs to know about the parameter boundaries and discrete values of the parameters $\mu$ and $\rho$, we need to create an observation model from the custom likelihood function defined above. This can be done with the NumPy class
Step7: Before we can load the data into a Study instance, we have to format data segments in the order defined by the likelihood function
Step8: Finally, we create a new Study instance, load the formatted data, set the custom observation model, set a suitable transition model, and fit the model parameters
Step9: Plotting the true values of $\rho$ used in the simulation of the data together with the inferred distribution (and posterior mean values) below, we see that the custom model accurately infers the time-varying serial correlation in the data. | Python Code:
import bayesloop as bl
import numpy as np
import sympy.stats
from sympy import Symbol
rate = Symbol('lambda', positive=True)
poisson = sympy.stats.Poisson('poisson', rate)
L = bl.om.SymPy(poisson, 'lambda', bl.oint(0, 6, 1000))
Explanation: Custom observation models
While bayesloop provides a number of observation models like Poisson or AR1, many applications call for different distributions, possibly with some parameters set to fixed values (e.g. with a mean value set to zero). The sympy.stats and the scipy.stats modules include a large number of continuous as well as discrete probability distributions. The observation model classes SciPy and SymPy allow to create observation models to be used in bayesloop studies on-the-fly, just by passing the desired scipy.stats distribution (and setting values for fixed parameters, if necessary), or by providing a sympy.stats random variable, respectively. Note that these classes can only be used to model statistically independent observations.
In cases where neither scipy.stats nor sympy.stats provide the needed model, one can further define a custom observation model by stating a likelihood function in terms of arbitrary NumPy functions, using the NumPy class.
Sympy.stats random variables
The SymPy module introduces symbolic mathematics to Python. Its sub-module sympy.stats covers a wide range of discrete and continuous random variables. In the following, we re-define the observation model of the coal mining study S defined above, but this time use the sympy.stats version of the Poisson distribution:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt # plotting
import seaborn as sns # nicer plots
sns.set_style('whitegrid') # plot styling
S = bl.Study()
S.loadExampleData()
T = bl.tm.RegimeSwitch('log10pMin', -7)
S.set(L, T)
S.fit()
plt.figure(figsize=(8, 4))
plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5)
S.plot('lambda')
plt.xlim([1851, 1962])
plt.xlabel('year');
Explanation: First, we specify the only parameter of the Poisson distribution (denoted $\lambda$) symbolically as a positive real number. Note that providing the keyword argument positive=True is important for SymPy to define the Poisson distribution correctly (not setting the keyword argument correctly results in an error). Having defined the parameter, a random variable based on the Poisson distribution is defined. This random variable is then passed to the SymPy class of the bayesloop observation models. Just as for the built-in observation models of bayesloop, one has to specify the parameter names and values (in this case, lambda is the only parameter).
Note that upon creating an instance of the observation model, bayesloop automatically determines the correct Jeffreys prior for the Poisson model:
$$
p(\lambda) \propto 1/\sqrt{\lambda}
$$
This calculation is done symbolically and therefore represents an important advantage of using the SymPy module within bayesloop. This behavior can be turned off using the keyword argument determineJeffreysPrior, in case one wants to use a flat parameter prior instead or in the case that the automatic determination of the prior takes too long:
M = bl.om.SymPy(poisson, 'lambda', bl.oint(0, 6, 1000), determineJeffreysPrior=False)
Alternatively, you can of course provide a custom prior via the keyword argument prior. This will switch off the automatic determination of the Jeffreys prior as well:
M = bl.om.SymPy(poisson, 'lambda', bl.oint(0, 6, 1000), prior=lambda x: 1/x)
See also this tutorial for further information on prior distributions. Having defined the observation model, it can be used for any type of study introduced above. Here, we reproduce the result of the regime-switching example we discussed before. We find that the parameter distributions as well as the model evidence is identical - as expected:
End of explanation
import scipy.stats
scipy.stats.poisson.shapes
Explanation: Finally, it is important to note that the SymPy module can also be used to create random variables for which some parameters have user-defined fixed values. The following example creates a normally distributed random variable with a fixed mean value $\mu = 4$, leaving only the standard deviation as a free parameter of the resulting observation model (which is assigned the parameter interval ]0, 3[):
```
mu = 4
std = Symbol('stdev', positive=True)
normal = sympy.stats.Normal('normal', mu, std)
L = bl.om.SymPy(normal, 'stdev', bl.oint(0, 3, 1000))
```
Scipy.stats probability distributions
We continue by describing the use of probability distributions of the scipy.stats module. Before we show some usage examples, it is important to note here that scipy.stats does not use the canonical parameter names for probability distributions. Instead, all continuous distributions have two parameters denoted loc (for shifting the distribution) and scale (for scaling the distribution). Discrete distributions only support loc. While some distributions may have additional parameters, loc and scale often take the role of known parameters, like mean and standard deviation in case of the normal distribution. In scipy.stats, you do not have to set loc or scale, as they have default values loc=0 and scale=1. In bayesloop, however, you will have to provide values for these parameters, if you want either of them to be fixed and not treated as a variable.
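As a quick illustration of the loc/scale convention (independent of bayesloop; the numbers are arbitrary), a normal distribution with mean 4 and standard deviation 2 is written as:
```
import scipy.stats
dist = scipy.stats.norm(loc=4, scale=2)   # loc = mean, scale = standard deviation
print(dist.mean(), dist.std())            # 4.0 2.0
```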
As a first example, we re-define the observation model of the coal mining study S defined above, but this time use the scipy.stats version of the Poisson distribution. First, we check the parameter names:
End of explanation
L = bl.om.SciPy(scipy.stats.poisson, 'mu', bl.oint(0, 6, 1000), fixedParameters={'loc': 0})
S.set(L)
S.fit()
plt.figure(figsize=(8, 4))
plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5)
S.plot('mu')
plt.xlim([1851, 1962])
plt.xlabel('year');
Explanation: In scipy.stats, the rate of events in one time interval of the Poisson distribution is called mu. Additionally, as a discrete distribution, stats.poisson has an additional parameter loc (which is not shown by .shapes attribute!). As we do not want to shift the distribution, we have to set this parameter to zero in bayesloop by passing a dictionary for fixed parameters when initializing the class instance. As for the SymPy model, we have to pass the names and values of all free parameters of the model (here only mu):
End of explanation
n = 1000
# parameters
mean = 3
sigma = 1
rho = np.concatenate([np.linspace(-0.5, 0.9, 500), np.linspace(0.9, -0.5, 499)])
# covariance matrix
cov = np.diag(np.ones(n)*sigma**2.) + np.diag(np.ones(n-1)*rho*sigma**2., 1) + np.diag(np.ones(n-1)*rho*sigma**2., -1)
# random variates
np.random.seed(123456)
obs_data = np.random.multivariate_normal([mean]*n, cov)
plt.figure(figsize=(8, 4))
plt.plot(obs_data, c='r', alpha=0.7, lw=2)
plt.xlim([0, 1000])
plt.xlabel('time')
plt.ylabel('data');
Explanation: Comparing this result with the regime-switching example, we find that the model evidence value obtained using the scipy.stats implementation of the Poisson distribution is different from the value obtained using the built-in implementation or the sympy.stats version. The deviation is explained by a different prior distribution for the parameter $\lambda$. While both the built-in version and the sympy.stats version use the Jeffreys prior of the Poisson model, the scipy.stats implementation uses a flat prior instead. Since the scipy.stats module does not provide symbolic representations of probability distributions, bayesloop cannot determine the correct Jeffreys prior in this case. Custom priors are still possible, using the keyword argument prior.
NumPy likelihood functions
In some cases, the data at hand cannot be described by a common statistical distribution contained in either scipy.stats or sympy.stats. In the following example, we assume normally distributed data points with known standard deviation $\sigma$, but unknown mean $\mu$. Additionally, we suspect that the data points may be serially correlated and that the correlation coefficient $\rho$ possibly changes over time. For this multivariate problem with the known standard deviation as "extra" data points, we need more flexibility than either the SymPy or the SciPy class of bayesloop can offer. Instead, we may define the likelihood function of the observation model directly, with the help of NumPy functions.
First, we simulate $1000$ random variates with $\mu=3$, $\sigma=1$, and a linearly varying correlation coefficient $\rho$:
End of explanation
def likelihood(data, mu, rho):
x2, x1, s2, s1 = data
exponent = -(((x1-mu)*rho/s1)**2. - (2*rho*(x1-mu)*(x2-mu))/(s1*s2) + ((x2-mu)/s2)**2.) / (2*(1-rho**2.))
norm = np.sqrt(2*np.pi)*s2*np.sqrt(1-rho**2.)
like = np.exp(exponent)/norm
return like
Explanation: Before we create an observation model to be used by bayesloop, we define a pure Python function that takes a segment of data as the first argument, and NumPy arrays with parameter grids as further arguments. Here, one data segment includes two subsequent data points x1 and x2, and their known standard deviations s1 and s2. The likelihood function we evaluate states the probability of observing the current data point x2, given the previous data point x1, the known standard deviations s2, s1 and the parameters $\mu$ and $\rho$:
$$P(x_2~|~x_1, s_2, s_1, \mu, \rho) = \frac{P(x_2, x_1~|~s_2, s_1, \mu, \rho)}{P(x_1~|~s_1, \mu)}~,$$
where $P(x_2, x_1~|~s_2, s_1, \mu, \rho)$ denotes the bivariate normal distribution, and $P(x_1~|~s_1, \mu)$ is the marginal, univariate normal distribution of $x_1$. The resulting distribution is expressed as a Python function below. Note that all mathematical functions use NumPy functions, as the function needs to work with arrays as input arguments for the parameters:
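A quick sanity check of the likelihood function (my own addition, with arbitrary test values): for $\rho = 0$ the conditional likelihood must reduce to the univariate normal density of $x_2$:
```
import numpy as np
import scipy.stats
# data segment (x2, x1, s2, s1); with rho = 0 the previous point must not matter
val = likelihood((1.0, 2.5, 1.0, 1.0), 3.0, 0.0)
ref = scipy.stats.norm.pdf(1.0, loc=3.0, scale=1.0)
print(np.isclose(val, ref))   # True
```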
End of explanation
L = bl.om.NumPy(likelihood, 'mu', bl.cint(0, 6, 100), 'rho', bl.oint(-1, 1, 100))
Explanation: As bayesloop still needs to know about the parameter boundaries and discrete values of the parameters $\mu$ and $\rho$, we need to create an observation model from the custom likelihood function defined above. This can be done with the NumPy class:
End of explanation
data_segments = input_data = np.array([obs_data[1:], obs_data[:-1], [sigma]*(n-1), [sigma]*(n-1)]).T
Explanation: Before we can load the data into a Study instance, we have to format data segments in the order defined by the likelihood function:
[[x1, x0, s1, s0],
[x2, x1, s2, s1],
[x3, x2, s3, s2],
...]
Note that in this case, the standard deviation $\sigma = 1$ for all time steps.
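A minimal shape check of the formatted array (the numbers follow from n = 1000 used above):
```
print(data_segments.shape)   # (999, 4): one row per pair of consecutive observations
```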
End of explanation
S = bl.Study()
S.loadData(data_segments)
S.set(L)
T = bl.tm.GaussianRandomWalk('d_rho', 0.03, target='rho')
S.set(T)
S.fit()
Explanation: Finally, we create a new Study instance, load the formatted data, set the custom observation model, set a suitable transition model, and fit the model parameters:
End of explanation
plt.figure(figsize=(8, 4))
S.plot('rho', label='mean inferred')
plt.plot(rho, c='r', alpha=0.7, lw=2, label='true')
plt.legend()
plt.ylim([-.6, 1]);
Explanation: Plotting the true values of $\rho$ used in the simulation of the data together with the inferred distribution (and posterior mean values) below, we see that the custom model accurately infers the time-varying serial correlation in the data.
End of explanation |
9,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
튜플 활용
주요 내용
파이썬에 내장되어 있는 컬렉션 자료형 중에서 튜플에 대해 알아 본다.
튜플(tuples)
Step1: 튜플의 기본 활용
오늘의 주요 예제의 문제를 해결하려면, 문자열과 사전 자료형 이외의 튜플에 대해
알아 보아야 한다.
튜플은 순서쌍이라고도 불리며, 리스트와 99% 비슷한 용도를 가진다.
리스트와 다른 점은 튜플이 불변 자료형이라는 것 뿐이다.
물론, 튜플이 불변 자료형이기에 리스트 자료형이 갖고 있는 다양한 메소드를 갖지 않는다.
Step2: 튜플의 경우 인덱싱과 슬라이싱은 문자열 또는 리스트에서의 활용과 100% 동일
Step3: 튜플을 사용할 때 소괄호를 생략해도 된다. 하지만 기본적으로 소괄을 사용한다.
Step4: 튜플은 불변 자료형이다.
리스트와는 달리 인덱싱을 사용하여 튜플 특정 원소의 값을 변경할 수 없다.
Step5: 튜플 자료형 활용 예제 1
절대로 변경되지 않거나 변경되어서는 안되는 값들을 저장할 때 사용
예를 들어, 생년월일, 학과 전공 등등.
Step6: 튜플 자료형 활용 예제 2
여러 개의 변수들에 여러 개의 값들을 한 줄에 동시에 할당하기 위해 사용
Step7: 튜플을 이용하면 두 변수에 할당된 값을 스왑(swap)하는 것이 매우 간단하다.
Step8: 주의
Step9: 이제 아래와 같이 리턴값 각각의 항목에 변수를 할당하여 사용할 수 있다.
Step10: 불변성(immutability)대 가변성(mutability)
튜플과 문자열은 불변성 자료형이다.
즉, 이미 생성된 값들을 절대 수정할 수 없다.
예를 들어, 아래와 같이 튜플의 특정 항목을 대체하려거나
문자열의 일부를 다른 문자열로 대체하려는 시도는 오류를 발생시킨다.
Step11: 만약에 튜플의 특정 항목 또는 문자열의 일부를 다른 문자열로 대체하고 싶다면,
기존의 값들을 이용하여 새로운 튜플과 문자열을 생성해야 한다.
Step12: 주의
Step13: 비유해서 설명하면, 아파트가 문자열 또는 튜플 자료형이라면 아파트를 수선할 수는 없고, 대신에 기존 아파트를 부신 다음에 새로 원하는 대로 지어야 함을 의미한다.
튜플과 사전 활용 예제
튜플과 사전을 함께 활용하면 학생들의 신상정보 등을 저장할 때 유용하다.
예를 들어 '학생이름', '학번', '생일' 등을 키로 사용하고, 키값으로는 진짜 나이, 학번, 생일 등을 저장할 수 있다.
아래 코드는 특정 디렉토리에 저장된 학생들의 신상정보 파일을 모두 읽어들여서 활용하는 프로그램을 위한 함수들을 구현하고 있다.
Step15: 먼저 std_record_list 함수는 지정된 디렉토리에 포함된 모든 학생들의 신상정보 파일명을 읽어드린다.
glob 모듈의 glob 함수의 활용을 기억해 두면 좋다.
Step16: 위 함수를 활용하여, 'Sample_Data/Students_Record' 디렉토리에 있는 모든 파일들의 이름을 확인할 수 있다.
주의
Step17: date_of_birth 함수는 생년월일 정보를 (년, 월, 일) 형식으로 변경하는 함수이다.
Step18: record_getter 함수는 지정된 학생의 신상정보를 리스트에 담아 리턴한다.
리스트의 각각의 항목은 항목명과 항목내용으로 구성된 튜플들이다.
Step19: 예를 들어 Byun_Sato 학생의 신상정보가 아래와 같이 확인된다.
Step20: 이제 위 코드를 한 군데 모아서 아래와 같이 각각의 학생의 정보를 얻을 수 있다.
아래 코드는 세 번째 학생의 정보를 확인한다.
Step21: 정보를 확인할 때는 튜플보다 사전이 효율적이다.
위 코드는 학생들의 신상정보의 정리해서 잘 보여준다.
하지만 소속학과, 생년월일 등에 대한 구체적인 정보를 추출하는 일은 좀 번거롭다.
예를 들어, So Ritgun 학생의 소속학과를 확인하려면 다음과 같이 해야 한다.
Step22: 그런데 사전을 이용하면 보다 쉽게 할 수 있다.
먼저 So_data를 사전으로 만들어보자.
Step23: 그러면 소속학과 또는 좋아하는 색깔 등을 확인하는 일이 매우 쉽다.
Step24: 주의
Step25: 주의
Step26: 항목을 삭제하려면 del 함수 또는 pop() 메소드를 사용한다.
존재하지 않는 key를 이용할 경우 어떤 일이 일어나는지 확인하라.
Step27: 이제 사전 자료형을 이용하여 record_getter 함수를 수정하자.
Step28: 아래 코드에서 all_records 변수에는 모든 학생의 신상정보를 리스트로 담고 있다.
각 항목은 각 학생의 신상정보를 담은 사전 자료형이다.
Step29: 이런 식으로 예를 들어 두 번째 학생이 소속학과를 다음처럼 확인 가능하다.
Step30: 또는 첫 번째 학생의 이름을 확인한다. | Python Code:
from __future__ import print_function
Explanation: 튜플 활용
주요 내용
파이썬에 내장되어 있는 컬렉션 자료형 중에서 튜플에 대해 알아 본다.
튜플(tuples): 리스트와 비슷. 하지만 수정 불가능(immutable).
* 사용 형태: 소괄호 사용
even_numbers_tuple = (2, 4, 6, 8, 10)
todays_datatypes_tuple = ('list', 'tuple', 'dictionary')
특징: 임의의 자료형 값들을 섞어서 항목으로 사용 가능
mixed_tuple = (1, 'abs', [2.1, 4.5])
인덱스 또는 슬라이싱을 이용하여 각각의 항목에 또는 여러 개의 항목에 대한
정보를 활용할 수 있다. 사용법은 문자열의 경우와 동일.
튜플은 수정 불가능하다. 즉, 불변 자료형이다.
튜플 자료형은 불변 자료형이라서 메소드가 별로 없다.
많이 사용되는 두 개이다.
count(): 튜플에 포함된 특정 항목이 몇 번 나타나는지 세어 줌.
index(): 특정 항목의 인덱스가 몇 번인지 확인해 줌.
오늘의 주요 예제
Byun_Sato.txt 파일에는 변사또 학생의 개인 신상정보가 아래와 같이 들어 있다.
```
학생들의 중요 개인정보이며, 배포 금지함.
Name: Byun Sato
Date of Birth: 95.4.28
Email: [email protected]
Department: Computer
Student ID: 201700251003
```
파일의 내용을 읽어서 아래와 같은 형식으로 리턴하는 함수를 구현하고자 한다.
{'Date of Birth': (1995, 4, 28),
'Department': 'Computer',
'Email': '[email protected]',
'Name': 'Byun Sato',
'Student ID': '201700251003'}
End of explanation
t = (3, 50, "yellow")
print(t)
type(t)
l = [3, 50, "yellow"]
l
type(l)
Explanation: 튜플의 기본 활용
오늘의 주요 예제의 문제를 해결하려면, 문자열과 사전 자료형 이외의 튜플에 대해
알아 보아야 한다.
튜플은 순서쌍이라고도 불리며, 리스트와 99% 비슷한 용도를 가진다.
리스트와 다른 점은 튜플이 불변 자료형이라는 것 뿐이다.
물론, 튜플이 불변 자료형이기에 리스트 자료형이 갖고 있는 다양한 메소드를 갖지 않는다.
End of explanation
t[1]
t[-1]
t[:2]
t[: : 2]
Explanation: 튜플의 경우 인덱싱과 슬라이싱은 문자열 또는 리스트에서의 활용과 100% 동일
End of explanation
a = 10, 20, 30
type(a)
print(a)
Explanation: 튜플을 사용할 때 소괄호를 생략해도 된다. 하지만 기본적으로 소괄을 사용한다.
End of explanation
t[1] = 5
Explanation: 튜플은 불변 자료형이다.
리스트와는 달리 인덱싱을 사용하여 튜플 특정 원소의 값을 변경할 수 없다.
End of explanation
So_Ritgun_dob = (1996, 12, 16)
Explanation: 튜플 자료형 활용 예제 1
절대로 변경되지 않거나 변경되어서는 안되는 값들을 저장할 때 사용
예를 들어, 생년월일, 학과 전공 등등.
End of explanation
a, b = 1, 2
a
Explanation: 튜플 자료형 활용 예제 2
여러 개의 변수들에 여러 개의 값들을 한 줄에 동시에 할당하기 위해 사용
End of explanation
a, b = b, a
a
Explanation: 튜플을 이용하면 두 변수에 할당된 값을 스왑(swap)하는 것이 매우 간단하다.
End of explanation
def f(x):
return x**2, x**3
Explanation: 주의: C, C#, Java 등에서 앞서의 예제와 같은 스왑기능을 구현하려면 포인터를 사용해야 한다.
튜플 자료형 활용 예제 3
여러 개의 값들을 리턴하는 함수를 정의할 때 사용
함수의 리턴값은 무조건 하나이다.
예를 들어, 2를 입력 받아서 2의 제곱과 2의 세제곱을 동시에 리턴하는 함수는 정의할 수 없다.
하지만, 두 개의 값을 튜플로 묶어서 하나의 값으로 리턴할 수는 있다.
아래 함수는 입력받은 값의 제곱과 세제곱을 튜플로 묶어서 리턴한다.
주의: 소괄호 기호는 생략이 가능하다는 것에 주의한다.
End of explanation
a, b = f(2)
a
Explanation: 이제 아래와 같이 리턴값 각각의 항목에 변수를 할당하여 사용할 수 있다.
End of explanation
a = ('Hello', 'World')
a[0] = 'Hi'
a = ('Hello', 'World')
a[1][0] = 'w'
Explanation: 불변성(immutability)대 가변성(mutability)
튜플과 문자열은 불변성 자료형이다.
즉, 이미 생성된 값들을 절대 수정할 수 없다.
예를 들어, 아래와 같이 튜플의 특정 항목을 대체하려거나
문자열의 일부를 다른 문자열로 대체하려는 시도는 오류를 발생시킨다.
End of explanation
b = ('Hi', a[1])
b
b = ('Hi',) + (a[1],)
b
Explanation: 만약에 튜플의 특정 항목 또는 문자열의 일부를 다른 문자열로 대체하고 싶다면,
기존의 값들을 이용하여 새로운 튜플과 문자열을 생성해야 한다.
End of explanation
a = (a[0], 'w' + a[1][1:])
a
Explanation: 주의: 길이가 1인 튜플에는 반드시 콤마를 사용해야 한다.
그렇지 않으면 튜플로 간주하지 않는다.
End of explanation
import glob
import string
Explanation: 비유해서 설명하면, 아파트가 문자열 또는 튜플 자료형이라면 아파트를 수선할 수는 없고, 대신에 기존 아파트를 부신 다음에 새로 원하는 대로 지어야 함을 의미한다.
튜플과 사전 활용 예제
튜플과 사전을 함께 활용하면 학생들의 신상정보 등을 저장할 때 유용하다.
예를 들어 '학생이름', '학번', '생일' 등을 키로 사용하고, 키값으로는 진짜 나이, 학번, 생일 등을 저장할 수 있다.
아래 코드는 특정 디렉토리에 저장된 학생들의 신상정보 파일을 모두 읽어들여서 활용하는 프로그램을 위한 함수들을 구현하고 있다.
End of explanation
def std_record_list(dir):
지정된 디렉토리에 포함된 모든 학생들의 신상정보 파일명을 읽어드림.
입력값:
디렉토리 이름 - 문자열 이용.
리턴값:
학생들 신상정보 파일이름으로 구성된 리스트
files = glob.glob(dir + '/*.txt')
return sorted(files)
Explanation: 먼저 std_record_list 함수는 지정된 디렉토리에 포함된 모든 학생들의 신상정보 파일명을 읽어드린다.
glob 모듈의 glob 함수의 활용을 기억해 두면 좋다.
End of explanation
filenames = std_record_list('Sample_Data/Students_Records/')
filenames
Explanation: 위 함수를 활용하여, 'Sample_Data/Students_Record' 디렉토리에 있는 모든 파일들의 이름을 확인할 수 있다.
주의: glob() 함수의 리턴값은 해당 디렉토리에 저장된 파일을 임의의 순서대로 확인하여 리스트를 만든다. 따라서, 이름 순서대로 리스트를 얻기 위해서 sorted() 함수를 활용하였다.
End of explanation
def date_of_birth(date_birth):
'''
생년월일 정보를 (년, 월, 일) 형식으로 변경하는 함수
입력값:
* 생년월일 정보 문자열 - "년.월.일"
리턴값:
* 생년월일 정보 튜플 - (년, 월, 일)
'''
year, month, day = date_birth.split('.')
year = int(year) + 1900
month = int(month)
day = int(day)
ymd = (year, month, day)
return ymd
date_of_birth("2017.09.27")
Explanation: date_of_birth 함수는 생년월일 정보를 (년, 월, 일) 형식으로 변경하는 함수이다.
End of explanation
def record_getter(filename):
'''
지정된 학생의 신상정보를 리스트로 출력함.
각 항목은 항목명과 내용의 튜플로 구성됨
입력값:
파일명을 가리키는 경로
리턴값:
학생들의 신상정보의 각 항목을 담은 리스트
'''
std_data = []
a_file = open(filename, u"r")
for line in a_file.readlines():
if line[0] == '#' or line in string.whitespace:
continue
else:
item, value = line.split(':')
item = item.strip()
value = value.strip()
if item.strip() == 'Date of Birth':
value = date_of_birth(value)
std_data.append((item, value))
return std_data
Explanation: record_getter 함수는 지정된 학생의 신상정보를 리스트에 담아 리턴한다.
리스트의 각각의 항목은 항목명과 항목내용으로 구성된 튜플들이다.
End of explanation
record_getter('Sample_Data/Students_Records/Byun_Sato.txt')
Explanation: 예를 들어 Byun_Sato 학생의 신상정보가 아래와 같이 확인된다.
End of explanation
filenames = std_record_list('Sample_Data/Students_Records/')
So_data = record_getter(filenames[2])
So_data
Explanation: 이제 위 코드를 한 군데 모아서 아래와 같이 각각의 학생의 정보를 얻을 수 있다.
아래 코드는 세 번째 학생의 정보를 확인한다.
End of explanation
for i in range( len(So_data) ):
if So_data[i][0] == 'Department':
print("전공은", So_data[i][1], "입니다.")
break
Explanation: 정보를 확인할 때는 튜플보다 사전이 효율적이다.
위 코드는 학생들의 신상정보의 정리해서 잘 보여준다.
하지만 소속학과, 생년월일 등에 대한 구체적인 정보를 추출하는 일은 좀 번거롭다.
예를 들어, So Ritgun 학생의 소속학과를 확인하려면 다음과 같이 해야 한다.
End of explanation
So_data_dict = {}
for i in range( len(So_data) ):
So_data_dict[So_data[i][0]] = So_data[i][1]
So_data_dict
Explanation: 그런데 사전을 이용하면 보다 쉽게 할 수 있다.
먼저 So_data를 사전으로 만들어보자.
End of explanation
So_data_dict['Department']
So_data_dict['Email']
Explanation: 그러면 소속학과 또는 좋아하는 색깔 등을 확인하는 일이 매우 쉽다.
End of explanation
So_data_dict['Residence'] = 'Anseong'
So_data_dict
Explanation: 주의: 하나의 항목의 키값을 변경하거나 새로운 (키, 값) 항목을 추가하려면 아래 형식을 이용한다.
사전이름[키] = 키값
반면에 여러 항목을 사전에 추가하려면 update() 메소드를 이용한다.
End of explanation
So_data_dict.update({'Grade': '2', 'Semester': '2'})
So_data_dict
Explanation: 주의: 순서는 전혀 중요하지 않다.
End of explanation
del So_data_dict['Residence']
So_data_dict
print(So_data_dict.pop('Grade'))
print(So_data_dict.pop('Semester'))
So_data_dict
So_data_dict['Date of Birth']
So_data_dict['Name']
Explanation: 항목을 삭제하려면 del 함수 또는 pop() 메소드를 사용한다.
존재하지 않는 key를 이용할 경우 어떤 일이 일어나는지 확인하라.
End of explanation
def record_getter(filename):
'''
지정된 학생의 신상정보를 리스트로 출력함.
각 항목은 항목명과 내용의 튜플로 구성됨
입력값:
파일명을 가리키는 경로
리턴값:
학생들의 신상정보의 각 항목을 담은 사전 자료형
'''
std_data = {}
a_file = open(filename, u"r")
for line in a_file.readlines():
if line[0] == '#' or line in string.whitespace:
continue
else:
item, value = line.split(':')
item = item.strip()
value = value.strip()
if item.strip() == 'Date of Birth':
value = date_of_birth(value)
std_data[item] = value
return std_data
record_getter('Sample_Data/Students_Records/Byun_Sato.txt')
Explanation: 이제 사전 자료형을 이용하여 record_getter 함수를 수정하자.
End of explanation
filenames = std_record_list('Sample_Data/Students_Records/')
all_records = []
for file in filenames:
data = record_getter(file)
all_records.append(data)
all_records
Explanation: 아래 코드에서 all_records 변수에는 모든 학생의 신상정보를 리스트로 담고 있다.
각 항목은 각 학생의 신상정보를 담은 사전 자료형이다.
End of explanation
all_records[1]['Department']
Explanation: 이런 식으로 예를 들어 두 번째 학생이 소속학과를 다음처럼 확인 가능하다.
End of explanation
all_records[0]['Name']
Explanation: 또는 첫 번째 학생의 이름을 확인한다.
End of explanation |
9,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inbuilt Data Structures
Data Structures in a language determine the level of flexibility of using the language. If a Language has efficient, inbuilt data structures then the effort of the programmer is reduced. He does not have to code everything from the scratch. Furthermore, if it has user friendly syntax, it also makes the code more readable.
Python accounts for both readability and efficiency. It provides many inbuilt data structure classes that are suitable for day to day programming. In next 4 chapters, we will look at the details of lists,tuples,sets and dictionarys.
First, let's look at Zen of Python - Python Design Principles
Step1: Python is designed according to this philosophy. Now we shall examine basic data structures which comes handy in our journey of Python.
Lists
List is a mutable collection of elements(may be of same or different types), which is indexed by a 0-based integer. Lists are so much like C arrays. But the capability of Python lists called Slicing makes them more powerful.
Creating Lists
Creating an empty list
python
x = [] # [] denotes a list type
# or
x = list()
Creating list with some initial elements
python
x = [2,3,0,'g']
Step2: Accessing List elements
List elements can be accessed by 0-based integer index as in C. In addition to this, Negative indexes are also supported. If x is a list, x[-1] gives 1st element from the last, x[-2] gives second element from the last and so on...
Step3: Obtaining Partitions of the List - Slicing
One can extract a portion of a list, and modify the value of it. If x is a list, it is achieved by a statement in the form of
python
x[start
Step4: You have observed that slices return a list, which have the reference to original list. Hence modifying slice results the change in original array.
Deleting List elements by index - del
If the position of element to be deleted is known, it can be deleted by del statement
To delete the ith element of list x,
python
del x[i]
Step5: Using Operators on List
Step6: <div class="alert alert-info">
**Note**
`x + y` returns a new list that contains elements of `y` appended to `x`. This has no effect on original lists `x` and `y`
</div>
Step7: Operations on List
Unlike the Operators, operations performed on list can act directly on lists and may not return anything
Here are some of operations on list. They are member functions of class list. If x is a list,
x.append(elem) - adds a single element to the end of the list. It does not return the new list, just modifies the original list x.
x.insert(index, elem) - inserts the element at the given index, shifting elements to the right.
x.extend(list2) - adds the elements in list2 to the end of the list. Using + or += on a list is similar to using extend().
x.index(ele) - searches for the given element from the start of the list and returns its index. Throws a ValueError if the element does not appear (use in to check without a ValueError).
x.remove(elem) - searches for the first instance of the given element and removes it (throws ValueError if not present)
x.sort() - sorts the list in place (does not return it). (The sorted() function is preferred.)
x.reverse() - reverses the list in place (does not return it)
x.pop(index) - removes and returns the element at the given index. Returns the rightmost element if index is omitted (roughly the opposite of append()).
Step8: List elements can also be lists, which gives 2-D array like structure
Step9: <div class="alert alert-info">
**Note**
There is no rule that the length of each sublist in a list must be same
</div>
Obtaining length of list - len
Step10: Membership Operator in
in operator can be used to check the existance of an element in the list
Step11: Converting an iterator to list
Using yield keyword, one can create an iterator. Using list(), one can make a list of all values yielded by iterator | Python Code:
import this
Explanation: Inbuilt Data Structures
Data Structures in a language determine the level of flexibility of using the language. If a Language has efficient, inbuilt data structures then the effort of the programmer is reduced. He does not have to code everything from the scratch. Furthermore, if it has user friendly syntax, it also makes the code more readable.
Python accounts for both readability and efficiency. It provides many inbuilt data structure classes that are suitable for day to day programming. In next 4 chapters, we will look at the details of lists,tuples,sets and dictionarys.
First, let's look at Zen of Python - Python Design Principles
End of explanation
x = [1,2,4,5]
x
Explanation: Python is designed according to this philosophy. Now we shall examine basic data structures which comes handy in our journey of Python.
Lists
List is a mutable collection of elements(may be of same or different types), which is indexed by a 0-based integer. Lists are so much like C arrays. But the capability of Python lists called Slicing makes them more powerful.
Creating Lists
Creating an empty list
python
x = [] # [] denotes a list type
# or
x = list()
Creating list with some initial elements
python
x = [2,3,0,'g']
End of explanation
x[3]
x[-2]
Explanation: Accessing List elements
List elements can be accessed by 0-based integer index as in C. In addition to this, Negative indexes are also supported. If x is a list, x[-1] gives 1st element from the last, x[-2] gives second element from the last and so on...
End of explanation
x = [1,2,5,6,7,0,3]
x[1:3] # Access from x[1] to x[2]
x[2:5:2] # Access from x[2] to x[4] in the steps of 2
x[1:3] = [6] # They can modify original list
x # Look at modified list, 6 is replaced twice
x[::-1] # Access the array in reverse order
x[:] # Returns copy of list x
Explanation: Obtaining Partitions of the List - Slicing
One can extract a portion of a list, and modify the value of it. If x is a list, it is achieved by a statement in the form of
python
x[start:stop:step]
It returns elements of x from index start to the index stop (excluding stop) in the steps of step. These 3 arguments are not mandatory. If not specified start is set to 0, stop is set to length of list and step is set to 1
End of explanation
del x[2]
x
Explanation: You have observed that slices return a list, which have the reference to original list. Hence modifying slice results the change in original array.
Deleting List elements by index - del
If the position of element to be deleted is known, it can be deleted by del statement
To delete the ith element of list x,
python
del x[i]
End of explanation
x = [4,3,5,0,1]
y = [2,1,5,4,0]
x + y
Explanation: Using Operators on List
End of explanation
y * 2
Explanation: <div class="alert alert-info">
**Note**
`x + y` returns a new list that contains elements of `y` appended to `x`. This has no effect on original lists `x` and `y`
</div>
End of explanation
x = [0,3,7,2,1]
x.append(9)
x
x.insert(4,4)
x
x.extend([8,7,6])
x
x.remove(6)
x
x.sort()
x
x.reverse()
x
x.pop()
x.pop(0)
x
sorted(x)
Explanation: Operations on List
Unlike the Operators, operations performed on list can act directly on lists and may not return anything
Here are some of operations on list. They are member functions of class list. If x is a list,
x.append(elem) - adds a single element to the end of the list. It does not return the new list, just modifies the original list x.
x.insert(index, elem) - inserts the element at the given index, shifting elements to the right.
x.extend(list2) - adds the elements in list2 to the end of the list. Using + or += on a list is similar to using extend().
x.index(ele) - searches for the given element from the start of the list and returns its index. Throws a ValueError if the element does not appear (use in to check without a ValueError).
x.remove(elem) - searches for the first instance of the given element and removes it (throws ValueError if not present)
x.sort() - sorts the list in place (does not return it). (The sorted() function is preferred.)
x.reverse() - reverses the list in place (does not return it)
x.pop(index) - removes and returns the element at the given index. Returns the rightmost element if index is omitted (roughly the opposite of append()).
End of explanation
x = [[2,3,4],
[1,2,2],
[2,3,4]]
x[1]
x[2][1]
Explanation: List elements can also be lists, which gives 2-D array like structure
End of explanation
x = [1,2,3]
len(x)
x = [[2,3,4],2,['a','v']]
len(x)
Explanation: <div class="alert alert-info">
**Note**
There is no rule that the length of each sublist in a list must be same
</div>
Obtaining length of list - len
End of explanation
x = [1,2,3,0,5,4]
4 in x
10 in x
Explanation: Membership Operator in
in operator can be used to check the existance of an element in the list
End of explanation
list(range(10))
Explanation: Converting an iterator to list
Using yield keyword, one can create an iterator. Using list(), one can make a list of all values yielded by iterator
End of explanation |
9,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code Repositories
The notebook contains problems oriented around building a basic Python code repository and making it public via Github. Of course there are other places to put code repositories, with complexity ranging from services comparable to github to simple hosting a git server on your local machine. But this focuses on git and github as a ready-to-use example with plenty of additional resources to be found online.
Note that these problems assume you are using the Anaconda Python distribution. This is particular useful for these problems because it makes it very easy to install testing packages in virtual environments quickly and with little wasted disk space. If you are not using anaconda, you can either use an alternative virtual environment scheme (e.g. pyenv or virtualenv), or just install pacakges directly into your default python (and hope for the best...).
For git interaction, this notebook also uses the git command line tools directly. There are a variety of GUI tools that make working with git more visually intuitive (e.g. SourceTree, gitkraken, or the github desktop client), but this notebook uses the command line tools as the lowest common denominator. You are welcome to try to reproduce the steps with your client, however - feel free to ask your neighbors or instructors if you run into trouble there.
As a final note, this notebook's examples assume you are using a system with a unix-like shell (e.g. macOS, Linux, or Windows with git-bash or the Linux subsystem shell).
Original by E Tollerud 2017 for LSSTC DSFP Session3 and AstroHackWeek, modified by B Sipocz
Problem 0
Step1: 0b
Step2: 0c
Step3: 0d
Step4: Final note
Step5: If you want to test-run your code
Step6: 1b
Step7: 1c
Step8: The -u is a convenience that means from then on you can use just git push and git pull to send your code to and from github.
1e
Step9: Now add it to the repository via git commit, and push up to github...
Step10: 1f
Step11: Problem 2
Step12: 2c
Step13: 2c
Step14: and push it up (to a branch on your github fork).
Step15: 2d
Step16: Hopefully they are now satisfied and are willing to hit the merge button.
2g
Step17: Now if you look at the local repo, it should include your changes.
Suggestion You mauy want to change the "origin" remote to your username. E.g. git remote rename origin <yourusername>. To go further, you might even delete your fork's master branch, so that only your neighbor's master exists. That might save you headaches in the long run if you were to ever access this repo again in the future.
2h
Step18: 3b
Step19: 3c
Step20: Now the following should work.
Step21: BUT you will probably get an error here. That's because Python is smart about imports
Step22: 3d
Step23: 3e
Step24: To test that it built sucessfully, the easiest thing to do is cd into the build/lib.X-Y-Z directory ("X-Y-Z" here is OS and machine-specific). Then you should be able to import <yourpkgname>. It's usually best to do this as a completely independent process in python. That way you can be sure you aren't accidentally using an old import as we saw above.
Step25: 3f
Step26: Now we can try running the package from anywhere (not just the source code directory), as long as we're in the same environment that we installed the package in.
Step27: 3g
Step28: Problem 4
Step29: 4b
Step30: Verify that there is a <yourpkg>-<version>.tar.gz file in the dist directory. It should have all of the source code necessary for your package.
4c
Step31: 4d | Python Code:
! #complete
! #complete
Explanation: Code Repositories
The notebook contains problems oriented around building a basic Python code repository and making it public via Github. Of course there are other places to put code repositories, with complexity ranging from services comparable to github to simple hosting a git server on your local machine. But this focuses on git and github as a ready-to-use example with plenty of additional resources to be found online.
Note that these problems assume you are using the Anaconda Python distribution. This is particular useful for these problems because it makes it very easy to install testing packages in virtual environments quickly and with little wasted disk space. If you are not using anaconda, you can either use an alternative virtual environment scheme (e.g. pyenv or virtualenv), or just install pacakges directly into your default python (and hope for the best...).
For git interaction, this notebook also uses the git command line tools directly. There are a variety of GUI tools that make working with git more visually intuitive (e.g. SourceTree, gitkraken, or the github desktop client), but this notebook uses the command line tools as the lowest common denominator. You are welcome to try to reproduce the steps with your client, however - feel free to ask your neighbors or instructors if you run into trouble there.
As a final note, this notebook's examples assume you are using a system with a unix-like shell (e.g. macOS, Linux, or Windows with git-bash or the Linux subsystem shell).
Original by E Tollerud 2017 for LSSTC DSFP Session3 and AstroHackWeek, modified by B Sipocz
Problem 0: Using Jupyter as a shell
As an initial step before diving into code repositories, it's important to understand how you can use Jupyter as a shell. Most of the steps in this notebook require interaction with the system that's easier done with a shell or editor rather than using Python code in a notebook. While this could be done by opening up a terminal beside this notebook, to keep most of your work in the notebook itself, you can use the capabilities Jupyter + IPython offer for shell interaction.
0a: Figure out your base shell path and what's in it
The critical trick here is the ! magic in IPython. Anything after a leading ! in IPython gets run by the shell instead of as python code. Run the shell command pwd and ls to see where IPython thinks you are on your system, and the contents of the directory.
hint: Be sure to remove the "#complete"s below when you've done so. IPython will interpret that as part of the shell command if you don't
End of explanation
%%sh
#complete
Explanation: 0b: Try a multi-line shell command
IPython magics often support "cell" magics by having %%<command> at the top of a cell. Use that to cd into the directory below this one ("..") and then ls inside that directory.
Hint: if you need syntax tips, run the magic() function and look for the ! or !! commands
End of explanation
! #complete
Explanation: 0c: Create a new directory from Jupyter
While you can do this almost as easily with os.mkdir in Python, for this case try to do it using shell magics instead. Make a new directory in the directory you are currently in. Use your system file browser to ensure you were sucessful.
End of explanation
%cd -0 #complete
Explanation: 0d: Change directory to your new directory
One thing about shell commands is that they always start wherever you started your IPython instance. So doing cd as a shell command only changes things temporarily (i.e. within that shell command). IPython provides a %cd magic that makes this change last, though. Use this to %cd into the directory you just created, and then use the pwd shell command to ensure this cd "stuck" (You can also try doing cd as a shell command to prove to yourself that it's different from the %cd magic.)
End of explanation
!mkdir #complete only if you didn't do 0c, or want a different name for your code directory
%%file <yourdirectory>/code.py
def do_something():
# complete
print(something)# this will make it much easier in future problems to see that something is actually happening
Explanation: Final note: %cd -0 is a convenient shorthand to switch back to the initial directory.
Problem 1: Creating a bare-bones repo and getting it on Github
Here we'll create a simple (public) code repository with a minimal set of content, and publish it in github.
1a: Create a basic repository locally
Start by creating the simplest possible code repository, composed of a single code file. Create a directory (or use the one from 0c), and place a code.py file in it, with a bit of Python code of your choosing. (Bonus points for witty or sarcastic code...) You could even use non-Python code if you desired, although Problems 3 & 4 feature Python-specific bits so I wouldn't recommend it.
To make the file from the notebook, the %%file <filename> magic is a convenient way to write the contents of a notebook cell to a file.
End of explanation
%run <yourdirectory>/code.py # complete
do_something()
Explanation: If you want to test-run your code:
End of explanation
%cd # complete
!git init
!git add code.py
!git commit -m #complete
Explanation: 1b: Convert the directory into a git repo
Make that code into a git repository by doing git init in the directory you created, then git add and git commit.
End of explanation
!git remote add <yourgithubusername> <the url github shows you on the repo web page> #complete
!git push <yourgithubusername> master -u
Explanation: 1c: Create a repository for your code in Github
Go to github's web site in your web browser. If you do not have a github account, you'll need to create one (follow the prompts on the github site).
Once you've got an account, you'll need to make sure your git client can authenticate with github. If you're using a GUI, you'll have to figure it out (usually it's pretty easy). On the command line you have two options:
* The simplest way is to connect to github using HTTPS. This requires no initial setup, but git will prompt you for your github username and password every so often.
* If you find that annoying, you can set up your system to use SSH to talk to github. Look for the "SSH and GPG keys" section of your settings on github's site, or if you're not sure how to work with SSH keys, check out github's help on the subject.
Once you've got github set up to talk to your computer, you'll need to create a new repository for the code you created. Hit the "+" in the upper-right, create a "new repository" and fill out the appropriate details (don't create a README just yet).
To be consistent, it's recommended using the same name for your repository as the local directory name you used. But that is not a requirement, just a recommendation.
Once you've created the repository, connect your local repository to github and push your changes up to github.
End of explanation
%%file README.md
# complete
Explanation: The -u is a convenience that means from then on you can use just git push and git pull to send your code to and from github.
1e: Modify the code and send it back up to github
Proper documentation is important. But for now make sure to add a README to your code repository. Always add a README with basic documentation. Always. Even if only you are going to use this code, trust me, your future self will be very happy you did it.
You can just call it README, but to get it rendered nicely on the github repository, you can call it README.md and write it using markdown syntax, README.rst in ReST, or various other similar markup languages github understands. If you don't know/care, just use README.md, as that's pretty standard at this point.
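For example (purely illustrative; use your own package name and wording), a minimal README.md might read:
# mypackage
A tiny example package. Import it and call `do_something()` to see it in action.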
End of explanation
!git #complete
Explanation: Now add it to the repository via git commit, and push up to github...
End of explanation
!git #complete
Explanation: 1f: Choose a License
I bet you didn't expect to be reading legalese today... but it turns out this is important. If you do not explicitly license your code, in most countries (including the US and EU) it is technically illegal for anyone to use your code for any purpose other than just looking at it.
(Un?)Fortunately, there are a lot of possible open source licenses out there. Assuming you want an open license, the best resource is the "Choose a License" website. Have a look over the options there and decide which you think is appropriate for your code.
Once you've chosen a License, grab a copy of the license text, and place it in your repository as a file called LICENSE (or LICENSE.md or the like). Some licenses might also suggest you place the license text or just a copyright notice in the source code as well, but that's up to you.
Once you've done that, do as we've done before: push all your additions up to github. If you've done it right, github will automatically figure out your license and show it in the upper-right corner of your repo's github page.
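One possible sequence of commands (assuming you named the file LICENSE; adjust the commit message to taste):
!git add LICENSE
!git commit -m "add a license"
!git push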
End of explanation
# Don't forget to do this cd or something like it... otherwise you'll clone *inside* your repo
%cd -0
!git clone <url from github>#complete
%cd <reponame>#complete
Explanation: Problem 2: Collaborating with others' repos
One very important advantage of working in repositories is that sharing the code becomes much easier: others (and your future self) can have a look at it, use it, and contribute to it. So now we'll have you try modifying your neighbor's project using github's Pull Request feature.
2a: Get (git?) your neighbor's code repo
Find someone sitting near you who has gotten through Problem 1. Ask them their github user name and the name of their repository.
Once you've got the name of their repo, navigate to it on github. The URL pattern is always "https://www.github.com/username/reponame". Use the github interface to "fork" that repo, yielding a "yourusername/reponame" repository. Go to that one, take note of the URL needed to clone it (you'll need to grab it from the repo web page, either in "HTTPS" or "SSH" form, depending on your choice in 1a). Then clone that onto your local machine.
End of explanation
!git branch <name-of-branch>#complete
Explanation: 2c: create a branch for your change
You're going to make some changes to their code, but who knows... maybe they'll spend so long reviewing it that you'll want to make another change in the meantime. So it's always best to make your changes in a specific "branch" dedicated to that change, which means we need to make a git branch.
A super useful site for learning more about branching and practicing different scenarios (feel free to check it out now):
https://learngitbranching.js.org/
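For example (the branch name here is just a placeholder):
!git branch fix-readme-typo
!git checkout fix-readme-typo
# or create and switch in a single step:
# !git checkout -b fix-readme-typo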
End of explanation
!git add <files modified>#complete
!git commit -m ""#complete
Explanation: 2c: modify the code
Make some change to their code repo. Usually this would be a new feature or a bug fix or documentation clarification or the like... But it's up to you.
Once you've done that, be sure to commit the change locally.
End of explanation
!git push origin <name-of-branch>#complete
Explanation: and push it up (to a branch on your github fork).
End of explanation
!git #complete
Explanation: 2d: Issue a pull request
Now use the github interface to create a new "pull request". Once you've pushed your new branch up, you'll see a prompt to do this automatically appear on your fork's web page. But if you don't, use the "branches" drop-down to navigate to the new branch, and then hit the "pull request" button. That should show you an interface that you can use to leave a title and description (in github markdown), and then submit the PR. Go ahead and do this.
2e: Have them review the PR
Tell your neighbor that you've issued the PR. They should be able to go to their repo, and see that a new pull request has been created. There they'll review the PR, possibly leaving comments for you to change. If so, go to 2f, but if not, they should hit the "Merge" button, and you can jump to 2g.
2f: (If necessary) make changes and update the code
If they left you some comments that require changing prior to merging, you'll need to make those changes in your local copy, commit those changes, and then push them up to your branch on your fork.
End of explanation
!git remote add <neighbors-username> <url-from-neighbors-github-repo> #complete
!git fetch <neighbors-username> #complete
!git branch --set-upstream-to=<neighbors-username>/master master
!git checkout master
!git pull
Explanation: Hopefully they are now satisfied and are willing to hit the merge button.
2g: Get the updated version
Now you should get the up-to-date version from the original owner of the repo, because that way you'll have both your changes and any other changes they might have made in the meantime. To do this you'll need to connect your local copy to your neighbor's github repo (not your fork).
End of explanation
!mkdir <yourpkgname>#complete
!git mv code.py <yourpkgname>#complete
#The "touch" unix command simply creates an empty file if there isn't one already.
#You could also use an editor to create an empty file if you prefer.
!touch <yourpkgname>/__init__.py#complete
Explanation: Now if you look at the local repo, it should include your changes.
Suggestion: You may want to change the "origin" remote to your username. E.g. git remote rename origin <yourusername>. To go further, you might even delete your fork's master branch, so that only your neighbor's master exists. That might save you headaches in the long run if you were to ever access this repo again in the future.
2h: Have them reciprocate
Science (Data or otherwise) and open source code are social enterprises built on shared effort, mutual respect, and trust. So ask them to issue a PR against your code, too. The more we can stand on each others' shoulders, the farther we will all see.
Hint: Ask them nicely. Maybe offer a cookie or something?
Problem 3: Setting up a bare-bones Python Package
Up to this point we've been working on the simplest possible shared code: a single file with all the content. But for most substantial use cases this isn't going to cut it. After all, Python was designed around the idea of namespaces that let you hide away or show code to make writing, maintaining, and versioning code much easier. But to make use of these, we need to deploy the installation tools that Python provides. This is typically called "packaging". In this problem we will take the code you just made and build it into a proper Python package that can be installed and then used anywhere.
For more background and detail (and the most up-to-date recommendations) see the Python Packaging Guide.
3a: Set up a Python package structure for your code
First we adjust the structure of your code from Problem 1 to allow it to live in a package structure rather than as a stand-alone .py file. All you need to do is create a directory, move the code.py file into that directory, and add a file (can be empty) called __init__.py into the directory.
You'll have to pick a name for the package, which is usually the same as the repo name (although that's not strictly required; a notable exception is e.g. scikit-learn vs sklearn).
Hint: don't forget to switch back to your code repo directory, if you are doing this immediately after Problem 2.
End of explanation
from <yourpkgname> import code#complete
#if your code.py has a function called `do_something` as in the example above, you can now run it like:
code.do_something()
Explanation: 3b: Test your package
You should now be able to import your package and the code inside it as though it were some installed package like numpy, astropy, pandas, etc.
End of explanation
%%file <yourpkgname>/__init__.py
#complete
Explanation: 3c: Apply packaging tricks
One of the nice things about packages is that they let you hide the implementation of some part of your code in one place while exposing a "cleaner" namespace to the users of your package. To see a (trivial) example of this, let's pull a function from your code.py into the base namespace of the package. In the below make the __init__.py have one line: from .code import do_something. That places the do_something() function into the package's root namespace.
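For example, if your package directory is named mypkg (substitute your own package name), the cell would be filled in as:
%%file mypkg/__init__.py
from .code import do_something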
End of explanation
import <yourpkgname>#complete
<yourpkgname>.do_something()#complete
Explanation: Now the following should work.
End of explanation
from importlib import reload
reload(<yourpkgname>)#complete
<yourpkgname>.do_something()#complete
Explanation: BUT you will probably get an error here. That's because Python is smart about imports: once it's imported a package it won't re-import it later. Usually that saves time, but here it's a hassle. Fortunately, we can use the reload function to get around this:
End of explanation
%%file setup.py
#!/usr/bin/env python
from distutils.core import setup
setup(name='<yourpkgname>',
version='0.1dev',
description='<a description>',
author='<your name>',
author_email='<youremail>',
packages=['<yourpkgname>'],
) #complete
Explanation: 3d: Create a setup.py file
Ok, that's great in a pinch, but what if you want your package to be available from other directories? If you open a new terminal somewhere else and try to import <yourpkgname> you'll see that it will fail, because Python doesn't know where to find your package. Fortunately, Python (both the language and the larger ecosystem) provides built-in tools to install packages. These are built around creating a setup.py script that controls installation of a Python package into a shared location on your machine. Essentially all Python packages are installed this way, even if it happens silently behind-the-scenes.
Below is a template bare-bones setup.py file. Fill it in with the relevant details for your package.
End of explanation
!python setup.py build
Explanation: 3e: Build the package
Now you should be able to "build" the package. In complex packages this will involve more involved steps like linking against C or FORTRAN code, but for pure-python packages like yours, it simply involves filtering out some extraneous files and copying the essential pieces into a build directory.
End of explanation
%%sh
cd build/lib.X-Y-Z #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
Explanation: To test that it built sucessfully, the easiest thing to do is cd into the build/lib.X-Y-Z directory ("X-Y-Z" here is OS and machine-specific). Then you should be able to import <yourpkgname>. It's usually best to do this as a completely independent process in python. That way you can be sure you aren't accidentally using an old import as we saw above.
End of explanation
%%sh
conda create -n test_<yourpkgname> anaconda #complete
source activate test_<yourpkgname> #complete
python setup.py install
Explanation: 3f: Install the package
Alright, now that it looks like it's all working as expected, we can install the package. Note that if we do this willy-nilly, we'll end up with lots of packages, perhaps with the wrong versions, and it's easy to get confused about what's installed (there's no reliable uninstall command...) So before installing we first create a virtual environment using Anaconda, and install into that. If you don't have anaconda or a similar virtual environment scheme, you can just do python setup.py install. But just remember that this will be difficult to back out (hence the reason for Python environments in the first place!)
End of explanation
%%sh
cd $HOME
source activate test_<yourpkgname> #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
Explanation: Now we can try running the package from anywhere (not just the source code directory), as long as we're in the same environment that we installed the package in.
End of explanation
!git #complete
Explanation: 3g: Update the package on github
OK, it's now installable. You'll now want to make sure to update the github version to reflect these improvements. You'll need to add and commit all the files. You'll also want to update the README to instruct users that they should use python setup.py install to install the package.
End of explanation
%%file -a ~/.pypirc
[distutils]
index-servers = pypi
[pypi]
repository = https://test.pypi.org/legacy/
username = <your user name goes here>
password = <your password goes here>
Explanation: Problem 4: Publishing your package on (fake) PyPI
Your package can now be installed by anyone who comes across it on github. But it tends to scare some people that they need to download the source code and know git to use your code. The Python Package Index (PyPI), combined with the pip tool (now standard in Python), provides a much simpler way to distribute code. Here we will publish your code to a testing version of PyPI.
4a: Create a PyPI account
First you'll need an account on PyPI to register new packages. Go to the testing PyPI, and register. You'll also need to supply your login details in the .pypirc file in your home directory as shown below. (If it were the real PyPI you'd want to be more secure and not have your password in plain text. But for the testing server that's not really an issue.)
Note that if you've ever done something like this before and hence already have a .pypirc file, you might get unexpected results if you run this without moving/renaming the old version temporarily.
End of explanation
!python setup.py sdist
Explanation: 4b: Build a "source" version of your package
Use distutils to create the source distribution of your package.
Hint: You'll want to make sure your package version is something you want to release before executing the upload command. Released versions can't be duplicates of existing versions, and shouldn't end in "dev" or "b" or the like.
End of explanation
!twine upload dist/<yourpackage>-<version>
Explanation: Verify that there is a <yourpkg>-<version>.tar.gz file in the dist directory. It should have all of the source code necessary for your package.
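If you want to double-check what went into the archive, listing the tarball's contents is a quick sanity check (sdist produces a gzipped tar file by default):
!ls dist/
!tar -tzf dist/<yourpackage>-<version>.tar.gz #complete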
4c: Upload your package to PyPI
Once you have an account on PyPI (or testPyPI in our case) you can upload your distributions to PyPI using twine. If this is your first time uploading a distribution for a new project, twine will handle registering the project automatically, filling out the details you provided in your setup.py.
End of explanation
%%sh
conda create -n test_pypi_<yourpkgname> anaconda #complete
source activate test_pypi_<yourpkgname> #complete
pip install -i https://testpypi.python.org/pypi <yourpkgname>
%%sh
cd $HOME
source activate test_pypi_<yourpkgname> #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
Explanation: 4d: Install your package with pip
The pip tool is a convenient way to install packages from PyPI. Again, we use Anaconda to create a testing environment to make sure everything worked correctly.
(Normally the -i wouldn't be necessary - we're using it here only because we're using the "testing" PyPI)
End of explanation |
9,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ROMS Ocean Model Example
The Regional Ocean Modeling System (ROMS) is an open source hydrodynamic model that is used for simulating currents and water properties in coastal and estuarine regions. ROMS is one of a few standard ocean models, and it has an active user community.
ROMS uses a regular C-Grid in the horizontal, similar to other structured grid ocean and atmospheric models, and a stretched vertical coordinate (see the ROMS documentation for more details). Both of these require special treatment when using xarray to analyze ROMS ocean model output. This example notebook shows how to create a lazily evaluated vertical coordinate, and make some basic plots. The xgcm package is required to do analysis that is aware of the horizontal C-Grid.
Step1: Load a sample ROMS file. This is a subset of a full model available at
http
Step2: Add a lazily calculated vertical coordinate
Write equations to calculate the vertical coordinate. These will be only evaluated when data is requested. Information about the ROMS vertical coordinate can be found (here)[https
Step3: A naive vertical slice
Creating a slice using the s-coordinate as the vertical dimension is typically not very informative.
Step4: We can feed coordinate information to the plot method to give a more informative cross-section that uses the depths. Note that we did not need to slice the depth or longitude information separately, this was done automatically as the variable was sliced.
Step5: A plan view
Now make a naive plan view, without any projection information, just using lon/lat as x/y. This looks OK, but will appear compressed because lon and lat do not have an aspect constrained by the projection.
Step6: And let's use a projection to make it nicer, and add a coast. | Python Code:
import numpy as np
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
%matplotlib inline
import xarray as xr
Explanation: ROMS Ocean Model Example
The Regional Ocean Modeling System (ROMS) is an open source hydrodynamic model that is used for simulating currents and water properties in coastal and estuarine regions. ROMS is one of a few standard ocean models, and it has an active user community.
ROMS uses a regular C-Grid in the horizontal, similar to other structured grid ocean and atmospheric models, and a stretched vertical coordinate (see the ROMS documentation for more details). Both of these require special treatment when using xarray to analyze ROMS ocean model output. This example notebook shows how to create a lazily evaluated vertical coordinate, and make some basic plots. The xgcm package is required to do analysis that is aware of the horizontal C-Grid.
End of explanation
# load in the file
ds = xr.tutorial.open_dataset("ROMS_example.nc", chunks={"ocean_time": 1})
# This is a way to turn on chunking and lazy evaluation. Opening with mfdataset, or
# setting the chunking in the open_dataset would also achieve this.
ds
Explanation: Load a sample ROMS file. This is a subset of a full model available at
http://barataria.tamu.edu/thredds/catalog.html?dataset=txla_hindcast_agg
The subsetting was done using the following command on one of the output files:
#open dataset
ds = xr.open_dataset('/d2/shared/TXLA_ROMS/output_20yr_obc/2001/ocean_his_0015.nc')
# Turn on chunking to activate dask and parallelize read/write.
ds = ds.chunk({'ocean_time': 1})
# Pick out some of the variables that will be included as coordinates
ds = ds.set_coords(['Cs_r', 'Cs_w', 'hc', 'h', 'Vtransform'])
# Select a a subset of variables. Salt will be visualized, zeta is used to
# calculate the vertical coordinate
variables = ['salt', 'zeta']
ds[variables].isel(ocean_time=slice(47, None, 7*24),
xi_rho=slice(300, None)).to_netcdf('ROMS_example.nc', mode='w')
So, the ROMS_example.nc file contains a subset of the grid, one 3D variable, and two time steps.
Load in ROMS dataset as an xarray object
End of explanation
if ds.Vtransform == 1:
Zo_rho = ds.hc * (ds.s_rho - ds.Cs_r) + ds.Cs_r * ds.h
z_rho = Zo_rho + ds.zeta * (1 + Zo_rho / ds.h)
elif ds.Vtransform == 2:
Zo_rho = (ds.hc * ds.s_rho + ds.Cs_r * ds.h) / (ds.hc + ds.h)
z_rho = ds.zeta + (ds.zeta + ds.h) * Zo_rho
ds.coords["z_rho"] = z_rho.transpose() # needing transpose seems to be an xarray bug
ds.salt
Explanation: Add a lazily calculated vertical coordinate
Write equations to calculate the vertical coordinate. These will only be evaluated when data is requested. Information about the ROMS vertical coordinate can be found [here](https://www.myroms.org/wiki/Vertical_S-coordinate).
In short, for Vtransform==2 as used in this example,
$Z_0 = (h_c \, S + h \,C) / (h_c + h)$
$z = Z_0 (\zeta + h) + \zeta$
where the variables are defined as in the link above.
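For completeness, the corresponding expressions for Vtransform==1 (matching the first branch of the code above) are
$Z_0 = h_c \, (S - C) + C \, h$
$z = Z_0 + \zeta \, (1 + Z_0 / h)$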
End of explanation
ds.salt.isel(xi_rho=50, ocean_time=0).plot()
Explanation: A naive vertical slice
Creating a slice using the s-coordinate as the vertical dimension is typically not very informative.
End of explanation
section = ds.salt.isel(xi_rho=50, eta_rho=slice(0, 167), ocean_time=0)
section.plot(x="lon_rho", y="z_rho", figsize=(15, 6), clim=(25, 35))
plt.ylim([-100, 1]);
Explanation: We can feed coordinate information to the plot method to give a more informative cross-section that uses the depths. Note that we did not need to slice the depth or longitude information separately, this was done automatically as the variable was sliced.
End of explanation
ds.salt.isel(s_rho=-1, ocean_time=0).plot(x="lon_rho", y="lat_rho")
Explanation: A plan view
Now make a naive plan view, without any projection information, just using lon/lat as x/y. This looks OK, but will appear compressed because lon and lat do not have an aspect constrained by the projection.
End of explanation
proj = ccrs.LambertConformal(central_longitude=-92, central_latitude=29)
fig = plt.figure(figsize=(15, 5))
ax = plt.axes(projection=proj)
ds.salt.isel(s_rho=-1, ocean_time=0).plot(
x="lon_rho", y="lat_rho", transform=ccrs.PlateCarree()
)
coast_10m = cfeature.NaturalEarthFeature(
"physical", "land", "10m", edgecolor="k", facecolor="0.8"
)
ax.add_feature(coast_10m)
Explanation: And let's use a projection to make it nicer, and add a coast.
End of explanation |
9,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2
This chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.
A little more on PyMC
Parent and Child relationships
To assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce parent and child variables.
parent variables are variables that influence another variable.
child variable are variables that are affected by other variables, i.e. are the subject of parent variables.
A variable can be both a parent and child. For example, consider the PyMC code below.
Step1: parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.
Likewise, data_generator is a parent to the variable data_plus_one (hence making data_generator both a parent and child variable). Although it does not look like one, data_plus_one should be treated as a PyMC variable as it is a function of another PyMC variable, hence is a child variable to data_generator.
This nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parent variables using the children and parents attributes attached to variables.
Step2: Of course a child can have more than one parent, and a parent can have many children.
PyMC Variables
All PyMC variables also expose a value attribute. This method produces the current (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before
Step3: PyMC is concerned with two types of programming variables
Step4: The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.
Warning
Step5: The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\lambda$ looked like
Step6: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable.
Inside the deterministic decorator, the Stochastic variables passed in behave like scalars or Numpy arrays (if multivariable), and not like Stochastic variables. For example, running the following
Step7: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
PyMC stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role
Step8: This is how we include data into our models
Step9: Finally...
We wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. I may or may not use this class in future examples ;)
Step10: Modeling approaches
A good starting point in Bayesian modeling is to think about how your data might have been generated. Put yourself in an omniscient position, and try to imagine how you would recreate the dataset.
In the last chapter we investigated text message data. We begin by asking how our observations may have been generated
Step11: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution
Step12: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example
Step13: 4. Plot the artificial dataset
Step14: It is okay that our fictional dataset does not look like our observed dataset
Step15: Later we will see how we use this to make predictions and test the appropriateness of our models.
Example
Step16: Had we had stronger beliefs, we could have expressed them in the prior above.
For this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution
Step17: The observed frequency is
Step18: We combine the observations into the PyMC observed variable, and run our inference algorithm
Step19: We plot the posterior distribution of the unknown $p_A$ below
Step20: Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
A and B Together
A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data )
Step21: Below we plot the posterior distributions for the three unknowns
Step22: Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable
Step23: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has less samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Try playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. Notice in all this, the difference in sample sizes between site A and site B was never mentioned
Step24: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
The expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.
Example
Step25: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students
Step26: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$
Step27: Although not everyone flips a second time, we can still model the possible realization of second coin-flips
Step28: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC deterministic variable
Step29: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails, and the second is heads, and are 0 else. Finally, the last line sums this vector and divides by float(N), produces a proportion.
Step30: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
The researchers observe a Binomial random variable, with N = 100 and p = observed_proportion with value = 35
Step31: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step32: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency?
I would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. Since we started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, we can be confident that there were cheaters.
This kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful.
Alternative PyMC Model
Given a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes
Step33: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
If we know the probability of respondents saying "Yes", which is p_skewed, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters N and p_skewed.
This is where we include our observed 35 "Yes" responses. In the declaration of the pm.Binomial, we include value = 35 and observed = True.
Step34: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step35: More PyMC Tricks
Protip
Step36: The remainder of this chapter examines some practical examples of PyMC and PyMC modeling
Step37: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
We need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t } } $$
In this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$.
Step38: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function
Step39: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
Let's start modeling this in PyMC. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.
Normal distributions
A Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters
Step40: A Normal random variable can be take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter
Step41: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like
Step42: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$
Step43: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0.
Regarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected).
Next, let's look at the expected probability for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.
Step44: Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.
An interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature.
Step45: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
More generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next
Step46: Is our model appropriate?
The skeptical reader will say "You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring
Step47: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
We wish to assess how good our model is. "Good" is a subjective term of course, so results must be relative to other models.
We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.
The following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here.
For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \;\text{Defect} = 1 | t, \alpha, \beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above
Step48: Next we sort each column by the posterior probabilities
Step49: We can present the above data better in a figure
Step50: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.
It is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others
Step51: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
The perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot infer any scientific inference from it.
Exercises
1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50?
2. Try plotting $\alpha$ samples versus $\beta$ samples. Why might the resulting plot look like this?
Step52: References
[1] Dalal, Fowlkes and Hoadley (1989),JASA, 84, 945-957.
[2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http | Python Code:
import pymc as pm
parameter = pm.Exponential("poisson_param", 1)
data_generator = pm.Poisson("data_generator", parameter)
data_plus_one = data_generator + 1
Explanation: Chapter 2
This chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.
A little more on PyMC
Parent and Child relationships
To assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce parent and child variables.
parent variables are variables that influence another variable.
child variable are variables that are affected by other variables, i.e. are the subject of parent variables.
A variable can be both a parent and child. For example, consider the PyMC code below.
End of explanation
print("Children of `parameter`: ")
print(parameter.children)
print("\nParents of `data_generator`: ")
print(data_generator.parents)
print("\nChildren of `data_generator`: ")
print(data_generator.children)
Explanation: parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.
Likewise, data_generator is a parent to the variable data_plus_one (hence making data_generator both a parent and child variable). Although it does not look like one, data_plus_one should be treated as a PyMC variable as it is a function of another PyMC variable, hence is a child variable to data_generator.
This nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parent variables using the children and parents attributes attached to variables.
End of explanation
print("parameter.value =", parameter.value)
print("data_generator.value =", data_generator.value)
print("data_plus_one.value =", data_plus_one.value)
Explanation: Of course a child can have more than one parent, and a parent can have many children.
PyMC Variables
All PyMC variables also expose a value attribute. This method produces the current (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before:
End of explanation
lambda_1 = pm.Exponential("lambda_1", 1) # prior on first behaviour
lambda_2 = pm.Exponential("lambda_2", 1) # prior on second behaviour
tau = pm.DiscreteUniform("tau", lower=0, upper=10) # prior on behaviour change
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value, "\n")
lambda_1.random(), lambda_2.random(), tau.random()
print("After calling random() on the variables...")
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value)
Explanation: PyMC is concerned with two types of programming variables: stochastic and deterministic.
stochastic variables are variables that are not deterministic, i.e., even if you knew all the values of the variables' parents (if it even has any parents), it would still be random. Included in this category are instances of classes Poisson, DiscreteUniform, and Exponential.
deterministic variables are variables that are not random if the variables' parents were known. This might be confusing at first: a quick mental check is if I knew all of variable foo's parent variables, I could determine what foo's value is.
We will detail each below.
Initializing Stochastic variables
Initializing a stochastic variable requires a name argument, plus additional parameters that are class specific. For example:
some_variable = pm.DiscreteUniform("discrete_uni_var", 0, 4)
where 0, 4 are the DiscreteUniform-specific lower and upper bound on the random variable. The PyMC docs contain the specific parameters for stochastic variables. (Or use object??, for example pm.DiscreteUniform?? if you are using IPython!)
The name attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the name.
For multivariable problems, rather than creating a Python array of stochastic variables, setting the size keyword in the call to a Stochastic variable creates a multivariate array of (independent) stochastic variables. The array behaves like a Numpy array when used like one, and references to its value attribute return Numpy arrays.
The size argument also solves the annoying case where you may have many variables $\beta_i, \; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:
beta_1 = pm.Uniform("beta_1", 0, 1)
beta_2 = pm.Uniform("beta_2", 0, 1)
...
we can instead wrap them into a single variable:
betas = pm.Uniform("betas", 0, 1, size=N)
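As a quick illustrative check (the size of 4 here is arbitrary), the value attribute of such a variable is a Numpy array:
betas = pm.Uniform("betas", 0, 1, size=4)
print(betas.value)        # a length-4 array of draws from Uniform(0, 1)
print(betas.value.shape)  # (4,)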
Calling random()
We can also call on a stochastic variable's random() method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.
End of explanation
type(lambda_1 + lambda_2)
Explanation: The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.
Warning: Don't update stochastic variables' values in-place.
Straight from the PyMC docs, we quote [4]:
Stochastic objects' values should not be updated in-place. This confuses PyMC's caching scheme... The only way a stochastic variable's value should be updated is using statements of the following form:
A.value = new_value
The following are in-place updates and should never be used:
A.value += 3
A.value[2,1] = 5
A.value.attribute = new_attribute_value
Deterministic variables
Since most variables you will be modeling are stochastic, we distinguish deterministic variables with a pymc.deterministic wrapper. (If you are unfamiliar with Python wrappers (also called decorators), that's no problem. Just prepend the pymc.deterministic decorator before the variable declaration and you're good to go. No need to know more. ) The declaration of a deterministic variable uses a Python function:
@pm.deterministic
def some_deterministic_var(v1=v1,):
#jelly goes here.
For all purposes, we can treat the object some_deterministic_var as a variable and not a Python function.
Prepending with the wrapper is the easiest way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:
End of explanation
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
out = np.zeros(n_data_points)
out[:tau] = lambda_1 # lambda before tau is lambda1
out[tau:] = lambda_2 # lambda after tau is lambda2
return out
Explanation: The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\lambda$ looked like:
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
And in PyMC code:
End of explanation
%matplotlib inline
from IPython.core.pylabtools import figsize
from matplotlib import pyplot as plt
figsize(12.5, 4)
samples = [lambda_1.random() for i in range(20000)]
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
Explanation: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable.
Inside the deterministic decorator, the Stochastic variables passed in behave like scalars or Numpy arrays (if multivariable), and not like Stochastic variables. For example, running the following:
@pm.deterministic
def some_deterministic(stoch=some_stochastic_var):
return stoch.value**2
will return an AttributeError detailing that stoch does not have a value attribute. It simply needs to be stoch**2. During the learning phase, it's the variable's value that is repeatedly passed in, not the actual variable.
Notice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables must have default values.
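To make that concrete, here is a minimal working version of the snippet above (the Uniform variable is just a stand-in for some_stochastic_var):
some_stochastic_var = pm.Uniform("some_uniform", 0, 1)

@pm.deterministic
def some_deterministic(stoch=some_stochastic_var):
    return stoch ** 2  # use stoch directly, not stoch.value

print(some_deterministic.value)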
Including observations in the Model
At this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like "What does my prior distribution of $\lambda_1$ look like?"
End of explanation
data = np.array([10, 5])
fixed_variable = pm.Poisson("fxd", 1, value=data, observed=True)
print("value: ", fixed_variable.value)
print("calling .random()")
fixed_variable.random()
print("value: ", fixed_variable.value)
Explanation: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
PyMC stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be an Numpy array for speed). For example:
End of explanation
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
obs = pm.Poisson("obs", lambda_, value=data, observed=True)
print(obs.value)
Explanation: This is how we include data into our models: initializing a stochastic variable to have a fixed value.
To complete our text message example, we fix the PyMC variable observations to the observed dataset.
End of explanation
model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
Explanation: Finally...
We wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. I may or may not use this class in future examples ;)
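For instance, the sampler introduced in Chapter 3 can also be handed the variables directly instead of a Model (a sketch only; sampling itself is covered later):
mcmc = pm.MCMC([obs, lambda_, lambda_1, lambda_2, tau])  # construct only; no sampling yet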
End of explanation
tau = pm.rdiscrete_uniform(0, 80)
print(tau)
Explanation: Modeling approaches
A good starting point in Bayesian modeling is to think about how your data might have been generated. Put yourself in an omniscient position, and try to imagine how you would recreate the dataset.
In the last chapter we investigated text message data. We begin by asking how our observations may have been generated:
We started by thinking "what is the best random variable to describe this count data?" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.
Next, we think, "Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?" Well, the Poisson distribution has a parameter $\lambda$.
Do we know $\lambda$? No. In fact, we have a suspicion that there are two $\lambda$ values, one for the earlier behaviour and one for the latter behaviour. We don't know when the behaviour switches though, but call the switchpoint $\tau$.
What is a good distribution for the two $\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well the exponential distribution has a parameter too, call it $\alpha$.
Do we know what the parameter $\alpha$ might be? No. At this point, we could continue and assign a distribution to $\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\lambda$, ("it probably changes over time", "it's likely between 10 and 30", etc.), we don't really have any strong beliefs about $\alpha$. So it's best to stop here.
What is a good value for $\alpha$ then? We think that the $\lambda$s are between 10-30, so if we set $\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similar, a too-high alpha misses our prior belief as well. A good idea for $\alpha$ as to reflect our belief is to set the value so that the mean of $\lambda$, given $\alpha$, is equal to our observed mean. This was shown in the last chapter.
We have no expert opinion of when $\tau$ might have occurred. So we will suppose $\tau$ is from a discrete uniform distribution over the entire timespan.
Below we give a graphical visualization of this, where arrows denote parent-child relationships. (provided by the Daft Python library )
<img src="http://i.imgur.com/7J30oCG.png" width = 700/>
PyMC, and other probabilistic programming languages, have been designed to tell these data-generation stories. More generally, B. Cronin writes [5]:
Probabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.
Same story; different ending.
Interestingly, we can create new datasets by retelling the story.
For example, if we reverse the above steps, we can simulate a possible realization of the dataset.
1. Specify when the user's behaviour switches by sampling from $\text{DiscreteUniform}(0, 80)$:
End of explanation
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
print(lambda_1, lambda_2)
Explanation: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution:
End of explanation
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
Explanation: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example:
End of explanation
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
Explanation: 4. Plot the artificial dataset:
End of explanation
def plot_artificial_sms_dataset():
tau = pm.rdiscrete_uniform(0, 80)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlim(0, 80)
figsize(12.5, 5)
plt.suptitle("More examples of artificial datasets", fontsize=14)
for i in range(1, 5):
plt.subplot(4, 1, i)
plot_artificial_sms_dataset()
Explanation: It is okay that our fictional dataset does not look like our observed dataset: the probability is incredibly small it indeed would. PyMC's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability.
The ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:
End of explanation
import pymc as pm
# The parameters are the bounds of the Uniform.
p = pm.Uniform('p', lower=0, upper=1)
Explanation: Later we will see how we use this to make predictions and test the appropriateness of our models.
Example: Bayesian A/B testing
A/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results.
Similarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards.
Often, the post-experiment analysis is done using something called a hypothesis test like difference of means test or difference of proportions test. This involves often misunderstood quantities like a "Z-score" and even more confusing "p-values" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural.
A Simple Case
As this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true probability, $0 \lt p_A \lt 1$, that a user who is shown site A eventually purchases from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us.
Suppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \frac{n}{N}$. Unfortunately, the observed frequency $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$. Knowing the true frequency of events like:
fraction of users who make purchases,
frequency of social attributes,
percent of internet users with cats etc.
are common requests we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data.
The observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.
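As a quick illustration of this gap (a minimal sketch, assuming numpy is already imported as np), we can simulate 100 rolls of a fair die and compare the observed frequency of 1's against the true frequency of $\frac{1}{6}$:
rolls = np.random.randint(1, 7, size=100)  # 100 rolls of a fair six-sided die
print("observed frequency of 1's: %.3f" % (rolls == 1).mean())
print("true frequency: %.3f" % (1. / 6))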
With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be.
To set up a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:
End of explanation
# set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = pm.rbernoulli(p_true, N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(occurrences.sum())
Explanation: Had we had stronger beliefs, we could have expressed them in the prior above.
For this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\ \sim \text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.
End of explanation
# Occurrences.mean is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % occurrences.mean())
print("Does this equal the true frequency? %s" % (occurrences.mean() == p_true))
Explanation: The observed frequency is:
End of explanation
# include the observations, which are Bernoulli
obs = pm.Bernoulli("obs", p, value=occurrences, observed=True)
# To be explained in chapter 3
mcmc = pm.MCMC([p, obs])
mcmc.sample(18000, 1000)
Explanation: We combine the observations into the PyMC observed variable, and run our inference algorithm:
End of explanation
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", normed=True)
plt.legend();
Explanation: We plot the posterior distribution of the unknown $p_A$ below:
End of explanation
import pymc as pm
figsize(12, 4)
# these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
# notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
# generate some observations
observations_A = pm.rbernoulli(true_p_A, N_A)
observations_B = pm.rbernoulli(true_p_B, N_B)
print("Obs from Site A: ", observations_A[:30].astype(int), "...")
print("Obs from Site B: ", observations_B[:30].astype(int), "...")
print(observations_A.mean())
print(observations_B.mean())
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
return p_A - p_B
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)
# To be explained in chapter 3.
mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
Explanation: Our posterior distribution puts most weight near the true value of $p_A$, but also some weight in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
A and B Together
A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data )
End of explanation
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]
figsize(12.5, 10)
# histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
Explanation: Below we plot the posterior distributions for the three unknowns:
End of explanation
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
(delta_samples < 0).mean())
print("Probability site A is BETTER than site B: %.3f" % \
(delta_samples > 0).mean())
Explanation: Notice that as a result of N_B < N_A, i.e. we have fewer data points from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable:
End of explanation
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
Explanation: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has fewer samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Try playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis.
I hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation.
An algorithm for human deceit
Social data has an additional layer of interest as people are not always honest with responses, which adds a further complication into inference. For example, simply asking individuals "Have you ever cheated on a test?" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is less than your observed rate (assuming individuals lie only about not cheating; I cannot imagine one who would admit "Yes" to cheating when in fact they hadn't cheated).
To present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.
The Binomial Distribution
The binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. The mass distribution looks like:
$$P( X = k ) = {{N}\choose{k}} p^k(1-p)^{N-k}$$
If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \sim \text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \le X \le N$), and $p$ is the probability of a single event. The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters.
End of explanation
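As a quick numeric check of the mean formula above -- a minimal sketch reusing the binomial alias (stats.binom) defined in the plotting cell:
# E[X] = N*p for a Binomial random variable; scipy agrees:
print(binomial.mean(10, .4))  # 4.0
print(binomial.mean(10, .9))  # 9.0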
import pymc as pm
N = 100
p = pm.Uniform("freq_cheating", 0, 1)
Explanation: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
The expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.
Example: Cheating among students
We will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assuming each student is interviewed post-exam (answering without consequence), we will receive integer $X$ "Yes I did cheat" answers. We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$.
This is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better algorithm to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness:
In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers "Yes, I did cheat" if the coin flip lands heads, and "No, I did not cheat", if the coin flip lands tails. This way, the interviewer does not know if a "Yes" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers.
I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some Yes's are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of liars.
Suppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\text{Uniform}(0,1)$ prior.
End of explanation
true_answers = pm.Bernoulli("truths", p, size=N)
Explanation: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.
End of explanation
first_coin_flips = pm.Bernoulli("first_flips", 0.5, size=N)
print(first_coin_flips.value)
Explanation: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails.
End of explanation
second_coin_flips = pm.Bernoulli("second_flips", 0.5, size=N)
Explanation: Although not everyone flips a second time, we can still model the possible realization of second coin-flips:
End of explanation
@pm.deterministic
def observed_proportion(t_a=true_answers,
fc=first_coin_flips,
sc=second_coin_flips):
observed = fc * t_a + (1 - fc) * sc
return observed.sum() / float(N)
Explanation: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC deterministic variable:
End of explanation
observed_proportion.value
Explanation: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated, or ii) the first toss is tails and the second is heads, and are 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion.
End of explanation
X = 35
observations = pm.Binomial("obs", N, observed_proportion, observed=True,
value=X)
Explanation: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
The researchers observe a Binomial random variable, with N = 100 and p = observed_proportion with value = 35:
End of explanation
model = pm.Model([p, true_answers, first_coin_flips,
second_coin_flips, observed_proportion, observations])
# To be explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(40000, 15000)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
Explanation: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
End of explanation
p = pm.Uniform("freq_cheating", 0, 1)
@pm.deterministic
def p_skewed(p=p):
return 0.5 * p + 0.25
Explanation: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 and 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a 0.3-length window in which the true value most likely lives. Have we even gained anything, or are we still too uncertain about the true frequency?
I would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. Since we started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, we can be confident that there were cheaters.
This kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful.
Alternative PyMC Model
Given a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes:
\begin{align}
P(\text{"Yes"}) &= P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\
& = \frac{1}{2}p + \frac{1}{2}\frac{1}{2}\\
& = \frac{p}{2} + \frac{1}{4}
\end{align}
Thus, knowing $p$ we know the probability a student will respond "Yes". In PyMC, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$:
End of explanation
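As a sanity check on the algebra above (a minimal sketch, assuming numpy is available as np), we can simulate the Privacy Algorithm directly for a fixed, hypothetical p and confirm the fraction of "Yes" answers is close to p/2 + 1/4:
p_fixed = 0.3  # hypothetical cheating rate, used only for this check
n_students = 100000
cheated = np.random.rand(n_students) < p_fixed
first_flip = np.random.rand(n_students) < 0.5
second_flip = np.random.rand(n_students) < 0.5
# answer truthfully on heads; otherwise report the second flip
says_yes = np.where(first_flip, cheated, second_flip)
print("simulated P(Yes): %.3f" % says_yes.mean())
print("p/2 + 1/4:        %.3f" % (p_fixed / 2. + 0.25))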
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed,
value=35, observed=True)
Explanation: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
If we know the probability of respondents saying "Yes", which is p_skewed, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters N and p_skewed.
This is where we include our observed 35 "Yes" responses. In the declaration of the pm.Binomial, we include value = 35 and observed = True.
End of explanation
model = pm.Model([yes_responses, p_skewed, p])
# To Be Explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(25000, 2500)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
Explanation: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
End of explanation
N = 10
x = np.empty(N, dtype=object)
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)
Explanation: More PyMC Tricks
Protip: Lighter deterministic variables with Lambda class
Sometimes writing a deterministic function using the @pm.deterministic decorator can seem like a chore, especially for a small function. I have already mentioned that elementary math operations can produce deterministic variables implicitly, but what about operations like indexing or slicing? Built-in Lambda functions can handle this with the elegance and simplicity required. For example,
beta = pm.Normal("coefficients", 0, 1, size=(N, 1))
x = np.random.randn(N, 1)
linear_combination = pm.Lambda(lambda x=x, beta=beta: np.dot(x.T, beta))
Protip: Arrays of PyMC variables
There is no reason why we cannot store multiple heterogeneous PyMC variables in a Numpy array. Just remember to set the dtype of the array to object upon initialization. For example:
End of explanation
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
# drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
# plot it, as a function of temperature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
Explanation: The remainder of this chapter examines some practical examples of PyMC and PyMC modeling:
Example: Challenger Space Shuttle Disaster <span id="challenger"/>
On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see [1]):
End of explanation
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.title("Logistic functon plotted for several value of $\\beta$ parameter", fontsize=14)
plt.legend();
Explanation: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
We need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t } } $$
In this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$.
End of explanation
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.title("Logistic functon with bias, plotted for several value of $\\alpha$ bias parameter", fontsize=14)
plt.legend(loc="lower left");
Explanation: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function:
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t + \alpha } } $$
Some plots are below, with differing $\alpha$.
End of explanation
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
Explanation: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
Let's start modeling this in PyMC. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.
Normal distributions
A Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the precision, $\tau$. Those already familiar with the Normal distribution have probably seen $\sigma^2$ instead of $\tau^{-1}$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive.
The probability density function of a $N( \mu, 1/\tau)$ random variable is:
$$ f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right) $$
We plot some different density functions below.
End of explanation
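Because the precision parameterization is easy to confuse with the variance parameterization, here is a minimal sketch (assuming numpy is imported as np) that draws samples for a given $\tau$ and checks that the sample variance is close to $1/\tau$:
tau_example = 2.8
draws = np.random.normal(0, 1. / np.sqrt(tau_example), size=100000)
print("sample variance: %.3f" % draws.var())
print("1/tau:           %.3f" % (1. / tau_example))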
import pymc as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
# notice the`value` here. We explain why below.
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)
@pm.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
return 1.0 / (1. + np.exp(beta * t + alpha))
Explanation: A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter:
$$ E[ X | \mu, \tau] = \mu$$
and its variance is equal to the inverse of $\tau$:
$$Var( X | \mu, \tau ) = \frac{1}{\tau}$$
Below we continue our modeling of the Challenger space craft:
End of explanation
p.value
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
observed = pm.Bernoulli("bernoulli_obs", p, value=D, observed=True)
model = pm.Model([observed, beta, alpha])
# Mysterious code to be explained in Chapter 3
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(120000, 100000, 2)
Explanation: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like:
$$ \text{Defect Incident, $D_i$} \sim \text{Ber}( \;p(t_i)\; ), \;\; i=1..N$$
where $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the values of beta and alpha to 0. The reason for this is that if beta and alpha are very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient values to 0, we set the variable p to be a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. It is simply a computational caveat in PyMC.
End of explanation
alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]
figsize(12.5, 6)
# histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
Explanation: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$:
End of explanation
t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
Explanation: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0.
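These claims are easy to verify numerically -- a minimal sketch reusing the trace arrays computed above:
print("fraction of beta samples > 0:  %.3f" % (beta_samples > 0).mean())
print("fraction of alpha samples < 0: %.3f" % (alpha_samples < 0).mean())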
Regarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected).
Next, let's look at the expected probability for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.
End of explanation
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
Explanation: Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.
An interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature.
End of explanation
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
Explanation: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
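To read such an interval off directly for a single temperature -- a minimal sketch reusing logistic, the posterior samples, and mquantiles from the cells above, with 65 degrees chosen only as an illustration:
prob_65 = logistic(65, beta_samples, alpha_samples)
print(mquantiles(prob_65, [0.025, 0.975]))  # approximate 95% interval at t = 65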
More generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 temperature to get a better estimate of probabilities in that range. Similarly, when reporting to scientists your estimates, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is.
What about the day of the Challenger disaster?
On the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.
End of explanation
simulated = pm.Bernoulli("bernoulli_sim", p)
N = 10000
mcmc = pm.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i + 1)
plt.scatter(temperature, simulations[1000 * i, :], color="k",
s=50, alpha=0.6)
Explanation: Is our model appropriate?
The skeptical reader will say "You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's goodness of fit.
We can think: how can we test whether our model is a bad fit? An idea is to compare observed data (which if we recall is a fixed stochastic variable) with an artificial dataset which we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then our model is likely not accurately representing the observed data.
Previously in this Chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. We only need to create a new Stochastic variable, that is exactly the same as our variable that stored the observations, but minus the observations themselves. If you recall, our Stochastic variable that stored our observed data was:
observed = pm.Bernoulli( "bernoulli_obs", p, value=D, observed=True)
Hence we create:
simulated_data = pm.Bernoulli("simulation_data", p)
Let's simulate 10 000:
End of explanation
posterior_probability = simulations.mean(axis=0)
print("posterior prob of defect | realized defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[i], D[i]))
Explanation: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
We wish to assess how good our model is. "Good" is a subjective term of course, so results must be relative to other models.
We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.
The following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here.
For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \;\text{Defect} = 1 | t, \alpha, \beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:
End of explanation
ix = np.argsort(posterior_probability)
print("probb | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
Explanation: Next we sort each column by the posterior probabilities:
End of explanation
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
Explanation: We can present the above data better in a figure: I've wrapped this up into a separation_plot function.
End of explanation
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7. / 23 * np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
Explanation: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.
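That expected count is simply the sum of the posterior probabilities -- a minimal sketch reusing the arrays computed above:
expected_defects = posterior_probability.sum()
print("expected defects under the model: %.2f (observed: %d)" % (expected_defects, int(D.sum())))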
It is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others:
the perfect model, which predicts the posterior probability to be equal to 1 if a defect did occur.
a completely random model, which predicts random probabilities regardless of temperature.
a constant model: where $P(D = 1 \; | \; t) = c, \;\; \forall t$. The best choice for $c$ is the observed frequency of defects, in this case 7/23.
End of explanation
# type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$");
Explanation: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
In the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course, the perfect model is only for demonstration, and we cannot draw any scientific inference from it.
Exercises
1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50?
2. Try plotting $\alpha$ samples versus $\beta$ samples. Why might the resulting plot look like this?
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: References
[1] Dalal, Fowlkes and Hoadley (1989),JASA, 84, 945-957.
[2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking.
[3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. Waterloo, Ontario: 2012. Print.
[4] Fonnesbeck, Christopher. "Building Models." PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. http://pymc-devs.github.com/pymc/modelbuilding.html.
[5] Cronin, Beau. "Why Probabilistic Programming Matters." 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1.
[6] S.P. Brooks, E.A. Catchpole, and B.J.T. Morgan. Bayesian animal survival estimation. Statistical Science, 15: 357–376, 2000
[7] Gelman, Andrew. "Philosophy and the practice of Bayesian statistics." British Journal of Mathematical and Statistical Psychology. (2012): n. page. Web. 2 Apr. 2013.
[8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. "The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models." American Journal of Political Science. 55.No.4 (2011): n. page. Web. 2 Apr. 2013.
End of explanation |
9,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 5
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2.
Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step4: Normalize features
In the house dataset, features vary wildly in their relative magnitude
Step5: Numpy provides a shorthand for computing 2-norms of each column
Step6: To normalize, apply element-wise division
Step7: Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data.
Step8: To test the function, run the following
Step9: Implementing Coordinate Descent with normalized features
We seek to obtain a sparse set of weights by minimizing the LASSO cost function
SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|).
(By convention, we do not include w[0] in the L1 penalty term. We never want to push the intercept to zero.)
The absolute value sign makes the cost function non-differentiable, so simple gradient descent is not viable (you would need to implement a method called subgradient descent). Instead, we will use coordinate descent
Step10: Don't forget to normalize features
Step11: We assign some random set of initial weights and inspect the values of ro[i]
Step12: Use predict_output() to make predictions on this data.
Step13: Compute the values of ro[i] for each feature in this simple model, using the formula given above, using the formula
Step14: QUIZ QUESTION
Recall that, whenever ro[i] falls between -l1_penalty/2 and l1_penalty/2, the corresponding weight w[i] is sent to zero. Now suppose we were to take one step of coordinate descent on either feature 1 or feature 2. What range of values of l1_penalty would not set w[1] to zero, but would set w[2] to zero, if we were to take a step in that coordinate?
QUIZ QUESTION
What range of values of l1_penalty would set both w[1] and w[2] to zero, if we were to take a step in that coordinate?
So we can say that ro[i] quantifies the significance of the i-th feature
Step15: To test the function, run the following cell
Step16: Cyclical coordinate descent
Now that we have a function that optimizes the cost function over a single coordinate, let us implement cyclical coordinate descent where we optimize coordinates 0, 1, ..., (d-1) in order and repeat.
When do we know to stop? Each time we scan all the coordinates (features) once, we measure the change in weight for each coordinate. If no coordinate changes by more than a specified threshold, we stop.
For each iteration
Step17: Using the following parameters, learn the weights on the sales dataset.
Step18: First create a normalized version of the feature matrix, normalized_simple_feature_matrix
Step19: Then, run your implementation of LASSO coordinate descent
Step20: QUIZ QUESTIONS
1. What is the RSS of the learned model on the normalized dataset?
2. Which features had weight zero at convergence?
Step21: Evaluating LASSO fit with more features
Let us split the sales dataset into training and test sets.
Step22: Let us consider the following set of features.
Step23: First, create a normalized feature matrix from the TRAINING data with these features. (Make sure you store the norms for the normalization, since we'll use them later)
Step24: First, learn the weights with l1_penalty=1e7, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e7, you will need them later.
Step25: QUIZ QUESTION
What features had non-zero weight in this case?
Step26: Next, learn the weights with l1_penalty=1e8, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e8, you will need them later.
Step27: QUIZ QUESTION
What features had non-zero weight in this case?
Step28: Finally, learn the weights with l1_penalty=1e4, on the training data. Initialize weights to all zeros, and set the tolerance=5e5. Call resulting weights weights1e4, you will need them later. (This case will take quite a bit longer to converge than the others above.)
Step29: QUIZ QUESTION
What features had non-zero weight in this case?
Step30: Rescaling learned weights
Recall that we normalized our feature matrix, before learning the weights. To use these weights on a test set, we must normalize the test data in the same way.
Alternatively, we can rescale the learned weights to include the normalization, so we never have to worry about normalizing the test data
Step31: To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then
Step32: Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix | Python Code:
import graphlab
Explanation: Regression Week 5: LASSO (coordinate descent)
In this notebook, you will implement your very own LASSO solver via coordinate descent. You will:
* Write a function to normalize features
* Implement coordinate descent for LASSO
* Explore effects of L1 penalty
Fire up graphlab create
Make sure you have the latest version of graphlab (>= 1.7)
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to int, before using it below
sales['floors'] = sales['floors'].astype(int)
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
print features
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2.
End of explanation
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
Explanation: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights:
End of explanation
X = np.array([[3.,5.,8.],[4.,12.,15.]])
print X
Explanation: Normalize features
In the house dataset, features vary wildly in their relative magnitude: sqft_living is very large overall compared to bedrooms, for instance. As a result, weight for sqft_living would be much smaller than weight for bedrooms. This is problematic because "small" weights are dropped first as l1_penalty goes up.
To give equal considerations for all features, we need to normalize features as discussed in the lectures: we divide each feature by its 2-norm so that the transformed feature has norm 1.
Let's see how we can do this normalization easily with Numpy: let us first consider a small matrix.
End of explanation
norms = np.linalg.norm(X, axis=0) # gives [norm(X[:,0]), norm(X[:,1]), norm(X[:,2])]
print norms
Explanation: Numpy provides a shorthand for computing 2-norms of each column:
End of explanation
print X / norms # gives [X[:,0]/norm(X[:,0]), X[:,1]/norm(X[:,1]), X[:,2]/norm(X[:,2])]
Explanation: To normalize, apply element-wise division:
End of explanation
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
return feature_matrix / norms, norms
Explanation: Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data.
End of explanation
features, norms = normalize_features(np.array([[3.,6.,9.],[4.,8.,12.]]))
print features
# should print
# [[ 0.6 0.6 0.6]
# [ 0.8 0.8 0.8]]
print norms
# should print
# [5. 10. 15.]
Explanation: To test the function, run the following:
End of explanation
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
Explanation: Implementing Coordinate Descent with normalized features
We seek to obtain a sparse set of weights by minimizing the LASSO cost function
SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|).
(By convention, we do not include w[0] in the L1 penalty term. We never want to push the intercept to zero.)
The absolute value sign makes the cost function non-differentiable, so simple gradient descent is not viable (you would need to implement a method called subgradient descent). Instead, we will use coordinate descent: at each iteration, we will fix all weights but weight i and find the value of weight i that minimizes the objective. That is, we look for
argmin_{w[i]} [ SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|) ]
where all weights other than w[i] are held to be constant. We will optimize one w[i] at a time, circling through the weights multiple times.
1. Pick a coordinate i
2. Compute w[i] that minimizes the cost function SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|)
3. Repeat Steps 1 and 2 for all coordinates, multiple times
For this notebook, we use cyclical coordinate descent with normalized features, where we cycle through coordinates 0 to (d-1) in order, and assume the features were normalized as discussed above. The formula for optimizing each coordinate is as follows:
┌ (ro[i] + lambda/2) if ro[i] < -lambda/2
w[i] = ├ 0 if -lambda/2 <= ro[i] <= lambda/2
└ (ro[i] - lambda/2) if ro[i] > lambda/2
where
ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ].
Note that we do not regularize the weight of the constant feature (intercept) w[0], so, for this weight, the update is simply:
w[0] = ro[i]
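To make the soft-thresholding rule above concrete, here is a tiny numeric illustration (a minimal sketch, separate from the assignment's own implementation further below):
def soft_threshold(ro_i, l1_penalty):
    # piecewise update from above (regularized weights only)
    if ro_i < -l1_penalty / 2.:
        return ro_i + l1_penalty / 2.
    elif ro_i > l1_penalty / 2.:
        return ro_i - l1_penalty / 2.
    else:
        return 0.

print(soft_threshold(3.0, 4.0))   # 1.0 -- large ro survives, shrunk toward zero
print(soft_threshold(1.5, 4.0))   # 0.0 -- small ro is zeroed out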
Effect of L1 penalty
Let us consider a simple model with 2 features:
End of explanation
simple_feature_matrix, norms = normalize_features(simple_feature_matrix)
Explanation: Don't forget to normalize features:
End of explanation
weights = np.array([1., 4., 1.])
Explanation: We assign some random set of initial weights and inspect the values of ro[i]:
End of explanation
prediction = predict_output(simple_feature_matrix, weights)
prediction
Explanation: Use predict_output() to make predictions on this data.
End of explanation
simple_feature_matrix.shape[1]
def calculate_ro(feature_matrix, weights, output, prediction):
    # ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ]
    ro = []
    for i in xrange(feature_matrix.shape[1]):
        # use the feature_matrix argument (not the global simple_feature_matrix)
        ro.append(np.dot(output - prediction + weights[i] * feature_matrix[:, i], feature_matrix[:, i]))
    return ro
ro = calculate_ro(simple_feature_matrix, weights, output, prediction)
ro
Explanation: Compute the values of ro[i] for each feature in this simple model, using the formula given above, using the formula:
ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ]
Hint: You can get a Numpy vector for feature_i using:
simple_feature_matrix[:,i]
End of explanation
def lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty):
# compute prediction
prediction = predict_output(feature_matrix, weights)
# compute ro[i] = SUM[ [feature_i]*(output - prediction + weight[i]*[feature_i]) ]
ro_i = np.dot((output - prediction + weights[i] * feature_matrix[:, i]), np.transpose(feature_matrix[:, i]))
if i == 0: # intercept -- do not regularize
new_weight_i = ro_i
elif ro_i < -l1_penalty/2.:
new_weight_i = ro_i + l1_penalty/2
elif ro_i > l1_penalty/2.:
new_weight_i = ro_i - l1_penalty/2
else:
new_weight_i = 0.
return new_weight_i
Explanation: QUIZ QUESTION
Recall that, whenever ro[i] falls between -l1_penalty/2 and l1_penalty/2, the corresponding weight w[i] is sent to zero. Now suppose we were to take one step of coordinate descent on either feature 1 or feature 2. What range of values of l1_penalty would not set w[1] to zero, but would set w[2] to zero, if we were to take a step in that coordinate?
QUIZ QUESTION
What range of values of l1_penalty would set both w[1] and w[2] to zero, if we were to take a step in that coordinate?
So we can say that ro[i] quantifies the significance of the i-th feature: the larger ro[i] is, the more likely it is for the i-th feature to be retained.
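To reason about these thresholds numerically -- a minimal sketch reusing the ro list computed above, not the graded answer itself -- note that w[i] is zeroed exactly when l1_penalty >= 2*|ro[i]|:
# w[i] is zeroed whenever -l1_penalty/2 <= ro[i] <= l1_penalty/2,
# i.e. whenever l1_penalty >= 2 * abs(ro[i])
for i in [1, 2]:
    print("w[%d] is sent to zero once l1_penalty >= %.3e" % (i, 2 * abs(ro[i])))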
Single Coordinate Descent Step
Using the formula above, implement coordinate descent that minimizes the cost function over a single feature i. Note that the intercept (weight 0) is not regularized. The function should accept feature matrix, output, current weights, l1 penalty, and index of feature to optimize over. The function should return new weight for feature i.
End of explanation
# should print 0.425558846691
import math
print lasso_coordinate_descent_step(1, np.array([[3./math.sqrt(13),1./math.sqrt(10)],[2./math.sqrt(13),3./math.sqrt(10)]]),
np.array([1., 1.]), np.array([1., 4.]), 0.1)
Explanation: To test the function, run the following cell:
End of explanation
def lasso_cyclical_coordinate_descent(feature_matrix, output, initial_weights, l1_penalty, tolerance):
not_converged = True
itr = 1
weights = initial_weights
while (not_converged):
changes = []
for i in range(len(weights)):
old_weights_i = weights[i] # remember old value of weight[i], as it will be overwritten
# the following line uses new values for weight[0], weight[1], ..., weight[i-1]
# and old values for weight[i], ..., weight[d-1]
weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)
# use old_weights_i to compute change in coordinate
changes.append(abs(old_weights_i - weights[i]))
# check the stopping criteria
if max(changes) < tolerance:
not_converged = False
itr += 1
# return the weights
return weights
Explanation: Cyclical coordinate descent
Now that we have a function that optimizes the cost function over a single coordinate, let us implement cyclical coordinate descent where we optimize coordinates 0, 1, ..., (d-1) in order and repeat.
When do we know to stop? Each time we scan all the coordinates (features) once, we measure the change in weight for each coordinate. If no coordinate changes by more than a specified threshold, we stop.
For each iteration:
1. As you loop over features in order and perform coordinate descent, measure how much each coordinate changes.
2. After the loop, if the maximum change across all coordinates falls below the tolerance, stop. Otherwise, go back to step 1.
Return weights
IMPORTANT: when computing a new weight for coordinate i, make sure to incorporate the new weights for coordinates 0, 1, ..., i-1. One good way is to update your weights variable in-place. See following pseudocode for illustration.
```
for i in range(len(weights)):
old_weights_i = weights[i] # remember old value of weight[i], as it will be overwritten
# the following line uses new values for weight[0], weight[1], ..., weight[i-1]
# and old values for weight[i], ..., weight[d-1]
weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)
# use old_weights_i to compute change in coordinate
...
```
End of explanation
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
initial_weights = np.zeros(3)
l1_penalty = 1e7
tolerance = 1.0
Explanation: Using the following parameters, learn the weights on the sales dataset.
End of explanation
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
(normalized_simple_feature_matrix, simple_norms) = normalize_features(simple_feature_matrix) # normalize features
Explanation: First create a normalized version of the feature matrix, normalized_simple_feature_matrix
End of explanation
weights = lasso_cyclical_coordinate_descent(normalized_simple_feature_matrix, output,
initial_weights, l1_penalty, tolerance)
Explanation: Then, run your implementation of LASSO coordinate descent:
End of explanation
weights
# find rss
prediction = predict_output(normalized_simple_feature_matrix, weights)
error = prediction - output
error_squared = error * error
rss = error_squared.sum()
rss
Explanation: QUIZ QUESTIONS
1. What is the RSS of the learned model on the normalized dataset?
2. Which features had weight zero at convergence?
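A quick way to answer the second question directly from the learned vector (a sketch; the column order is assumed to be the intercept followed by simple_features):
```
print [name for name, w in zip(['intercept'] + simple_features, weights) if w == 0.]
```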
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Evaluating LASSO fit with more features
Let us split the sales dataset into training and test sets.
End of explanation
all_features = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated']
Explanation: Let us consider the following set of features.
End of explanation
my_output = 'price'
(train_feature_matrix, train_output) = get_numpy_data(train_data, all_features, my_output)
(normalized_train_feature_matrix, norms) = normalize_features(train_feature_matrix) # normalize features
Explanation: First, create a normalized feature matrix from the TRAINING data with these features. (Make sure you store the norms from the normalization, since we'll use them later.)
End of explanation
initial_weights = np.zeros(train_feature_matrix.shape[1])
l1_penalty = 1e7
tolerance = 1.0
initial_weights
weights1e7 = lasso_cyclical_coordinate_descent(normalized_train_feature_matrix, train_output,
initial_weights, l1_penalty, tolerance)
Explanation: First, learn the weights with l1_penalty=1e7, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e7, you will need them later.
End of explanation
weights1e7[11]
all_features
Explanation: QUIZ QUESTION
What features had non-zero weight in this case?
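To list them directly (a sketch; column 0 is assumed to be the constant/intercept, followed by all_features in order -- the same check applies to weights1e8 and weights1e4 below):
```
print [name for name, w in zip(['intercept'] + all_features, weights1e7) if w != 0.]
```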
End of explanation
initial_weights = np.zeros(train_feature_matrix.shape[1])
l1_penalty = 1e8
tolerance = 1.0
weights1e8 = lasso_cyclical_coordinate_descent(normalized_train_feature_matrix, train_output,
initial_weights, l1_penalty, tolerance)
Explanation: Next, learn the weights with l1_penalty=1e8, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e8, you will need them later.
End of explanation
weights1e8[11]
Explanation: QUIZ QUESTION
What features had non-zero weight in this case?
End of explanation
initial_weights = np.zeros(train_feature_matrix.shape[1])
l1_penalty = 1e4
tolerance = 1.0
weights1e4 = lasso_cyclical_coordinate_descent(normalized_train_feature_matrix, train_output,
initial_weights, l1_penalty, tolerance)
Explanation: Finally, learn the weights with l1_penalty=1e4, on the training data. Initialize weights to all zeros, and set the tolerance=5e5. Call resulting weights weights1e4, you will need them later. (This case will take quite a bit longer to converge than the others above.)
End of explanation
weights1e4[11]
Explanation: QUIZ QUESTION
What features had non-zero weight in this case?
End of explanation
weights1e4_normalized = weights1e4 / norms
weights1e7_normalized = weights1e7 / norms
weights1e8_normalized = weights1e8 / norms
print weights1e7_normalized[3]
Explanation: Rescaling learned weights
Recall that we normalized our feature matrix, before learning the weights. To use these weights on a test set, we must normalize the test data in the same way.
Alternatively, we can rescale the learned weights to include the normalization, so we never have to worry about normalizing the test data:
In this case, we must scale the resulting weights so that we can make predictions with original features:
1. Store the norms of the original features to a vector called norms:
features, norms = normalize_features(features)
2. Run Lasso on the normalized features and obtain a weights vector
3. Compute the weights for the original features by performing element-wise division, i.e.
weights_normalized = weights / norms
Now, we can apply weights_normalized to the test data, without normalizing it!
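A quick sanity check of this equivalence (a sketch; it assumes normalize_features does not modify its input in place):
```
pred_norm = predict_output(normalized_train_feature_matrix, weights1e7)
pred_raw = predict_output(train_feature_matrix, weights1e7_normalized)
print np.allclose(pred_norm, pred_raw)  # should print True
```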
Create a normalized version of each of the weights learned above. (weights1e4, weights1e7, weights1e8).
End of explanation
(test_feature_matrix, test_output) = get_numpy_data(test_data, all_features, 'price')
Explanation: To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then:
print normalized_weights1e7[3]
should return 161.31745624837794.
Evaluating each of the learned models on the test data
Let's now evaluate the three models on the test data:
End of explanation
# find rss on test
prediction = predict_output(test_feature_matrix, weights1e4_normalized)
error = prediction - test_output
error_squared = error * error
rss1e4 = error_squared.sum()
rss1e4
prediction = predict_output(test_feature_matrix, weights1e7_normalized)
error = prediction - test_output
error_squared = error * error
rss1e7 = error_squared.sum()
rss1e7
prediction = predict_output(test_feature_matrix, weights1e8_normalized)
error = prediction - test_output
error_squared = error * error
rss1e8 = error_squared.sum()
rss1e8
Explanation: Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix:
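To compare the three models at a glance once the cells above have run (a sketch):
```
for name, rss in [('1e4', rss1e4), ('1e7', rss1e7), ('1e8', rss1e8)]:
    print 'l1_penalty = %s : RSS on test data = %.4g' % (name, rss)
```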
End of explanation |
9,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning - Part I
Theory
Two important figures from Chapter 5
Step1: Randomly select 20% of the samples as test set.
Step2: Using cross-validation, try out $d=1,2,\ldots,20$.
Use accuracy to determine the train/test error.
Step3: The cross-validation results can be loaded into a pandas DataFrame. We see that the model starts overfitting for polynomial degrees $>3$.
Step4: Finally, train the model with lowest mean test error in cross-validation on all training data and determine the error on the test set. | Python Code:
import numpy as np
import pandas as pd
from sklearn import svm, datasets
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split
# load iris data
iris = datasets.load_iris()
X = iris.data
y = iris.target
X[:3]
y[:3]
Explanation: Deep Learning - Part I
Theory
Two important figures from Chapter 5:
Practical
Training an SVM in scikit-learn and choosing its hyperparameters using cross-validation. We are using a polynomial kernel and are tuning the polynomial degree of the kernel:
$
\kappa(\mathbf{u}, \mathbf{v}) = (\mathbf{u}^T \mathbf{v} + c)^d
$
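As a quick numerical illustration of this kernel (made-up vectors; scikit-learn's pairwise helper is shown only for comparison):
```
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel
u = np.array([[1.0, 2.0, 3.0, 4.0]])
v = np.array([[0.5, 1.0, 1.5, 2.0]])
d, c = 3, 1.0
print((u.dot(v.T) + c) ** d)                                  # (u^T v + c)^d with gamma taken as 1
print(polynomial_kernel(u, v, degree=d, gamma=1.0, coef0=c))  # should agree
```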
We are using the Iris flower data set first introduced by Ronald Fisher https://en.wikipedia.org/wiki/Iris_flower_data_set which contains:
150 samples (50 per class)
4 features (Sepal length, Sepal width, Petal length, Petal width)
3 classes
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Explanation: Randomly select 20% of the samples as test set.
End of explanation
parameters = {'degree':list(range(1, 21))}
svc = svm.SVC(kernel='poly')
clf = GridSearchCV(svc, parameters, scoring='accuracy')
clf.fit(X_train, y_train)
Explanation: Using cross-validation, try out $d=1,2,\ldots,20$.
Use accuracy to determine the train/test error.
End of explanation
pd.DataFrame(clf.cv_results_)
Explanation: The cross-validation results can be loaded into a pandas DataFrame. We see that the model starts overfitting for polynomial degrees $>3$.
End of explanation
e = clf.best_estimator_  # GridSearchCV (refit=True by default) has already refit the best model on all training data
e
y_pred = e.predict(X_test)
accuracy_score(y_test, y_pred)
Explanation: Finally, train the model with lowest mean test error in cross-validation on all training data and determine the error on the test set.
End of explanation |
9,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Before doing anything, define a simple object which will allow us to perform calculations using the properties of air. The object air is defined in the package atmosphere. The object air is a child of the abstract class gas which has two properties, temperature and pressure. The default conditions are $T=20^{\circ}$C and $P=1013.25$ mb.
Step1: DMA Calculations
Below are the equations used to define the calculations used by the DMA. First, we calculate the electic mobility $Z_e$ as
\begin{equation}
Z_e = \frac{e*C_c}{3\pi\mu{D}}
\end{equation}
where $\mu$ is the viscosity as a function of temperature $T$, $C_c$ is the Cunningham correction factor for particles in the transition regime and $e$ is the elementary charge.
Viscocity
In the equation above, the viscocity of air is calcualted as
\begin{equation}
\mu = \mu_0\frac{C+T_0}{C+T}\left(\frac{T}{T_0}\right)^{1.5}
\end{equation}
This is <a href = 'http
Step2: Cunningham Correction Factor
The Cunningham correction factor from the equation above is a function of particle diameter and the mean free path of the carrier gas and can be calculated as
\begin{equation}
C_c = \left[1.05\exp\left(-0.39\frac{D}{\lambda}\right)+2.34\right]\times\frac{\lambda}{D}+1
\end{equation}
for diameters less than 100 nm and
\begin{equation}
C_c = \frac{2.25\lambda}{D}+1
\end{equation}
where $D$ is the particle diameter in $\mu$m and $\lambda$ is the mean free path of the gas which can be calculated as
\begin{equation}
\lambda = \lambda_0\frac{P_0}{P}
\end{equation}
Here, $P_0$ and $\lambda_0$ define reference values. At 0.7 atm the mean free path is 66 nm.
Step3: Now, we can calculate the electric mobility
Step4: Calculating the Diameter as Function of DMA Voltage
If we know the dimensions of the DMA, we may now calculate the expected particle diameter as a function of the flow rates and the DMA voltage. Since the function is implicit (the mobility is a function of diameter), the diameter must be solved for iteratively. This can be calculated as in Knutson and Whitby (1975) by equating the center-rod voltage to the electical mobility
Step5: Retrieving a Size Distribution from Scan Data
From the paper Stolzenburg and McMurry [2008], we can retrieve the size distribution using the follow equation (27, from the paper)
Step6: Aligning the Data
Reading the Data using PANDAS
Step7: Once the data is read in, we will need to align the up and down scans. But first, we will need to find where the scans begin and end.
Step8: Truncating and Padding the Data
Step9: Retrieving the Charge Distribution
The following equations were pulled from two different sources. For particles smaller than 1 micron, we can use the Wiedensohler [1988] approximation of the bipolar charge distribution. This looks like
\begin{equation}
f\left(N\right)=10^{\sum_{i=0}^{5}a_i\left(N\right)\left(\log{\frac{D_p}{nm}}\right)^i}
\end{equation}
where the constants defined by $a_i$ are given in the paper and in the code below, $D_p$ is the particle size in nm and $N$ is the number of charges.
For larger particles, use Gunn. The solution in this case is
\begin{equation}
f\left(N\right)=\frac{e}{\sqrt{4\pi^2\varepsilon_0D_pkT}}\exp\left[{-\frac{\left[N-\frac{2\pi\varepsilon_0D_pkT}{e^2}\ln\left(\frac{c_{NI+}Z_{I+}}{c_{NI-}Z_{I-}}\right)\right]^2}{\frac{4\pi\varepsilon_0D_pkT}{e^2}}}\right]
\end{equation}
where $\varepsilon_0$ is the dielectric constant, $e$ is the elementary electronic charge, $k$ is Boltzman's constant and $c_{I\pm}$ and $Z_{I\pm}$ are the ion concentration and mobility respectively. The concentration of negative and positive ions is assumed to be equal and the ratio of the positive to negative mobility was measured to be 0.875 by Wiedensohler et al. [1986].
Now, let's plot the charging efficiency over a multiplicity of diameters to see if we have this right.
Step10: This plot compares favorably with the results from Table 2 in Wiedensohler [1987]. The code for the charging efficiency was taken from the Igor SMPS code and some of the coefficients are slightly different, so the results should be slightly different.
Solve for Diameter with a Known Mobility
Example
What follow is an example of how to use the function above. We can test this using the functions above for the electic mobility. Start with a 213 nm particle at 23 degrees Celsius and 850 mb.
Step11: The resulting mobility is $8.61\times10^{-9}$ m$^2$/(Vs). Using this, we can plug the mobility in and solve using a starting diameter of 100 nm.
Step12: The result is a particle of diameter 213 nm.
Solve for the FWHM at the given Diameter
Step13: Example of Using FWHM Function
The following is an example of how to use the transfer function to find the FWHM.
Step14: The result is a width that is 5.64 nm. This compares favorably with Chuck's work which shows a width of 5.61nm for the same conditions.
Bringing it all together
Now that we have all of the raw functionality in order, we can begin to put a distribution together by looping.
Step15: Correct for Multiple Charges
In the below function, we will loop through the concentrations starting from the lowest diameter and look for multiply charged particles that would have been mistakenly classified as larger particles. These misclassified particles will be removed from the bins and of those larger particles and placed in the current bin. To do this, we will have to search for the bin which contains the nearest value to the multiply charged value. This can be achieved by using a search function demonstrated below
Step16: This function will return both the index and the value in a list.
Step17: Demonstation of Correction
Continuing with the above examples from the file. | Python Code:
air = atmos.Air()
Explanation: Before doing anything, define a simple object which will allow us to perform calculations using the properties of air. The object air is defined in the package atmosphere. The object air is a child of the abstract class gas which has two properties, temperature and pressure. The default conditions are $T=20^{\circ}$C and $P=1013.25$ mb.
End of explanation
T0 = 275 # Initial temperature for plotting
T1 = 325 # Final temperature for plotting
def mu(T):
air.t = T
return air.mu()
# range of viscocity covering the range of temperature above
z = [mu(i) for i in range(T0,T1)]
# List of temperatures in Kelvin
x = list(range(275,325))
# Convert the list of temperatures to degrees Celsius
x = [float(i)-273.15 for i in x]
# Plot the data above
fig = plt.figure()
ax = plt.axes()
plt.ylabel(r'$\mu$ (Pa-s)')
plt.xlabel(r'$T$ (C)')
plt.title('Viscocity as a function of temperature')
ax.yaxis.set_major_formatter(matplotlib.ticker.ScalarFormatter(useOffset=False))
ax.grid(True)
plt.plot(x, z)
Explanation: DMA Calculations
Below are the equations used to define the calculations used by the DMA. First, we calculate the electric mobility $Z_e$ as
\begin{equation}
Z_e = \frac{e*C_c}{3\pi\mu{D}}
\end{equation}
where $\mu$ is the viscosity as a function of temperature $T$, $C_c$ is the Cunningham correction factor for particles in the transition regime and $e$ is the elementary charge.
Viscosity
In the equation above, the viscosity of air is calculated as
\begin{equation}
\mu = \mu_0\frac{C+T_0}{C+T}\left(\frac{T}{T_0}\right)^{1.5}
\end{equation}
This is <a href = 'http://en.wikipedia.org/wiki/Viscosity#Effect_of_temperature_on_the_viscosity_of_a_gas'>Sutherland's formula</a>. In the above equation, the values are as follows:
$C$ is Sutherland's constant. For air this value is 120.
$T_0$ is the reference temperature. In this case, we use 291.15 K.
$\mu_0$ is the corresponding reference viscosity. For the given reference temperature, this value is 18.27e-6 Pa-s.
The cell below shows the calculation with a plot for a range of temperatures from 2 to 52 degrees Celsius.
End of explanation
D0 = 50 # Starting diameter
D1 = 500 # Final diameter
air.T = 20
# Get a range of Cc for plotting
cc_range = [aerosol.cc(i,air) for i in range(D0,D1)]
# Range of diameter over which to plot
D = list(range(D0,D1))
fig = plt.figure()
ax = plt.axes()
plt.ylabel(r'$C_c$')
plt.xlabel(r'$D$ (nm)')
plt.title('Cunningham Correction Factor as a function of diameter at sea level')
ax.yaxis.set_major_formatter(matplotlib.ticker.ScalarFormatter(useOffset=False))
plt.plot(D,cc_range)
ax.grid(True)
Explanation: Cunningham Correction Factor
The Cunningham correction factor from the equation above is a function of particle diameter and the mean free path of the carrier gas and can be calculated as
\begin{equation}
C_c = \left[1.05\exp\left(-0.39\frac{D}{\lambda}\right)+2.34\right]\times\frac{\lambda}{D}+1
\end{equation}
for diameters less than 100 nm and
\begin{equation}
C_c = \frac{2.25\lambda}{D}+1
\end{equation}
where $D$ is the particle diameter in $\mu$m and $\lambda$ is the mean free path of the gas which can be calculated as
\begin{equation}
\lambda = \lambda_0\frac{P_0}{P}
\end{equation}
Here, $P_0$ and $\lambda_0$ define reference values. At 0.7 atm the mean free path is 66 nm.
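Expressed as code, the pressure scaling is a one-liner (using the reference pair quoted above):
```
lambda_0, P_0 = 66.0, 0.7          # reference mean free path (nm) and pressure (atm) quoted above
mfp = lambda P_atm: lambda_0 * P_0 / P_atm
print(mfp(0.7))                    # recovers the 66 nm reference value
```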
End of explanation
Zrange = [aerosol.z(D, air,1) for D in range (D0,D1)]
D = list(range(D0,D1))
fig = plt.figure()
ax = plt.axes()
plt.ylabel(r'$Z$ (m$^2$/V$\times$s)')
plt.xlabel(r'$D$ (nm)')
plt.title('Electrical mobility for singly charged particles')
ax.yaxis.set_major_formatter(matplotlib.ticker.ScalarFormatter(useOffset=False))
plt.plot(D,Zrange)
ax.grid(True)
Explanation: Now, we can calculate the electric mobility:
End of explanation
noaa_dma = dma.NoaaWide()
# Flows in lpm
qc = 5
qm = 5
# Maximum number of iterations for Newton-Raphson
maxit = 1000
# Low diameter for range
Vlow = 5
# High diameter for range
Vhigh = 2500
# Define range
Vrange = list(range(Vlow,Vhigh))
air.T = 20
air.P = 840
xrange = [noaa_dma.v2d(i, air, qc,qm) for i in range(Vlow,Vhigh)]
fig = plt.figure()
ax = plt.axes()
plt.ylabel(r'$D$ (nm)')
plt.xlabel(r'Voltage (V)')
plt.title('Expected DMA Diameter: Qc=Qm=5, n = 1, P = 840, T = 20')
ax.yaxis.set_major_formatter(matplotlib.ticker.ScalarFormatter(useOffset=False))
plt.plot(Vrange,xrange)
ax.grid(True)
Explanation: Calculating the Diameter as Function of DMA Voltage
If we know the dimensions of the DMA, we may now calculate the expected particle diameter as a function of the flow rates and the DMA voltage. Since the function is implicit (the mobility is a function of diameter), the diameter must be solved for iteratively. This can be calculated as in Knutson and Whitby (1975) by equating the center-rod voltage to the electrical mobility:
\begin{equation}
Z = \frac{q_c+q_m}{4\pi\Lambda{V}}
\end{equation}
The flows, $q_c$ and $q_m$, are the sheath flows at the entrance and exit. $\Lambda$ is a DMA constant given by
\begin{equation}
\Lambda = \frac{L}{\ln\left(\frac{r_{inner}}{r_{outer}}\right)}
\end{equation}
Here, $L$ is the length of the DMA column and $r_{inner}$ and $r_{outer}$ are the inner and outer radii of the annular gap in the DMA. For the NOAA wide, these values are as follows:
$L=0.34054$
$r_i = 0.0312$
$r_o = 0.03613$
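Plugging these numbers in gives the magnitude of the DMA constant (a quick check; the sign simply depends on whether $r_{inner}/r_{outer}$ or its inverse goes into the logarithm):
```
import numpy as np
L, r_i, r_o = 0.34054, 0.0312, 0.03613
print(abs(L / np.log(r_i / r_o)))   # roughly 2.3 m
```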
End of explanation
dfile = 'C:/Users/mrichardson/Documents/HAGIS/SMPS/Scans/SCAN_20150306_06_05_16.txt'
# read in a file and parse the column labeled 'Date_Time' as a date object.
# Index on the first column with the header being in the third row.
# This file uses the EOL constant and therefore has to use the lineterminator = newline
fdata = pd.read_csv(dfile, parse_dates = 'Date_Time', index_col = 0, header = 2, lineterminator = '\n')
# Show the list of column headers
list(fdata.columns.values)
# Plot the diameter as a function of time
fdata['DMA_Diam'].plot()
# Plot the
fdata['CPC_1_Cnt'].plot()
Explanation: Retrieving a Size Distribution from Scan Data
From the paper Stolzenburg and McMurry [2008], we can retrieve the size distribution using the following equation (equation 27 in the paper):
\begin{equation}
\frac{dN}{d\ln{D_{p1}}}\bigg|_{D^*_{p1}}=\frac{N_1\left(V_1\right)a_1^*}{\frac{Q_{a1}}{Q_{s1}}\beta_1\left(1+\delta_1\right)f_c\left(D^*_{p1},1\right)\eta_{CPC}\left(D^*_{p1}\right)}
\end{equation}
This equation is accurate only under certain circumstances. Here,
\begin{equation}
a^*=\left(\frac{-d\ln{Z_p}}{d\ln{D_p}}\right)\bigg|_{D^*_p}
\end{equation}
and $D_p$ is the particle diameter, $D^*_p$ is the particle diameter associated with the centroid transfer function $Z^*_p$, $Q_a$ is the aerosol flow rate, $Q_s$ is the sheath flow rate, $\eta_{CPC}$ is the CPC counting efficiency and $N\left(V\right)$ is the CPC concentration at voltage $V$.
In addition, we have the two ratios
\begin{equation}
\delta = \frac{Q_s-Q_a}{Q_s+Q_a}
\end{equation}
and
\begin{equation}
\beta = \frac{Q_s+Q_a}{Q_m+Q_c}
\end{equation}
where $Q_m$ is the main excess air outlet flow rate and $Q_c$ is the clean sheath inlet flow rate.
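Purely as a numerical illustration of these two ratios (placeholder flows, not values read from the scan file):
```
Q_a, Q_s, Q_m, Q_c = 1.0, 1.0, 5.0, 5.0   # lpm, illustrative only
delta = (Q_s - Q_a) / (Q_s + Q_a)
beta = (Q_s + Q_a) / (Q_m + Q_c)
print(delta, beta)                        # delta = 0.0, beta = 0.2
```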
Reading in a DMA File
End of explanation
# Retrieve the meta data from the header
meta_data = pd.read_csv(dfile,header = 0, lineterminator = '\n', nrows = 1)
list(meta_data.columns.values)
Explanation: Aligning the Data
Reading the Data using PANDAS
End of explanation
print(str(meta_data.Date[0]))
# Number of seconds in scan
tscan = meta_data['Scan_Time'].values[0]
tdwell = meta_data['Dwell_Time'].values[0]
vhigh = meta_data['High_Voltage'].values[0]
vlow = meta_data['Low_Voltage'].values[0]
dhigh = meta_data['High_Diameter'].values[0]
dlow = meta_data['Low_Diameter'].values[0]
Explanation: Once the data is read in, we will need to align the up and down scans. But first, we will need to find where the scans begin and end.
End of explanation
# Retrieve the concentration and diameters of interest
cpc_cnt = fdata['CPC_1_Cnt'].values/fdata['CPC_Flw'].values
#d = fdata['DMA_Diam'].values;
up_data = fdata
# Reverse the entire data frame
down_data = fdata.iloc[::-1]
cpc_up = fdata['CPC_1_Cnt'].values[0:tscan]/fdata['CPC_Flw'].values[0:tscan]
cpc_down = down_data['CPC_1_Cnt'].values[(tdwell):(tscan+tdwell)]/down_data['CPC_Flw'].values[(tdwell):(tscan+tdwell)]
# Get the lag
corr = np.correlate(cpc_up,cpc_down,mode = 'full')
corr = corr[corr.size/2:]
# Here is the lag as applied to the two sets. The lag is divided by two because
# the lag applies to both sets (up and down)
delta = 13 # a fudge factor for alignment purposes
f=floor(corr.argmax(axis=0)/2+ delta)
print('The lag is ' + str(f))
# Shift the up data with the number of zeros padding on the end equal to the lag
cpc_up = np.pad(fdata['CPC_1_Cnt'].values[f:tscan]/fdata['CPC_Flw'].values[f:tscan],[0,f], 'constant', constant_values=(0,0));
cpc_up[np.where(np.isinf(cpc_up))]=0.0
cpc_up[np.where(np.isnan(cpc_up))]=0.0
# Padding the down scan is trickier - if the parameter f (should be the lag in the correlation)
# is larger than the dwell time, we will have a negative resize parameter - this is no good.
# Pad the front with the number of zeros that goes beyond the end (front in the reveresed array).
# This makes sense. I guess.
pad = 0
if (f>tdwell):
pad = f-tdwell
f = tdwell
# Shift the down data so that we pad the front and back appropriately
cpc_down = np.pad(down_data['CPC_1_Cnt'].values[(tdwell-f):(tscan+tdwell-(f+pad))]/
down_data['CPC_Flw'].values[(tdwell-f):(tscan+tdwell-(f+pad))],
[0,pad], 'constant', constant_values= (0,0))
# Truncate the data from the up scan
up_data = up_data.iloc[:tscan]
# Get the voltage from the up scan
vup = up_data['DMA_Volts'].values
# Get the mean of all the columns in up_data
mup = up_data.mean(axis=0)
smooth_p = 0.3
smooth_up = sm.nonparametric.lowess(cpc_up,up_data.DMA_Diam.values, frac = smooth_p, it = 1, missing='none')
# Truncate the down data. In Chuck's code, this is tdwell:tscan+tdwell.
# But, I don't understand why this would be since we are flipping the
# distribution. Seems to me that the conditions of interest should
# be in the range of 0 to tscan.
cpc_down[np.where(np.isinf(cpc_down))]=0.0
cpc_down[np.where(np.isnan(cpc_down))]=0.0
down_data = down_data.iloc[:tscan]
smooth_down = sm.nonparametric.lowess(cpc_down,down_data['DMA_Diam'].values, frac = smooth_p, missing='none')
# Get the down voltages.
vdown = down_data['DMA_Volts'].values
# Get the mean of all the columns in down_data
mdown = down_data.mean(axis=0)
air.t = mup.Aer_Temp_C
air.p = mup.Aer_Pres_PSI
cdhigh = noaa_dma.v2d(vhigh,air, mup.Sh_Q_VLPM, mup.Sh_Q_VLPM)
cdlow = noaa_dma.v2d(vlow,air, mup.Sh_Q_VLPM, mup.Sh_Q_VLPM)
# Number of bins for the interpolated matrix of diameters
numbins = 300
# Array for scan to interpolate diameters over; base is e
diam_interp = np.logspace(np.log10(1),np.log10(1000), numbins)
#fig = plt.figure()
#ax = plt.axes()
#plt.plot(ndiam,x,'.')
#ax.set_xscale('log')
#ax.xaxis
# plot the truncated data
plt.plot(cpc_up, 'r.', cpc_down, 'b.',smooth_up[:,1], 'r+', smooth_down[:,1], 'b+')
Explanation: Truncating and Padding the Data
End of explanation
# Set the diameter range to 50 - 500 nm
dp0 = 50
dp1 = 500
plt.plot(list(range(dp0,dp1)),[aerosol.ndistr(i) for i in range(dp0,dp1)])
plt.ylabel(r'Charging Efficiency')
plt.xlabel(r'$D$ (nm)')
plt.title('Charging efficiency of singly charged particles')
plt.grid()
Explanation: Retrieving the Charge Distribution
The following equations were pulled from two different sources. For particles smaller than 1 micron, we can use the Wiedensohler [1988] approximation of the bipolar charge distribution. This looks like
\begin{equation}
f\left(N\right)=10^{\sum_{i=0}^{5}a_i\left(N\right)\left(\log{\frac{D_p}{nm}}\right)^i}
\end{equation}
where the constants defined by $a_i$ are given in the paper and in the code below, $D_p$ is the particle size in nm and $N$ is the number of charges.
For larger particles, use Gunn. The solution in this case is
\begin{equation}
f\left(N\right)=\frac{e}{\sqrt{4\pi^2\varepsilon_0D_pkT}}\exp\left[{-\frac{\left[N-\frac{2\pi\varepsilon_0D_pkT}{e^2}\ln\left(\frac{c_{NI+}Z_{I+}}{c_{NI-}Z_{I-}}\right)\right]^2}{\frac{4\pi\varepsilon_0D_pkT}{e^2}}}\right]
\end{equation}
where $\varepsilon_0$ is the dielectric constant, $e$ is the elementary electronic charge, $k$ is Boltzman's constant and $c_{I\pm}$ and $Z_{I\pm}$ are the ion concentration and mobility respectively. The concentration of negative and positive ions is assumed to be equal and the ratio of the positive to negative mobility was measured to be 0.875 by Wiedensohler et al. [1986].
Now, let's plot the charging efficiency over a multiplicity of diameters to see if we have this right.
End of explanation
air.t = 23
air.p = 850
z = aerosol.z(213,air,1)
print(z)
Explanation: This plot compares favorably with the results from Table 2 in Wiedensohler [1987]. The code for the charging efficiency was taken from the Igor SMPS code and some of the coefficients are slightly different, so the results should be slightly different.
Solve for Diameter with a Known Mobility
Example
What follows is an example of how to use the function above. We can test this using the functions above for the electric mobility. Start with a 213 nm particle at 23 degrees Celsius and 850 mb.
End of explanation
d0 = 1e-9
aerosol.z2d(z,air,1)
Explanation: The resulting mobility is $8.61\times10^{-9}$ m$^2$/(Vs). Using this, we can plug the mobility in and solve using a starting diameter of 100 nm.
End of explanation
'''
Return the full-width, half-max of the transfer function in diameter space.
This implementation ignores diffusion broadening.
@param dp: particle size in nm
@param qa: aerosol flow rate in lpm
@param qs: sheath flow rate in lpm
@param gas: gas object carrying the temperature (degrees Celsius) and pressure (millibars)
@return: Width of transfer function in nm.
'''
def xferFWHM(dp,qa,qs,gas):
beta = float(qa)/float(qs)
# Retrieve the center mobility
Zc = aerosol.z(dp,gas,1)
# Upper bound of the mobility
Zm = (1-beta/2)*Zc
# Lower bound of the mobility
Zp = (1+beta/2)*Zc
return aerosol.z2d(Zm, gas, 1)-aerosol.z2d(Zp, gas, 1)
Explanation: The result is a particle of diameter 213 nm.
Solve for the FWHM at the given Diameter
End of explanation
air.t = 20
air.p = 850
xferFWHM(100,1,10,air)
Explanation: Example of Using FWHM Function
The following is an example of how to use the transfer function to find the FWHM.
End of explanation
# Make sure the conditions are good
air.t = mup.Aer_Temp_C
air.p = mup.Aer_Pres_PSI
# Calculate the diameters to use
sup = [noaa_dma.v2d(i,air, mup.Sh_Q_VLPM, mup.Sh_Q_VLPM) for i in up_data.DMA_Set_Volts.values]
#output_sd = []
ls = len(sup)
dlogd = np.zeros(ls) # calculate dlogd
fwhm = np.zeros(ls) # hold width
dnlogd = np.zeros(ls)
for e,i in enumerate(sup):
try:
fwhm[e] = xferFWHM(i, mup.Aer_Q_VLPM, mup.Sh_Q_VLPM,air )
#output_sd[e] = smooth_up[e,1]
dlogd[e] = np.log10(i+fhwm[e]/2)-np.log10(i-fhwm[e]/2)
#dnlogd[e] = smooth_up[e,1]/dlogd[e]
except (ValueError,ZeroDivisionError):
fwhm[e] = np.nan
print('Handling divide by zero error')
except:
fwhm[e] = np.nan
print('Handling unknown error: ' + str(sys.exc_info()[0]))
print(sup[100])
print(up_data.DMA_Set_Volts.values)
Explanation: The result is a width that is 5.64 nm. This compares favorably with Chuck's work which shows a width of 5.61nm for the same conditions.
Bringing it all together
Now that we have all of the raw functionality in order, we can begin to put a distribution together by looping.
End of explanation
# The following shows two different ways to get the index closest to the value 136.5
index = min(enumerate(sup), key =lambda x: abs(x[1]-136.5))
idx = (np.abs(np.asarray(sup) - 136.5)).argmin()
print(idx)
print(index)
Explanation: Correct for Multiple Charges
In the below function, we will loop through the concentrations starting from the lowest diameter and look for multiply charged particles that would have been mistakenly classified as larger particles. These misclassified particles will be removed from the bins of those larger particles and placed in the current bin. To do this, we will have to search for the bin which contains the nearest value to the multiply charged value. This can be achieved by using a search function demonstrated below:
End of explanation
def chargeCorr(diam,dn,gas,n=3,pos_neg=-1):
'''
Correct the input concentrations for multiple charges.
This function does not return anything as it handles array input by reference.
Parameters
----------
diam: array of float
array of diameters in nm
dn: array of integers
Array of particle concentrations corresponding to diameter 'diam'
gas: gas object
Gas object that defines the properties of the gas
n: int, optional
Number of charges to consider. Default is 3.
pos_neg: int, optional
Positive or negative one indicating whether to consider positive or negative charges.
Default is -1.
Returns
-------
None
'''
dn_removed = np.zeros(len(dn))
single_frac = np.zeros(len(dn))
# Flip both the incoming diamter array and the concentration distribution
rdiam = diam[::-1]
rdn = dn[::-1]
dn_raw = dn
dn_work = dn
dn_raw = dn
# We are working backwards, so we need to have the length to get this all right...
l = len(dn)-1
# Find the value closest to diameter d in the array diam
fmin = lambda d: (np.abs(np.asarray(diam) - d)).argmin()
for i,d in enumerate(rdiam):
# Get the fraction of particles that are singly charged
single_frac[i] = aerosol.ndistr(d,pos_neg,gas.t)
#print("Diameter is " + str(d))
#print("Single charge fraction is " + str(single_frac[i]))
for j in reversed(range(2,n+1)):
ne = j*pos_neg
# Ratio of singly charge particles to particles with charge ne
c_rat = single_frac[i]/aerosol.ndistr(d,ne,gas.t)
# print("c_rat = " + str(c_rat))
#print("charge is " + str(ne))
#print("charge efficiency is " + str(aerosol.ndistr(d,ne,gas.t)))
z = aerosol.z(d,gas,1)
z_mult = abs(ne*aerosol.z(d,gas,pos_neg))
#print("Z_mult is " + str(z_mult))
d_mult = aerosol.z2d(z_mult, gas, 1)
#print("d_mult is " + str(d_mult))
# Do NOT try to move particles for which we don't have a diameter
if (d_mult >= diam[0]):
# Find the index of the multiple charges
k = fmin(d_mult)
#print("k is " + str(k))
#print("l-i is = " + str(l-i))
# Calculate the number to move
n2move = min(dn_raw[l-i]/c_rat, dn_work[l-i])
#print("n2move is " + str(n2move))
#print("dn[k] is = " + str(dn[k]))
dn[k] += n2move
dn_work[l-i] -= n2move
#print("dn[k] + n2move is = " + str(dn[k]))
#print("dn[l-i] is = " + str(dn[l-i]))
dn[l-i] -= n2move
#print("dn[l-i] - n2move is = " + str(dn[l-i]))
# Correct for single charging
single_frac = single_frac[::-1]
plt.plot(dn, 'g')
#print(dn[0])
#print(single_frac[0])
dn = dn/single_frac
plt.plot(dn,'b')
#print(single_frac)
return None
Explanation: This function will return both the index and the value in a list.
End of explanation
plt.plot(smooth_up[:,1])
plt.grid()
ax = plt.axes()
smooth_up[45,1]
print(smooth_up[200,1])
# Copy the array so that we don't have to run all the way back
output_sd = np.copy(smooth_up[:,1]) # Space for the size distribution
#plt.plot(output_sd)
chargeCorr(sup,output_sd,air)
#plt.plot(output_sd, 'r', smooth_up[:,1], 'b')
#plt.plot(output_sd)
#ax = plt.axes()
#ax.set_xlim(0,50)
plt.plot(output_sd, '+')
ax = plt.axes()
plt.grid()
Explanation: Demonstration of Correction
Continuing with the above examples from the file.
End of explanation |
9,543 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I need to square a 2D numpy array (elementwise) and I have tried the following code: | Problem:
import numpy as np
a = np.arange(4).reshape(2, 2)
power = 5
a = a ** power  # element-wise exponentiation; with power = 2 this squares the array
9,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
molPX Di-Ala example
<pre>
Guillermo Pérez-Hernández [email protected]
</pre>
In this notebook we will be using a trajectory of Di-Ala-peptide to easily identify conformations in the Ramachandran plot.
Step1: Start from files on disk
Step2: Featurize to Ramachandran $(\phi,\psi)$-pairs with PyEMMA
Step3: Visualize a FES and the molecular structures behind it
Execute the following cell and click either on the FES or on the slidebar
Step4: Visualize trajectories, FES and molecular structures
Step5: Paths samples along the different projections (=axis)
Step6: Let's do TICA and try to look at the correlations in a TICA analysis | Python Code:
from os.path import exists
import molpx
from matplotlib import pylab as plt
%matplotlib ipympl
import pyemma
import numpy as np
Explanation: molPX Di-Ala example
<pre>
Guillermo Pérez-Hernández [email protected]
</pre>
In this notebook we will be using a trajectory of Di-Ala-peptide to easily identify conformations in the Ramachandran plot.
End of explanation
top = molpx._molpxdir(join='notebooks/data/ala2.pdb')
# What data do we have?
if exists('/group/ag_cmb/scratch/gph82/Di-Ala-nbdata/ala2.dcd'):
MD_trajfiles = ['/group/ag_cmb/scratch/gph82/Di-Ala-nbdata/ala2.dcd'] #long trajectory
elif exists('/home/guille/ala2.dcd'):
MD_trajfiles = ['/home/guille/ala2.dcd'] # extra for Stralsund
else:
MD_trajfiles = [molpx._molpxdir(join='notebooks/data/ala2.mini.xtc')] #short trajectory
Explanation: Start from files on disk
End of explanation
feat = pyemma.coordinates.featurizer(top)
feat.add_backbone_torsions()
src = pyemma.coordinates.source(MD_trajfiles, features=feat)
Y = src.get_output()
Explanation: Featurize to Ramachandran $(\phi,\psi)$-pairs with PyEMMA
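A quick way to confirm what was just featurized (a sketch; these PyEMMA featurizer methods are assumed to be available in the installed version):
```
print(feat.dimension())    # expected to be 2: one (phi, psi) pair
print(feat.describe())     # human-readable labels of the torsion features
print(Y[0].shape)          # (n_frames, 2)
```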
End of explanation
mpx_widget_box = molpx.visualize.FES(MD_trajfiles,
top,
Y,
#proj_idxs=[1],
nbins=50,
proj_labels=['$\phi$',
'$\psi$'],
atom_selection="symbol != H",
#n_overlays=5,
#sticky=True,
#color_list='random'
)
mpx_widget_box
Explanation: Visualize a FES and the molecular structures behind it
Execute the following cell and click either on the FES or on the slidebar
End of explanation
from molpx import visualize, _linkutils
from imp import reload
reload(visualize)
reload(_linkutils)
mpl_wdg_box = molpx.visualize.traj(MD_trajfiles,
top,
Y,
plot_FES = True,
#dt = dt*1e-6, tunits='ms',
max_frames=10000,
proj_idxs=[0, 1],
panel_height=2,
proj_labels=['$\phi$', '$\psi$']
)
mpl_wdg_box
Explanation: Visualize trajectories, FES and molecular structures
End of explanation
paths_dict, idata = molpx.generate.projection_paths(MD_trajfiles,
top,
Y,
n_points=50,
proj_idxs=[0,1],
n_projs=3,
proj_dim = 3,
verbose=False,
)
# Choose the coordinate and the type of path
coord = 1
path_type = 'min_rmsd'
#path_type = 'min_disp'
igeom = paths_dict[coord][path_type]["geom"]
ipath = paths_dict[coord][path_type]["proj"]
# Choose the proj_idxs for the path and the FES
# to be shown
proj_idxs = [0,1]
plt.ioff() # Turn of interactive plotting
plt.figure(figsize=(4,4))
h, (x,y) = np.histogramdd(np.vstack(Y)[:,proj_idxs], bins=50)
plt.contourf(x[:-1], y[:-1], -np.log(h.T), alpha=.50)
plt.ion()
linked_NGL_wdg, linked_ax_widget = molpx.visualize.sample(ipath[:,proj_idxs],
igeom,
plt.gca(),
clear_lines=True,
n_smooth = 2,
plot_path=True,
#radius=True,
)
linked_NGL_wdg._set_size('4in', '4in')
from ipywidgets import HBox
HBox([linked_NGL_wdg, linked_ax_widget.canvas])
Explanation: Path samples along the different projections (=axes)
End of explanation
feat = pyemma.coordinates.featurizer(top)
#feat.add_backbone_torsions(cossin=True)
feat.add_distances(feat.topology.select('symbol != H'))
src = pyemma.coordinates.source(MD_trajfiles, features=feat)
tica = pyemma.coordinates.tica(src, lag=np.int(src.trajectory_lengths()/3000))
Y_tica = tica.get_output()
mpx_wdg_box = molpx.visualize.FES(MD_trajfiles,
top,
Y_tica,
n_overlays=5,
atom_selection='backbone',
#sticky=True,
#color_list='rand'
)
mpx_wdg_box
mpx_wdg_box = molpx.visualize.traj(MD_trajfiles,
top,
Y_tica,
plot_FES = True,
#dt = dt*1e-6, tunits='ms',
max_frames=10000,
#proj_idxs=[0,1],
panel_height=2,
projection=tica
)
mpx_wdg_box
paths_dict, idata = molpx.generate.projection_paths(MD_trajfiles,
top,
Y_tica,
n_points=50,
proj_idxs=[0,1],
n_projs=2,
proj_dim = 2,
verbose=False,
)
# Choose the coordinate and the type of path
coord = 0
path_type = 'min_rmsd'
#path_type = 'min_disp'
igeom = paths_dict[coord][path_type]["geom"]
ipath = paths_dict[coord][path_type]["proj"]
# Choose the proj_idxs for the path and the FES
# to be shown
proj_idxs = [0,1]
plt.figure(figsize=(4,4))
h, (x,y) = np.histogramdd(np.vstack(Y_tica)[:,proj_idxs], bins=50)
plt.contourf(x[:-1], y[:-1], -np.log(h.T), alpha=.50)
linked_wdg, axes_widget = molpx.visualize.sample(ipath[:,proj_idxs],
igeom,
plt.gca(),
clear_lines=True,
n_smooth = 1,
plot_path=True,
)
# You can even choose to add the correlations a posteriori
molpx.visualize.correlations(tica, widget=linked_wdg, proj_idxs=0)
linked_wdg.center_view()
linked_wdg
Explanation: Let's do TICA and try to look at the correlations in a TICA analysis
End of explanation |
9,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy 소개
NumPy(보통 "넘파이"라고 발음한다)는 2005년에 Travis Oliphant가 발표한 수치해석용 Python 패키지이다. 다차원의 행렬 자료구조인 ndarray 를 지원하여 벡터와 행렬을 사용하는 선형대수 계산에 주로 사용된다. 내부적으로는 BLAS 라이브러리와 LAPACK 라이브러리에 기반하고 있어서 C로 구현된 CPython에서만 사용할 수 있으며 Jython, IronPython, PyPy 등의 Python 구현에서는 사용할 수 없다. NumPy의 행렬 연산은 C로 구현된 내부 반복문을 사용하기 때문에 Python 반복문에 비해 속도가 빠르다. 행렬 인덱싱(array indexing)을 사용한 질의(Query) 기능을 이용하여 짧고 간단한 코드로 복잡한 수식을 계산할 수 있다.
NumPy
수치해석용 Python 라이브러리
CPython에서만 사용 가능
BLAS/LAPACK 기반
ndarray 다차원 행렬 자료 구조 제공
내부 반복문 사용으로 빠른 행렬 연산 가능
행렬 인덱싱(array indexing) 기능
ndarray 클래스
NumPy의 핵심은 ndarray라고 하는 클래스 이다. ndarray 클래스는 다차원 행렬 자료 구조를 지원한다. 실제로 ndarray를 사용하여 1차원 행렬(벡터)을 만들어 보자
Step1: 만들어진 ndarray 객체의 표현식(representation)을 보면 바깥쪽에 array()란 것이 붙어 있을 뿐 리스트와 동일한 구조처럼 보인다. 실제로 0, 1, 2, 3 이라는 원소가 있는 리스트는 다음과 같이 만든다.
Step2: 그러나 ndarray 클래스 객체 a와 리스트 클래스 객체 b는 많은 차이가 있다. 우선 리스트 클래스 객체는 내부적으로 linked list와 같은 형태를 가지므로 각각의 원소가 다른 자료형이 될 수 있다. 그러나 ndarray 클래스 객체는 C언어의 행렬처럼 연속적인 메모리 배치를 가지기 때문에 모든 원소가 같은 자료형이어야 한다. 이러한 제약을 가지는 대신 내부의 원소에 대한 접근과 반복문 실행이 빨라진다.
ndarray 클래스의 또 다른 특성은 행렬의 각 원소에 대한 연산을 한 번에 처리하는 벡터화 연산(vectorized operation)을 지원한다는 점이다. 예를 들어 ndarray 클래스 객체의 원소의 크기를 모두 제곱하기 위해서는 객체 자체를 제곱하는 것만으로 원하는 결과를 얻을 수 있다.
Step3: 리스트 객체의 경우에는 다음과 같이 반복문을 사용해야 한다.
Step4: 각각의 코드 실행시에 IPython의 %time 매직 명령을 이용하여 실행 시간을 측정한 결과 ndarray의 유니버설 연산 실행 속도가 리스트 반복문 보다 빠른 것을 볼 수 있다. ndarray의 메모리 할당을 한 번에 하는 것도 빨라진 이유의 하나이고 유니버설 연산을 사용하게 되면 NumPy 내부적으로 구현된 반복문을 사용하기 때문에 반복문 실행 자체도 빨라진다.
따라서 Python의 성능 개선을 위해 반드시 지켜야하는 코딩 관례 중의 하나가 NumPy의 ndarray의 벡터화 연산으로 대체할 수 있는 경우에는 Python 자체의 반복문을 사용하지 않는다는 점이다.
Python 리스트
여러가지 타입의 원소
linked List 구현
메모리 용량이 크고 속도가 느림
벡터화 연산 불가
NumPy ndarray
동일 타입의 원소
contiguous memory layout
메모리 최적화, 계산 속도 향상
벡터화 연산 가능
참고로 일반적인 리스트 객체에 정수를 곱하면 객체의 크기가 정수배 만큼으로 증가한다.
Step5: 다차원 행렬의 생성
ndarray 는 N-dimensional Array의 약자이다. 이름 그대로 ndarray 클래스는 단순 리스트와 유사한 1차원 행렬 이외에도 2차원 행렬, 3차원 행렬 등의 다차원 행렬 자료 구조를 지원한다.
예를 들어 다음과 같이 리스트의 리스트를 이용하여 2차원 행렬을 생성하거나 리스트의 리스트의 리스트를 이용하여 3차원 행렬을 생성할 수 있다.
Step6: 행렬의 차원 및 크기는 ndim 속성과 shape 속성으로 알 수 있다.
Step7: 다차원 행렬의 인덱싱
ndarray 클래스로 구현한 다차원 행렬의 원소 하나 하나는 다음과 같이 콤마(comma ,)를 사용하여 접근할 수 있다. 콤마로 구분된 차원을 축(axis)이라고도 한다. 플롯의 x축과 y축을 떠올리면 될 것이다.
Step8: 다차원 행렬의 슬라이싱
ndarray 클래스로 구현한 다차원 행렬의 원소 중 복수 개를 접근하려면 일반적인 파이썬 슬라이싱(slicing)과 comma(,)를 함께 사용하면 된다.
Step9: 행렬 인덱싱
NumPy ndarray 클래스의 또다른 강력한 기능은 행렬 인덱싱(fancy indexing)이라고도 부르는 행렬 인덱싱(array indexing) 방법이다. 인덱싱이라는 이름이 붙었지만 사실은 데이터베이스의 질의(Query) 기능을 수행한다.
행렬 인덱싱에서는 대괄호(Bracket, [])안의 인덱스 정보로 숫자나 슬라이스가 아닌 ndarray 행렬을 받을 수 있다. 여기에서는 이 행렬을 편의상 인덱스 행렬이라고 부르겠다. 행렬 인덱싱의 방식에는 불리안(Boolean) 행렬 방식과 정수 행렬 방식 두가지가 있다.
먼저 불리안 행렬 인덱싱 방식은 인덱스 행렬의 원소가 True, False 두 값으로만 구성되며 인덱스 행렬의 크기가 원래 ndarray 객체의 크기와 같아야 한다.
예를 들어 다음과 같은 1차원 ndarray에서 홀수인 원소만 골라내려면 홀수인 원소에 대응하는 인덱스 값이 True이고 짝수인 원소에 대응하는 인덱스 값이 False인 인덱스 행렬을 사용한다.
Step10: 이는 다음과 같이 간단하게 쓸 수도 있다.
Step11: 2차원 이상의 인덱스인 경우에는 다음과 같이
Step12: 정수 행렬 인덱싱에서는 인덱스 행렬의 원소 각각이 원래 ndarray 객체 원소 하나를 가리키는 인덱스 정수이여야 한다.
예를 들어 1차원 행렬에서 홀수번째 원소만 골라내려만 다음과 같다
Step13: 정수 행렬 인덱스의 크기는 원래의 행렬 크기와 달라도 상관없다. 같은 원소를 반복해서 가리키는 경우에는 원래의 행렬보다 더 커지기도 한다. | Python Code:
import numpy as np
a = np.array([0,1,2,3,4,5,6,7,8,9])
print(type(a))
a
Explanation: NumPy 소개
NumPy(보통 "넘파이"라고 발음한다)는 2005년에 Travis Oliphant가 발표한 수치해석용 Python 패키지이다. 다차원의 행렬 자료구조인 ndarray 를 지원하여 벡터와 행렬을 사용하는 선형대수 계산에 주로 사용된다. 내부적으로는 BLAS 라이브러리와 LAPACK 라이브러리에 기반하고 있어서 C로 구현된 CPython에서만 사용할 수 있으며 Jython, IronPython, PyPy 등의 Python 구현에서는 사용할 수 없다. NumPy의 행렬 연산은 C로 구현된 내부 반복문을 사용하기 때문에 Python 반복문에 비해 속도가 빠르다. 행렬 인덱싱(array indexing)을 사용한 질의(Query) 기능을 이용하여 짧고 간단한 코드로 복잡한 수식을 계산할 수 있다.
NumPy
수치해석용 Python 라이브러리
CPython에서만 사용 가능
BLAS/LAPACK 기반
ndarray 다차원 행렬 자료 구조 제공
내부 반복문 사용으로 빠른 행렬 연산 가능
행렬 인덱싱(array indexing) 기능
ndarray 클래스
NumPy의 핵심은 ndarray라고 하는 클래스 이다. ndarray 클래스는 다차원 행렬 자료 구조를 지원한다. 실제로 ndarray를 사용하여 1차원 행렬(벡터)을 만들어 보자
End of explanation
L = [0,1,2,3,4,5,6,7,8,9]
print(type(L))
L
Explanation: 만들어진 ndarray 객체의 표현식(representation)을 보면 바깥쪽에 array()란 것이 붙어 있을 뿐 리스트와 동일한 구조처럼 보인다. 실제로 0, 1, 2, 3 이라는 원소가 있는 리스트는 다음과 같이 만든다.
End of explanation
a = np.arange(1000000) #백만
%time a2 = a**2
Explanation: 그러나 ndarray 클래스 객체 a와 리스트 클래스 객체 b는 많은 차이가 있다. 우선 리스트 클래스 객체는 내부적으로 linked list와 같은 형태를 가지므로 각각의 원소가 다른 자료형이 될 수 있다. 그러나 ndarray 클래스 객체는 C언어의 행렬처럼 연속적인 메모리 배치를 가지기 때문에 모든 원소가 같은 자료형이어야 한다. 이러한 제약을 가지는 대신 내부의 원소에 대한 접근과 반복문 실행이 빨라진다.
ndarray 클래스의 또 다른 특성은 행렬의 각 원소에 대한 연산을 한 번에 처리하는 벡터화 연산(vectorized operation)을 지원한다는 점이다. 예를 들어 ndarray 클래스 객체의 원소의 크기를 모두 제곱하기 위해서는 객체 자체를 제곱하는 것만으로 원하는 결과를 얻을 수 있다.
End of explanation
L = range(100000) #십만
%time L2 = [i**2 for i in L]
Explanation: 리스트 객체의 경우에는 다음과 같이 반복문을 사용해야 한다.
End of explanation
L = range(10)
print(L)
print(L * 2)
Explanation: 각각의 코드 실행시에 IPython의 %time 매직 명령을 이용하여 실행 시간을 측정한 결과 ndarray의 유니버설 연산 실행 속도가 리스트 반복문 보다 빠른 것을 볼 수 있다. ndarray의 메모리 할당을 한 번에 하는 것도 빨라진 이유의 하나이고 유니버설 연산을 사용하게 되면 NumPy 내부적으로 구현된 반복문을 사용하기 때문에 반복문 실행 자체도 빨라진다.
따라서 Python의 성능 개선을 위해 반드시 지켜야하는 코딩 관례 중의 하나가 NumPy의 ndarray의 벡터화 연산으로 대체할 수 있는 경우에는 Python 자체의 반복문을 사용하지 않는다는 점이다.
Python 리스트
여러가지 타입의 원소
linked List 구현
메모리 용량이 크고 속도가 느림
벡터화 연산 불가
NumPy ndarray
동일 타입의 원소
contiguous memory layout
메모리 최적화, 계산 속도 향상
벡터화 연산 가능
참고로 일반적인 리스트 객체에 정수를 곱하면 객체의 크기가 정수배 만큼으로 증가한다.
End of explanation
a = np.array([0,1,2])
a
b = np.array([[0, 1, 2], [3, 4, 5]]) # 2 x 3 array
b
c = np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) # 2 x 2 x 2 array
c
Explanation: 다차원 행렬의 생성
ndarray 는 N-dimensional Array의 약자이다. 이름 그대로 ndarray 클래스는 단순 리스트와 유사한 1차원 행렬 이외에도 2차원 행렬, 3차원 행렬 등의 다차원 행렬 자료 구조를 지원한다.
예를 들어 다음과 같이 리스트의 리스트를 이용하여 2차원 행렬을 생성하거나 리스트의 리스트의 리스트를 이용하여 3차원 행렬을 생성할 수 있다.
End of explanation
print(a.ndim)
print(a.shape)
print(b.ndim)
print(b.shape)
print(c.ndim)
print(c.shape)
Explanation: 행렬의 차원 및 크기는 ndim 속성과 shape 속성으로 알 수 있다.
End of explanation
a = np.array([[0, 1, 2], [3, 4, 5]])
a
a[0,0] # 첫번째 행의 첫번째 열
a[0,1] # 첫번째 행의 두번째 열
a[-1, -1] # 마지막 행의 마지막 열
Explanation: 다차원 행렬의 인덱싱
ndarray 클래스로 구현한 다차원 행렬의 원소 하나 하나는 다음과 같이 콤마(comma ,)를 사용하여 접근할 수 있다. 콤마로 구분된 차원을 축(axis)이라고도 한다. 플롯의 x축과 y축을 떠올리면 될 것이다.
End of explanation
a = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])
a
a[0, :] # 첫번째 행 전체
a[:, 1] # 두번째 열 전체
a[1, 1:] # 두번째 행의 두번째 열부터 끝열까지
Explanation: 다차원 행렬의 슬라이싱
ndarray 클래스로 구현한 다차원 행렬의 원소 중 복수 개를 접근하려면 일반적인 파이썬 슬라이싱(slicing)과 comma(,)를 함께 사용하면 된다.
End of explanation
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
idx = np.array([True, False, True, False, True, False, True, False, True, False])
a[idx]
Explanation: 행렬 인덱싱
NumPy ndarray 클래스의 또다른 강력한 기능은 행렬 인덱싱(fancy indexing)이라고도 부르는 행렬 인덱싱(array indexing) 방법이다. 인덱싱이라는 이름이 붙었지만 사실은 데이터베이스의 질의(Query) 기능을 수행한다.
행렬 인덱싱에서는 대괄호(Bracket, [])안의 인덱스 정보로 숫자나 슬라이스가 아닌 ndarray 행렬을 받을 수 있다. 여기에서는 이 행렬을 편의상 인덱스 행렬이라고 부르겠다. 행렬 인덱싱의 방식에는 불리안(Boolean) 행렬 방식과 정수 행렬 방식 두가지가 있다.
먼저 불리안 행렬 인덱싱 방식은 인덱스 행렬의 원소가 True, False 두 값으로만 구성되며 인덱스 행렬의 크기가 원래 ndarray 객체의 크기와 같아야 한다.
예를 들어 다음과 같은 1차원 ndarray에서 홀수인 원소만 골라내려면 홀수인 원소에 대응하는 인덱스 값이 True이고 짝수인 원소에 대응하는 인덱스 값이 False인 인덱스 행렬을 사용한다.
End of explanation
a[a % 2 == 0]
Explanation: 이는 다음과 같이 간단하게 쓸 수도 있다.
End of explanation
a = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
a[a % 2 == 0]
Explanation: 2차원 이상의 인덱스인 경우에는 다음과 같이
End of explanation
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) * 10
idx = np.array([0, 2, 4, 6, 8])
a[idx]
Explanation: 정수 행렬 인덱싱에서는 인덱스 행렬의 원소 각각이 원래 ndarray 객체 원소 하나를 가리키는 인덱스 정수이여야 한다.
예를 들어 1차원 행렬에서 홀수번째 원소만 골라내려만 다음과 같다
End of explanation
a = np.array([0, 1, 2, 3]) * 10
idx = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 3, 3])
a[idx]
Explanation: 정수 행렬 인덱스의 크기는 원래의 행렬 크기와 달라도 상관없다. 같은 원소를 반복해서 가리키는 경우에는 원래의 행렬보다 더 커지기도 한다.
End of explanation |
9,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST digit recognition using SVC with poly kernel in scikit-learn
polynomial
Step1: Where's the data?
Step2: How much of the data will we use?
Step3: Read the training images and labels
Step4: Read the test images and labels
Step5: Use the smaller, fewer images for testing
Print a sample
Step6: SVC Default Parameter Settings
Step7: RANDOMIZED grid search
Step8: Analyze the results of the parameter pairs randomly selected
Step10: Heatmap of the accuracy of the C and gamma pairs chosen in the grid search
see http
Step12: Predict the test set and analyze the result
Step14: Learning Curves
see http
Step15: Validation Curves | Python Code:
from __future__ import division
import os, time, math, csv
import cPickle as pickle
import matplotlib.pyplot as plt
import numpy as np
from print_imgs import print_imgs # my own function to print a grid of square images
from sklearn.preprocessing import StandardScaler
from sklearn.utils import shuffle
from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import RandomizedSearchCV
from sklearn.metrics import classification_report, confusion_matrix
np.random.seed(seed=1009)
%matplotlib inline
#%qtconsole
Explanation: MNIST digit recognition using SVC with poly kernel in scikit-learn
polynomial: $(\gamma \langle x, x'\rangle + r)^d$
keywords ... $\gamma$: gamma, $d$: degree, $r$: coef0
> Using RANDOMIZED grid search, find optimal parameters
See Comparing randomized search and grid search for hyperparameter estimation for a discussion of using a randomized grid search rather than an exhaustive one. The statement is made The result in parameter settings is quite similar, while the run time for randomized search is dramatically lower. The performance is slightly worse for the randomized search, though this is most likely a noise effect and would not carry over to a held-out test set.
My process was to iteratively narrow the bounds of the grid search. Narrowing the end points and increasing the density can improve precision but I'm not sure at what point greater precision no longer matters in a stochastic domain nor am I certain that the C/gamma tradeoff is strictly monotone linear.
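One way to automate that narrowing, once the randomized search below has produced random_search.best_params_ (a sketch, not part of the original workflow):
```
best = random_search.best_params_
finer_grid = dict(C      = np.logspace(np.log10(best['C']) - 0.5,     np.log10(best['C']) + 0.5,     20),
                  gamma  = np.logspace(np.log10(best['gamma']) - 0.5, np.log10(best['gamma']) + 0.5, 20),
                  coef0  = [best['coef0']],
                  degree = [best['degree']])
# feed finer_grid back into another RandomizedSearchCV (or an exhaustive GridSearchCV) pass
```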
End of explanation
file_path = '../data/'
DESKEWED = True
if DESKEWED:
train_img_filename = 'train-images_deskewed.csv'
test_img_filename = 't10k-images_deskewed.csv'
else:
train_img_filename = 'train-images.csv'
test_img_filename = 't10k-images.csv'
train_label_filename = 'train-labels.csv'
test_label_filename = 't10k-labels.csv'
Explanation: Where's the data?
End of explanation
portion = 0.10 # set to 1.0 for all of it, less than 1.0 for less
Explanation: How much of the data will we use?
End of explanation
# read trainX
with open(file_path + train_img_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
trainX = np.ascontiguousarray(data, dtype = np.float64)
# scale trainX
scaler = StandardScaler()
scaler.fit(trainX) # find mean/std for trainX
trainX = scaler.transform(trainX) # scale trainX with trainX mean/std
# read trainY
with open(file_path + train_label_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
trainY = np.ascontiguousarray(data, dtype = np.int8).ravel()
# shuffle trainX & trainY
trainX, trainY = shuffle(trainX, trainY, random_state=0)
# select a subset
if portion < 1.0:
    trainX = trainX[:int(portion*trainX.shape[0])]
    trainY = trainY[:int(portion*trainY.shape[0])]
print("trainX shape: {0}".format(trainX.shape))
print("trainY shape: {0}\n".format(trainY.shape))
print(trainX.flags)
Explanation: Read the training images and labels
End of explanation
# read testX
with open(file_path + test_img_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
testX = np.ascontiguousarray(data, dtype = np.float64)
# scale testX
testX = scaler.transform(testX) # scale testX with trainX mean/std
# read testY
with open(file_path + test_label_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
testY = np.ascontiguousarray(data, dtype = np.int8).ravel()
# shuffle testX, testY
testX, testY = shuffle(testX, testY, random_state=0)
# select a subset
if portion < 1.0:
    testX = testX[:int(portion*testX.shape[0])]
    testY = testY[:int(portion*testY.shape[0])]
print("testX shape: {0}".format(testX.shape))
print("testY shape: {0}".format(testY.shape))
Explanation: Read the test images and labels
End of explanation
print_imgs(images = trainX,
actual_labels = trainY,
predicted_labels = trainY,
starting_index = np.random.randint(0, high=trainY.shape[0]-36, size=1)[0],
size = 6)
Explanation: Use the smaller, fewer images for testing
Print a sample
End of explanation
# default parameters for SVC
# ==========================
default_svc_params = {}
default_svc_params['C'] = 1.0 # penalty
default_svc_params['class_weight'] = None # Set the parameter C of class i to class_weight[i]*C
# set to 'auto' for unbalanced classes
default_svc_params['gamma'] = 0.0 # Kernel coefficient for 'rbf', 'poly' and 'sigmoid'
default_svc_params['kernel'] = 'rbf' # 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable
default_svc_params['shrinking'] = True # Whether to use the shrinking heuristic.
default_svc_params['probability'] = False # Whether to enable probability estimates.
default_svc_params['tol'] = 0.001 # Tolerance for stopping criterion.
default_svc_params['cache_size'] = 200 # size of the kernel cache (in MB).
default_svc_params['max_iter'] = -1 # limit on iterations within solver, or -1 for no limit.
default_svc_params['random_state'] = 1009
default_svc_params['verbose'] = False
default_svc_params['degree'] = 3 # 'poly' only
default_svc_params['coef0'] = 0.0 # 'poly' and 'sigmoid' only
# set parameters for the classifier
# =================================
svc_params = dict(default_svc_params)
svc_params['cache_size'] = 2000
#svc_params['probability'] = True
svc_params['kernel'] = 'poly'
svc_params['C'] = 1.0
svc_params['gamma'] = 0.0
svc_params['degree'] = 3
svc_params['coef0'] = 1
# the classifier
# ==============
svc_clf = SVC(**svc_params)
Explanation: SVC Default Parameter Settings
End of explanation
t0 = time.time()
# search grid
# ===========
search_grid = dict(C = np.logspace( 0, 5, 50),
gamma = np.logspace(-5, -1, 50),
coef0 = [1], #[0, 1],
degree = [2, 3, 4, 5, 6, 7, 8, 9])
# for coef0, see http://stackoverflow.com/questions/21390570/scikit-learn-svc-coef0-parameter-range
# but also see http://www.eric-kim.net/eric-kim-net/posts/1/kernel_trick.html
# stratified K-Fold indices
# =========================
SKFolds = StratifiedKFold(y = trainY,
n_folds = 5,
indices = None,
shuffle = True,
random_state = 1009)
# default parameters for RandomizedSearchCV
# =========================================
default_random_params = {}
default_random_params['scoring'] = None
default_random_params['fit_params'] = None # dict of parameters to pass to the fit method
default_random_params['n_jobs'] = 1 # Number of jobs to run in parallel (-1 => all cores)
default_random_params['pre_dispatch'] = '2*n_jobs' # memory is copied this many times
# reduce if you're running into memory problems
default_random_params['iid'] = True # assume the folds are iid
default_random_params['refit'] = True # Refit the best estimator with the entire dataset
default_random_params['cv'] = None
default_random_params['verbose'] = 0
default_random_params['random_state'] = None
default_random_params['n_iter'] = 10
# set parameters for the randomized grid search
# =============================================
random_params = dict(default_random_params)
random_params['verbose'] = 1
random_params['random_state'] = 1009
random_params['cv'] = SKFolds
random_params['n_jobs'] = -1 # -1 => use all available cores
# one core per fold
# for each point in the grid
random_params['n_iter'] = 100 # choose this many random combinations of parameters
# from 'search_grid'
# perform the randomized parameter grid search
# ============================================
random_search = RandomizedSearchCV(estimator = svc_clf,
param_distributions = search_grid,
**random_params)
random_search.fit(trainX, trainY)
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
Explanation: RANDOMIZED grid search
End of explanation
from collections import Counter
from operator import itemgetter
# how many duds?
# ==============
mean_score_list = [score.mean_validation_score for score in random_search.grid_scores_]
print("\nProportion of random scores below 98%: {0:.2f}\n".format(sum(np.array(mean_score_list)<0.98)/len(mean_score_list)))
# find the most-common coef0 and degree among the top ten
# =======================================================
random_search.grid_scores_.sort(reverse=True, key=itemgetter(1)) # descending mean score
coef0_s = []
degree_s = []
for score in random_search.grid_scores_[:10]:
print(score)
coef0_s.append(score.parameters['coef0'])
degree_s.append(score.parameters['degree'])
most_common_degree = Counter(degree_s).most_common()[0][0]
most_common_coef0 = Counter(coef0_s).most_common()[0][0]
print ("\nmost-common degree: {0}; coef0: {1}".format(most_common_degree, most_common_coef0))
Explanation: Analyze the results of the parameter pairs randomly selected
End of explanation
from matplotlib.colors import Normalize
class MidpointNormalize(Normalize):
"""Utility function to move the midpoint of a colormap to be around the values of interest."""
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
# --------------------------------------------------------------------------------
# skip this many parameter values on the display axes
tick_step_size_C = math.ceil(len(search_grid['C']) / 15)
tick_step_size_gamma = math.ceil(len(search_grid['gamma']) / 15)
# create 'heatmap'
# ================
# a C x gamma matrix; initially all zeros (black)
heatmap = np.zeros((len(search_grid['C']), len(search_grid['gamma'])))
# for each score, find the index in 'heatmap' of the 'C' and 'gamma' values
# at that index intersection put the mean score
for score in random_search.grid_scores_:
# index of C and gamma in 'search_grid'
ceeinx = search_grid['C'].tolist().index(score[0]['C'])
gaminx = search_grid['gamma'].tolist().index(score[0]['gamma'])
heatmap[ceeinx, gaminx] = score[1]
# display the heatmap
# ===================
plt.figure(figsize=(10, 8))
plt.subplots_adjust(left=.2, right=0.95, bottom=0.15, top=0.95)
plt.imshow(heatmap, interpolation='nearest', cmap=plt.cm.hot,
norm=MidpointNormalize(vmin=0.2, midpoint=0.92))
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
# label the axes
plt.xticks(np.arange(0, len(search_grid['gamma']), tick_step_size_gamma),
search_grid['gamma'][::tick_step_size_gamma],
rotation=45)
plt.yticks(np.arange(0, len(search_grid['C']), tick_step_size_C),
search_grid['C'][::tick_step_size_C])
# cross hairs
ceeinx = search_grid['C'].tolist().index(random_search.best_params_['C'])
plt.axhline(y=ceeinx)
gaminx = search_grid['gamma'].tolist().index(random_search.best_params_['gamma'])
plt.axvline(x=gaminx)
plt.title('Parameter-pair accuracy')
plt.show()
print("\nThe best parameters are %s\nwith a score of %0.4f, misclass of %0.4f"
% (random_search.best_params_, random_search.best_score_, 1-random_search.best_score_))
Explanation: Heatmap of the accuracy of the C and gamma pairs chosen in the grid search
see http://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html
This script was extensively modified to work with the score results from RandomizedSearchCV
End of explanation
target_names = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
predicted_values = random_search.predict(testX)
y_true, y_pred = testY.ravel(), predicted_values
print(classification_report(y_true, y_pred, target_names=target_names))
def plot_confusion_matrix(cm,
target_names,
title='Proportional Confusion matrix',
cmap=plt.cm.Paired):
"""Given a confusion matrix (cm), make a nice plot.
See the scikit-learn documentation for the original done for the iris dataset."""
plt.figure(figsize=(8, 6))
plt.imshow(cm / cm.sum(axis=1)[:, np.newaxis], interpolation='nearest', cmap=cmap)  # normalize each row (true label) to proportions
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# --------------------------------------------------------------------------------------------
cm = confusion_matrix(y_true, y_pred)
print(cm)
model_accuracy = sum(cm.diagonal())/len(testY)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))
plot_confusion_matrix(cm, target_names)
Explanation: Predict the test set and analyze the result
End of explanation
t0 = time.time()
from sklearn.learning_curve import learning_curve
from sklearn.cross_validation import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""Generate a simple plot of the test and training learning curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : integer, cross-validation generator, optional
If an integer is passed, it is the number of folds (defaults to 3).
Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
"""
plt.figure(figsize=(8, 6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
# --------------------------------------------------------------------------------
C_gamma = "C=" + str(np.round(random_search.best_params_['C'],4)) + \
", gam=" + str(np.round(random_search.best_params_['gamma'],6)) + \
"\ncoef0=" + str(np.round(random_search.best_params_['coef0'],0)) + \
", deg=" + str(np.round(random_search.best_params_['degree'],0))
plot_learning_curve(estimator = random_search.best_estimator_,
title = "Learning Curves (SVM, poly, " + C_gamma + ")",
X = trainX,
y = trainY.ravel(),
ylim = (0.85, 1.01),
cv = ShuffleSplit(n = trainX.shape[0],
n_iter = 5,
test_size = 0.2,
random_state = 0),
n_jobs = 8)
plt.show()
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
Explanation: Learning Curves
see http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html
The red line shows how well we fit the training data. The larger the score, the lower the bias. We expect the red line to start very near to 1.0 since we ought to be able to fit just a few points very well. We expect the red line to decline slightly since more points to fit requires a more complex model.
The green line shows the accuracy of the predictions of the test set. We expect it to start much lower than the red line but to increase continuously as the amount of training data used to create the model grows. An appropriate algorithm, correctly parameterized should push the green line higher and higher as we train with more training data. The best case is for the red line to decline only very slightly from 1.0 and for the green line to rise to intersect the red line.
A red line that starts below 1.0 and/or declines steeply indicates bias, a model that does not even fit the data it already knows the answer for. In addition to reviewing whether the algorithm is appropriate and whether it is optimally parameterized you may consider ways to increase the number of useful predictor variables.
A red line that hugs the top but for which the green line does not rise to meet it indicates overfitting.
End of explanation
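# Optional numeric follow-up (a sketch, not part of the original analysis): the learning_curve
# utility imported above can turn the visual bias/overfitting diagnosis into numbers.
# The coarse cv=3 and the reduced size grid are assumptions to keep it cheap; it assumes
# random_search.best_estimator_, trainX and trainY from the cells above.
sizes, tr_sc, te_sc = learning_curve(random_search.best_estimator_,
                                     trainX, trainY.ravel(),
                                     cv=3, n_jobs=-1,
                                     train_sizes=np.linspace(0.25, 1.0, 4))
gap = tr_sc.mean(axis=1) - te_sc.mean(axis=1)   # a persistently large gap suggests overfitting
bias = 1.0 - tr_sc.mean(axis=1)                 # large values everywhere suggest underfitting (bias)
print("train-test gap per training size:", np.round(gap, 4))
print("1 - training score per training size:", np.round(bias, 4))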
t0 = time.time()
from sklearn.learning_curve import validation_curve
from sklearn.cross_validation import ShuffleSplit
for param_name, param_range in zip(["gamma","C","degree","coef0"],
[np.linspace(search_grid['gamma'][0],search_grid['gamma'][-1],10),
np.linspace(search_grid['C'][0],search_grid['C'][-1],10),
search_grid['degree'],
[0,1]]):
train_scores, test_scores = validation_curve(estimator = random_search.best_estimator_,
X = trainX,
y = trainY,
param_name = param_name,
param_range = param_range,
cv = ShuffleSplit(n = trainX.shape[0],
n_iter = 5,
test_size = 0.2,
random_state = 0),
scoring = 'accuracy',
n_jobs = -1,
pre_dispatch = '2*n_jobs',
verbose = 0)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.figure(figsize=(10,6))
plt.title("Validation Curve for " + param_name)
plt.xlabel(param_name)
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
plt.semilogx(param_range,
train_scores_mean,
label="training score", color="r")
plt.fill_between(param_range,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha=0.2, color="r")
plt.semilogx(param_range,
test_scores_mean,
label="test score", color="g")
plt.fill_between(param_range,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.2, color="g")
plt.legend(loc="best")
plt.show()
print("\nThe best parameters chosen were %s\nwith a score of %0.2f, misclass of %0.4f"
% (random_search.best_params_, random_search.best_score_, 1-random_search.best_score_))
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
Explanation: Validation Curves
End of explanation |
9,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Learning
Step1: Imports for plotting
Step2: Now import dataset from scikit learn as well as the linear_model module. Note
Step3: Next we'll download the data set
Step4: Let's see what the data set contains
Step5: Step 2
Step6: Interesting, now let's see a scatter plot of one feature, versus the target. In this case we'll use the housing price versus the number of rooms in the dwelling.
Step7: Great! Now we can make out a slight trend that price increases along with the number of rooms in that house, which intuitively makes sense! Now let's use scikit learn to see if we can fit the data linearly.
Let's try to do the following
Step8: Now let's add the target of the boston data set, the price. We'll create a new column in our DataFrame.
Step9: Now let's see the resulting DataFrame!
Step10: Now, you might be reminded of the seaborn lmplot function we used during the visualization lectures. You could use it here to do a linear fit automatically!
Step11: However, we won't be able to do this when we move to more complicated regression models, so we'll stay focused on using the scikit learn library!
Step 3
Step12: Now as before, we're labeling each green line as having a distance D, and each red point as having a coordinate of (X,Y). Then we can define our best fit line as the line having the property where
Step13: Now that we have our X and Y, let's go ahead and use numpy to create the single variable linear regression.
We know that a line has the equation
Step14: Great! Now we can get the best fit values!
Step15: Finally let's plot it all together! Note that we use the original format of the boston information. We only did our matrix transformations to utilize the numpy least square method.
Step16: Step 5
Step17: Since the root mean square error (RMSE) corresponds approximately to the standard deviation we can now say that the price of a house won't vary more than 2 times the RMSE 95% of the time. Note
Step18: Next, we create a LinearRegression object, afterwards, type lm. then press tab to see the list of methods availble on this object.
Step19: The functions we will be using are
Step20: Finally, we're ready to pass the X and Y using the linear regression object.
Step21: Let's go ahead check the intercept and number of coefficients.
Step22: Great! So we have basically made an equation for a line, but instead of just oneo coefficient m and an intercept b, we now have 13 coefficients. To get an idea of what this looks like check out the documentation for this equation
Step23: Just like we initially plotted out, it seems the highest correlation between a feature and a house price was the number of rooms.
Now let's move on to Predicting prices!
Step 7
Step24: Let's go ahead and see what the output of the train_test_split was
Step25: Great! Now that we have our training and testing sets we can continue on to predicint gprices based on the multiple variables.
Step 8
Step26: Now run a prediction on both the X training set and the testing set.
Step27: Now we will get the mean square error
Step28: It looks like our mean square error between our training and testing was pretty close. But how do we actually visualize this?
Step 9 | Python Code:
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
Explanation: Supervised Learning: Linear Regression
In this section we will be going over LINEAR REGRESSION. We'll be going over how to use the scikit-learn regression model, as well as how to train the regressor using the fit() method, and how to predict new labels using the predict() method. We'll be analyzing a data set consisting of house prices in Boston. We'll start off with a single variable linear regression using numpy and then move on to using scikit learn. We'll do an overview of the mathematics behind the method we're using, but mostly we'll dive deeper into pratical "hands-on" coding lessons.
In this section we will be working through linear regression with the following steps:
Step 1: Getting and setting up the data.
Step 2: Visualizing current data.
Step 3: The mathematics behind the Least Squares Method.
Step 4: Using Numpy for a Univariate Linear Regression.
Step 5: Getting the error.
Step 6: Using scikit learn to implement a multivariate regression.
Step 7: Using Training and Validation.
Step 8: Predicting Prices
Step 9 : Residual Plots
Step 1: Getting and setting up the data.
We'll start by looking a an example of a dataset from scikit-learn. First we'll import our usual data analysis imports, then sklearn's built-in boston dataset.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
Explanation: Imports for plotting
End of explanation
from sklearn.datasets import load_boston
Explanation: Now import dataset from scikit learn as well as the linear_model module. Note: You may have to run a download, scikit learn will read an error and prompt you to if you don't have the datasets.
End of explanation
# Load the housing dataset
boston = load_boston()
Explanation: Next we'll download the data set
End of explanation
print(boston.DESCR)
Explanation: Let's see what the data set contains
End of explanation
# Histogram of prices (this is the target of our dataset)
plt.hist(boston.target,bins=50)
#label
plt.xlabel('Price in $1000s')
plt.ylabel('Number of houses')
Explanation: Step 2: Visualizing current data
You should always try to do a quick visualization fo the data you have. Let's go ahead an make a histogram of the prices.
End of explanation
# Plot the column at the 5 index (Labeled RM)
plt.scatter(boston.data[:,5],boston.target)
#label
plt.ylabel('Price in $1000s')
plt.xlabel('Number of rooms')
Explanation: Interesting, now let's see a scatter plot of one feature, versus the target. In this case we'll use the housing price versus the number of rooms in the dwelling.
End of explanation
# reset data as pandas DataFrame
boston_df = DataFrame(boston.data)
# label columns
boston_df.columns = boston.feature_names
#show
boston_df.head()
Explanation: Great! Now we can make out a slight trend that price increases along with the number of rooms in that house, which intuitively makes sense! Now let's use scikit learn to see if we can fit the data linearly.
Let's try to do the following:
1.) Use pandas to transform the boston dataset into a DataFrame:
2.) Then use seaborn to perform an lmplot on that DataFrame to reproduce the scatter plot with a linear fit line.
End of explanation
# Set price column for target
boston_df['Price'] = boston.target
Explanation: Now let's add the target of the boston data set, the price. We'll create a new column in our DataFrame.
End of explanation
# Show result
boston_df.head()
Explanation: Now let's see the resulting DataFrame!
End of explanation
# Using seabron to create a linear fit
sns.lmplot('RM','Price',data = boston_df)
Explanation: Now, you might be reminded of the seaborn lmplot function we used during the visualization lectures. You could use it here to do a linear fit automatically!
End of explanation
# Quick display of image form wikipedia
from IPython.display import Image
url = 'http://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/Linear_least_squares_example2.svg/220px-Linear_least_squares_example2.svg.png'
Image(url)
Explanation: However, we won't be able to do this when we move to more complicated regression models, so we'll stay focused on using the scikit learn library!
Step 3: The mathematics behind the Least Squares Method.
In this particular lecture we'll use the least squares method as the way to estimate the coefficients. Here's a quick breakdown of how this method works mathematically:
Take a quick look at the plot we created above using seaborn. Now consider each point, and know that they each have a coordinate in the form (X,Y). Now draw an imaginary line between each point and our current "best-fit" line. We'll call the distanace between each point and our current best-fit line, D. To get a quick image of what we're currently trying to visualize, take a look at the picture below:
End of explanation
# Set up X as median room values
X = boston_df.RM
# Use v to make X two-dimensional
X = np.vstack(boston_df.RM)
# Set up Y as the target price of the houses.
Y = boston_df.Price
Explanation: Now as before, we're labeling each green line as having a distance D, and each red point as having a coordinate of (X,Y). Then we can define our best fit line as the line having the property where:
$$ D_{1}^2 + D_{2}^2 + D_{3}^2 + D_{4}^2 + ....+ D_{N}^2$$
So how do we find this line? The least-square line approximating the set of points:
$$ (X,Y)_{1},(X,Y)_{2},(X,Y)_{3},(X,Y)_{4},(X,Y)_{5}, $$
has the equation:
$$ Y = a_{0} +a_{1}X $$
this is basically just a rewritten form of the standard equation for a line:
$$Y=mx+b$$
We can solve for these constants a0 and a1 by simultaneously solving these equations:
$$ \Sigma Y = a_{0}N + a_{1}\Sigma X $$
$$ \Sigma XY = a_{0}\Sigma X + a_{1}\Sigma X^2 $$
These are called the normal equations for the least squares line. There are further steps that can be taken in rearranging these equations to solve for the coefficients, but we'll let scikit-learn do the rest of the heavy lifting here. If you want further information on the mathematics of the above formulas, check out this great video.
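Solving these two equations simultaneously gives the standard closed-form estimates (stated here only as a cross-check for the numpy fit below):
$$ a_{1} = \frac{N\Sigma XY - \Sigma X \, \Sigma Y}{N\Sigma X^{2} - (\Sigma X)^{2}}, \qquad a_{0} = \bar{Y} - a_{1}\bar{X} $$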
For now, we'll use numpy to do a simple single variable linear regression. Afterwards we'll unleash the power of scikit learn to do a full multivariate linear regression.
Step 4: Using Numpy for a Univariate Linear Regression
Numpy has a built in Least Square Method in its linear algebra library. We'll use this first for our Univariate regression and then move on to scikit learn for out Multi variate regression.
We will start by setting up the X and Y arrays for numpy to take in. An important note for the X array: Numpy expects a two-dimensional array, the first dimension is the different example values, and the second dimension is the attribute number. In this case we have our value as the mean number of rooms per house, and this is a single attribute so the second dimension of the array is just 1. So we'll need to create a (506,1) shape array. There are a few ways to do this, but an easy way to do this is by using numpy's built-in vertical stack tool, vstack.
End of explanation
# Create the X array in the form [X 1]
X = np.array( [ [value,1] for value in X ] )
Explanation: Now that we have our X and Y, let's go ahead and use numpy to create the single variable linear regression.
We know that a line has the equation:
$$y=mx+b$$
which we can rewrite using matrices:
$$y=Ap$$
where:
$$A = \begin{bmatrix}x & 1\end{bmatrix}$$
and
$$p= \begin{bmatrix}m \\ b\end{bmatrix}$$
This is the same as the first equation if you carry out the linear algebra.
So we'll start by creating the A matrix using numpy. We'll do this by creating a matrix in the form [X 1], so we'll call every value in our original X using a list comprehension and then set up an array in the form [X 1]
End of explanation
# Now get out m and b values for our best fit line
m, b = np.linalg.lstsq(X, Y)[0]
Explanation: Great! Now we can get the best fit values!
End of explanation
# First the original points, Price vs Avg Number of Rooms
plt.plot(boston_df.RM,boston_df.Price,'o')
# Next the best fit line
x= boston_df.RM
plt.plot(x, m*x + b,'r',label='Best Fit Line')
Explanation: Finally let's plot it all together! Note that we use the original format of the boston information. We only did our matrix transformations to utilize the numpy least square method.
End of explanation
# Get the resulting array
result = np.linalg.lstsq(X,Y)
# Get the total error
error_total = result[1]
# Get the root mean square error
rmse = np.sqrt(error_total/len(X) )
# Print
print("The root mean squared error was %.2f " %rmse)
Explanation: Step 5: Getting the error
Great! We've just completed a single variable regression using the least squares method with Python! Let's see if we can find the error in our fitted line. Checking out the documentation here, we see that the resulting array has the total squared error. For each element, it checks the the difference between the line and the true value (our original D value), squares it, and returns the sum of all these. This was the summed D^2 value we discussed earlier.
It's probably easier to understand the root mean squared error, which is similar to the standard deviation. In this case, to find the root mean square error we divide by the number of elements and then take the square root. There is also an issue of bias and an unbiased regression, but we'll delve into those topics later.
For now let's see how we can get the root mean squared error of the line we just fitted.
End of explanation
# Import for Linear Regression
import sklearn
from sklearn.linear_model import LinearRegression
Explanation: Since the root mean square error (RMSE) corresponds approximately to the standard deviation we can now say that the price of a house won't vary more than 2 times the RMSE 95% of the time. Note: Review the Normal Distribution Appendix lecture if this doesn't make sense to you or check out this link.
Thus we can reasonably expect a house price to be within $13,200 of our line fit.
Step 6: Using scikit learn to implement a multivariate regression
Now, we'll keep moving along with using scikit learn to do a multi variable regression. This will be a similar apporach to the above example, but sci kit learn will be able to take into account more than just a single data variable effecting the target!
We'll start by importing the linear regression library from the sklearn module.
The sklearn.linear_model.LinearRegression class is an estimator. Estimators predict a value based on the observed data. In scikit-learn, all estimators implement the fit() and predict() methods. The former method is used to learn the parameters of a model, and the latter method is used to predict the value of a response variable for an explanatory variable using the learned parameters. It is easy to experiment with different models using scikit-learn because all estimators implement the fit and predict methods.
End of explanation
# Create a LinearRegression Object
lreg = LinearRegression()
Explanation: Next, we create a LinearRegression object; afterwards, type lreg. and then press tab to see the list of methods available on this object.
End of explanation
# Data Columns
X_multi = boston_df.drop('Price',1)
# Targets
Y_target = boston_df.Price
Explanation: The functions we will be using are:
lreg.fit() which fits a linear model
lreg.predict() which is used to predict Y using the linear model with estimated coefficients
lreg.score() which returns the coefficient of determination (R^2). A measure of how well observed outcomes are replicated by the model, learn more about it here
We'll start the multi variable regression analysis by seperating our boston dataframe into the data columns and the target columns:
End of explanation
# Implement Linear Regression
lreg.fit(X_multi,Y_target)
Explanation: Finally, we're ready to pass the X and Y using the linear regression object.
End of explanation
print(' The estimated intercept coefficient is %.2f ' %lreg.intercept_)
print(' The number of coefficients used was %d ' % len(lreg.coef_))
Explanation: Let's go ahead check the intercept and number of coefficients.
End of explanation
# Set a DataFrame from the Features
coeff_df = DataFrame(boston_df.columns)
coeff_df.columns = ['Features']
# Set a new column lining up the coefficients from the linear regression
coeff_df["Coefficient Estimate"] = pd.Series(lreg.coef_)
# Show
coeff_df
Explanation: Great! So we have basically made an equation for a line, but instead of just one coefficient m and an intercept b, we now have 13 coefficients. To get an idea of what this looks like check out the documentation for this equation:
$$ y(w,x) = w_0 + w_1 x_1 + ... + w_p x_p $$
Where $$w = (w_1, ...w_p)$$ as the coefficients and $$ w_0 $$ as the intercept
What we'll do next is set up a DataFrame showing all the Features and their estimated coefficients obtained form the linear regression.
End of explanation
# Grab the output and set as X and Y test and train data sets!
X_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(X,boston_df.Price)
Explanation: Just like we initially plotted out, it seems the highest correlation between a feature and a house price was the number of rooms.
Now let's move on to Predicting prices!
Step 7: Using Training and Validation
In a dataset a training set is implemented to build up a model, while a validation set is used to validate the model built. Data points in the training set are excluded from the validation set. The correct way to pick out samples from your dataset to be part either the training or validation (also called test) set is randomly.
Fortunately, scikit learn has a built in function specifically for this called train_test_split.
The parameters passed are your X and Y, then optionally the test_size parameter, representing the proportion of the dataset to include in the test split, as well as an optional train_size parameter. You can learn more about these parameters here
End of explanation
# Residual plot of all the dataset using seaborn
sns.residplot('RM', 'Price', data = boston_df)
# Print shapes of the training and testing data sets
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
Explanation: Let's go ahead and see what the output of the train_test_split was:
End of explanation
# Create our regression object
lreg = LinearRegression()
# Once again do a linear regression, except only on the training sets this time
lreg.fit(X_train,Y_train)
Explanation: Great! Now that we have our training and testing sets we can continue on to predicting prices based on the multiple variables.
Step 8: Predicting Prices
Now that we have our training and testing sets, let's go ahead and try to use them to predict house prices. We'll use our training set for the prediction and then use our testing set for validation.
End of explanation
# Predictions on training and testing sets
pred_train = lreg.predict(X_train)
pred_test = lreg.predict(X_test)
Explanation: Now run a prediction on both the X training set and the testing set.
End of explanation
print("Fit a model X_train, and calculate MSE with Y_train: %.2f" % np.mean((Y_train - pred_train) ** 2))
print("Fit a model X_train, and calculate MSE with X_test and Y_test: %.2f" %np.mean((Y_test - pred_test) ** 2))
Explanation: Now we will get the mean square error
End of explanation
# Scatter plot the training data
train = plt.scatter(pred_train,(Y_train-pred_train),c='b',alpha=0.5)
# Scatter plot the testing data
test = plt.scatter(pred_test,(Y_test-pred_test),c='r',alpha=0.5)
# Plot a horizontal axis line at 0
plt.hlines(y=0,xmin=-10,xmax=50)
#Labels
plt.legend((train,test),('Training','Test'),loc='lower left')
plt.title('Residual Plots')
Explanation: It looks like our mean square error between our training and testing was pretty close. But how do we actually visualize this?
Step 9 : Residual Plots
In regression analysis, the difference between the observed value of the dependent variable (y) and the predicted value (ŷ) is called the residual (e). Each data point has one residual, so that:
$$Residual = Observed\:value - Predicted\:value $$
You can think of these residuals in the same way as the D value we discussed earlier, in this case however, there were multiple data points considered.
A residual plot is a graph that shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points in a residual plot are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a non-linear model is more appropriate.
Residual plots are a good way to visualize the errors in your data. If you have done a good job then your data should be randomly scattered around line zero. If there is some structure or pattern, that means your model is not capturing something. There could be an interaction between 2 variables that you're not considering, or maybe you are measuring time-dependent data. If this is the case go back to your model and check your data set closely.
So now let's go ahead and create the residual plot. For more info on the residual plots check out this great link.
End of explanation |
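# Optional follow-up sketch (not in the original notebook): a quick histogram of the
# test-set residuals. It assumes pred_test and Y_test from the cells above; if the linear
# model is reasonable, the residuals should look roughly bell-shaped and centred on zero.
residuals = Y_test - pred_test
plt.hist(residuals, bins=30)
plt.xlabel('Residual (Y_test - pred_test)')
plt.ylabel('Count')
plt.title('Test-set residuals')
plt.show()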
9,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MatrixTable Tutorial
If you've gotten this far, you're probably thinking
Step1: There are a few things to note
Step2: MatrixTable operations
We belabored the operations on tables because they all have natural analogs (sometimes several) on matrix tables. For example
Step3: Let's look at the first few row keys (variants) and column keys (sample IDs).
Step4: Let's investigate the genotypes and the call rate. Let's look at the first few genotypes
Step5: All homozygous reference, which is not surprising. Let's look at the distribution of genotype calls
Step6: Let's compute the overall call rate directly, and then plot the distribution of call rate per variant.
Step7: Here's a nice trick
Step8: Exercise
Step9: Now, let's do the same thing for GQ.
The GQ field is the phred-scaled "genotype quality". The formula to convert it to a linear-scale error probability (between 0 and 1) is 10 ** -(mt.GQ / 10), so higher GQ means a more confident call. GQ is truncated to lie between 0 and 99.
import hail as hl
from bokeh.io import output_notebook, show
output_notebook()
hl.utils.get_1kg('data/')
mt = hl.read_matrix_table('data/1kg.mt')
mt.describe()
Explanation: MatrixTable Tutorial
If you've gotten this far, you're probably thinking:
"Can't I do all of this in pandas or R?"
"What does this have to do with biology?"
The two crucial features that Hail adds are scalability and the domain-specific primitives needed to work easily with biological data. Fear not! You've learned most of the basic concepts of Hail and now are ready for the bit that makes it possible to represent and compute on genetic matrices: the MatrixTable.
In the last example of the Table Joins Tutorial, the ratings table had a compound key: movie_id and user_id. The ratings were secretly a movie-by-user matrix!
However, since this matrix is very sparse, it is reasonably represented in a so-called "coordinate form" Table, where each row of the table is an entry of the sparse matrix. For large and dense matrices (like sequencing data), the per-row overhead of coordinate representations is untenable. That's why we built MatrixTable, a 2-dimensional generalization of Table.
MatrixTable Anatomy
Recall that Table has two kinds of fields:
global fields
row fields
MatrixTable has four kinds of fields:
global fields
row fields
column fields
entry fields
Row fields are fields that are stored once per row. These can contain information about the rows, or summary data calculated per row.
Column fields are stored once per column. These can contain information about the columns, or summary data calculated per column.
Entry fields are the piece that makes this structure a matrix -- there is an entry for each (row, column) pair.
Importing and Reading
Like tables, matrix tables can be imported from a variety of formats: VCF, (B)GEN, PLINK, TSV, etc. Matrix tables can also be read from a "native" matrix table format. Let's read a sample of prepared 1KG data.
End of explanation
mt.s.describe()
mt.GT.describe()
Explanation: There are a few things to note:
There is a single column field s. This is the sample ID from the VCF. It is also the column key.
There is a compound row key: locus and alleles.
locus has type locus<GRCh37>
alleles has type array<str>
GT has type call. That's a genotype call!
Whereas table expressions could be indexed by nothing or indexed by rows, matrix table expressions have four options: nothing, indexed by row, indexed by column, or indexed by row and column (the entries). Let's see some examples.
End of explanation
mt.count() # (rows, cols)
Explanation: MatrixTable operations
We belabored the operations on tables because they all have natural analogs (sometimes several) on matrix tables. For example:
count => count_{rows, cols} (and count which returns both)
filter => filter_{rows, cols, entries}
annotate => annotate_{rows, cols, entries} (and globals for both)
select => select_{rows, cols, entries} (and globals for both)
transmute => transmute_{rows, cols, entries} (and globals for both)
group_by => group_{rows, cols}_by
explode => expode_{rows, cols}
aggregate => aggregate_{rows, cols, entries}
Some operations are unique to MatrixTable:
The row fields can be accessed as a Table with rows
The column fields can be accessed as a Table with cols.
The entire field space of a MatrixTable can be accessed as a coordinate-form Table with entries. Be careful with this! While it's fast to aggregate or query, trying to write this Table to disk could produce files thousands of times larger than the corresponding MatrixTable.
Let's explore mt using these tools. Let's get the size of the dataset.
End of explanation
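# A minimal sketch of a couple of those analogs (the filters and thresholds below are
# illustrative assumptions, not part of this tutorial's QC pipeline).
mt_biallelic = mt.filter_rows(hl.len(mt.alleles) == 2)   # keep only biallelic variants
mt_good_gq = mt.filter_entries(mt.GQ >= 20)              # drop low-quality genotype entries
print(mt_biallelic.count_rows(), mt_good_gq.entries().count())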
mt.rows().select().show()
mt.s.show()
Explanation: Let's look at the first few row keys (variants) and column keys (sample IDs).
End of explanation
mt.GT.show()
Explanation: Let's investigate the genotypes and the call rate. Let's look at the first few genotypes:
End of explanation
mt.aggregate_entries(hl.agg.counter(mt.GT.n_alt_alleles()))
Explanation: All homozygous reference, which is not surprising. Let's look at the distribution of genotype calls:
End of explanation
mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))
Explanation: Let's compute the overall call rate directly, and then plot the distribution of call rate per variant.
End of explanation
mt2 = mt.annotate_rows(call_rate = hl.agg.fraction(hl.is_defined(mt.GT)))
mt2.describe()
p = hl.plot.histogram(mt2.call_rate, range=(0,1.0), bins=100,
title='Variant Call Rate Histogram', legend='Call Rate')
show(p)
Explanation: Here's a nice trick: you can use an aggregator inside annotate_rows and it will aggregate over columns, that is, summarize the values in the row using the aggregator. Let's compute and plot call rate per variant.
End of explanation
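# The same trick works per sample: an aggregator inside annotate_cols aggregates over rows,
# giving one value per column. (A small illustrative sketch, not part of the original tutorial.)
mt3 = mt.annotate_cols(sample_call_rate = hl.agg.fraction(hl.is_defined(mt.GT)))
mt3.sample_call_rate.show(5)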
p = hl.plot.histogram(mt.DP, range=(0,40), bins=40, title='DP Histogram', legend='DP')
show(p)
Explanation: Exercise: GQ vs DP
In this exercise, you'll use Hail to investigate a strange property of sequencing datasets.
The DP field is the sequencing depth (the number of reads).
Let's first plot a histogram of DP:
End of explanation
p = hl.plot.histogram(mt.GQ, range=(0,100), bins=100, title='GQ Histogram', legend='GQ')
show(p)
Explanation: Now, let's do the same thing for GQ.
The GQ field is the phred-scaled "genotype quality". The formula to convert it to a linear-scale error probability (between 0 and 1) is 10 ** -(mt.GQ / 10), so higher GQ means a more confident call. GQ is truncated to lie between 0 and 99.
End of explanation |
9,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting a Grammar into <span style="font-variant
Step1: The file c-grammar.g contains a context-free grammar for the language C.
Step2: Our goal is to convert this grammar into an <span style="font-variant
Step3: The function grammar_2_string takes a list of grammar rules as its input and renders these rules as an <span style="font-variant
Step4: The function rule_2_string takes a grammar rule $r$ as its input and transforms this rule into an <span style="font-variant
Step5: The function body_2_string takes a list of grammar items as its inputs and turns them into an <span style="font-variant
Step6: The function item_2_string takes a grammar item as its inputs and turns the item into an <span style="font-variant
Step7: The command below cleans the directory. If you are running windows, you have to replace rm with del.
!cat Grammar.g4
!type Grammar.g4
Explanation: Converting a Grammar into <span style="font-variant:small-caps;">Html</span>
You should store the grammar in the file Grammar.g4. This grammar should describe the lexical structure of the grammar for the language
C that is contained in the file
<a href="https://github.com/karlstroetmann/Formal-Languages/blob/master/Exercises/Grammar2HTML-Antlr/c-grammar.g"><tt>c-grammar.g</tt></a>.
Your grammar <b style="color:red">must not</b> use the string rule as a variable name. The reason is that rule is a variable that is already used in the parser generated by
<span style="font-variant:small-caps;">Antlr</span>.
You grammar should generate an abstract syntax tree that conforms to the following type specification:
Grammar: List<Rule>
Rule: Pair<String, List<Body>>
Body: List<Item>
Item: Pair<'var', String> + Pair<'token', String> + Pair<'literal', String>
End of explanation
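# To make the expected shape concrete, here is a small hypothetical value of that type
# (the rule name and items are invented purely for illustration):
example_rule = ('expr',                                                   # Rule: Pair<String, List<Body>>
                [[('var', 'expr'), ('literal', "'+'"), ('var', 'term')],  # Body: List<Item>
                 [('var', 'term')]])
example_grammar = [example_rule]                                          # Grammar: List<Rule>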
!cat c-grammar.g
!type c-grammar.g
Explanation: The file c-grammar.g contains a context-free grammar for the language C.
End of explanation
!antlr4 -Dlanguage=Python3 Grammar.g4
from GrammarLexer import GrammarLexer
from GrammarParser import GrammarParser
import antlr4
Explanation: Our goal is to convert this grammar into an <span style="font-variant:small-caps;">Html</span> <a href="c-grammar.html">file</a>.
We start by generating both scanner and parser.
End of explanation
def grammar_2_string(grammar):
result = ''
result += '<html>\n'
result += '<head>\n'
result += '<title>Grammar</title>\n'
result += '</head>\n'
result += '<body>\n'
result += '<table>\n'
for rule in grammar:
result += rule_2_string(rule)
result += '</table>\n'
result += '</body>\n'
result += '</html>\n'
return result
Explanation: The function grammar_2_string takes a list of grammar rules as its input and renders these rules as an <span style="font-variant:small-caps;">Html</span> file.
End of explanation
def rule_2_string(rule):
head, body = rule
result = ''
result += '<tr>\n'
result += '<td style="text-align:right"><a name="' + head + '"><em>' + head + '<em></a></td>\n'
result += '<td><code>:</code></td>\n'
result += '<td>' + body_2_string(body[0]) + '</td>'
result += '</tr>\n'
for i in range(1, len(body)):
result += '<tr><td></td><td><code>|</code></td><td>'
result += body_2_string(body[i])
result += '</td></tr>\n'
result += '<tr><td></td><td><code>;</code></td><tr>\n\n'
return result
Explanation: The function rule_2_string takes a grammar rule $r$ as its input and transforms this rule into an <span style="font-variant:small-caps;">Html</span>
string. Here the grammar rule $r$ has the form
$$ r = (V, L) $$
where $V$ is the name of the variable defined by $r$ and $L$ is a list of <em style="color:blue">grammar rule bodies</em>. A single grammar rule
body is a list of <em style="color:blue">grammar items</em>. A grammar item is either a non-terminal, a token or a literal.
End of explanation
def body_2_string(body):
result = ''
if len(body) > 0:
for item in body:
result += item_2_string(item) + ' '
else:
result += '<code>/* empty */</code>'
return result
Explanation: The function body_2_string takes a list of grammar items as its input and turns them into an <span style="font-variant:small-caps;">Html</span> string.
End of explanation
def item_2_string(item):
kind, contend = item
if kind == 'var':
return '<a href="#' + contend + '"><em>' + contend + '</em></a>'
else:
return '<code>' + contend + '</code>'
def main():
input_stream = antlr4.FileStream('c-grammar.g')
lexer = GrammarLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser = GrammarParser(token_stream)
grammar = parser.start()
result = grammar_2_string(grammar.result)
file = open('c-grammar.html', 'w')
file.write(result)
main()
!open c-grammar.html
!explorer c-grammar.html
Explanation: The function item_2_string takes a grammar item as its inputs and turns the item into an <span style="font-variant:small-caps;">Html</span> string.
An item represents either a non-terminal or a terminal. If it represents a non-terminal it has the form
$$(\texttt{'var'}, \textrm{name}) $$
where $\textrm{name}$ is the name of the variable. Otherwise it has the form
$$(\textrm{kind}, \textrm{name}), $$
where $\textrm{kind}$ is either token or literal.
End of explanation
!rm *.py *.tokens *.interp
Explanation: The command below cleans the directory. If you are running windows, you have to replace rm with del.
End of explanation |
9,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a Cutout with the SARAH-2 dataset
This walkthrough describes the process of creating a cutout using the SARAH-2 dataset by EUMETSAT.
The SARAH-2 dataset contains extensive information on solar radiation variables, like surface incoming direct radiation (SID) or surface incoming shortwave radiation (SIS).
It serves as an addition to the ERA5 dataset and as such requires the cdsapi to be set up properly.
Recommendation
This is a reduced version for cutout creation. Creating cutouts with ERA-5 is simpler and explained in more detail.
We therefore recommend you have a look at this example first.
Note
Step1: Let's see which features, that is, which weather data variables, are available.
Step2: Preparing the Cutout
No matter which dataset you use, this is where all the work actually happens.
This can be fast or can take a lot of time and resources, depending, among other things, on
your computer resources (especially memory for SARAH-2).
Step3: Querying the cutout gives us basic information on which data is contained and can already be used.
Inspecting the Cutout | Python Code:
import atlite
import logging
logging.basicConfig(level=logging.INFO)
cutout = atlite.Cutout(path="western-europe-2011-01.nc",
module=["sarah", "era5"],
sarah_dir="/home/vres-climate/data/sarah_v2",
x=slice(-13.6913, 1.7712),
y=slice(49.9096, 60.8479),
time="2013-01",
chunks={'time': 100}
)
Explanation: Creating a Cutout with the SARAH-2 dataset
This walkthrough describes the process of creating a cutout using the SARAH-2 dataset by EUMETSAT.
The SARAH-2 dataset contains extensive information on solar radiation variables, like surface incoming direct radiation (SID) or surface incoming shortwave radiation (SIS).
It serves as an addition to the ERA5 dataset and as such requires the cdsapi to be set up properly.
Recommendation
This is a reduced version for cutout creation. Creating cutouts with ERA-5 is simpler and explained in more detail.
We therefore recommend you have a look at this example first.
Note:
For creating a cutout from this dataset, you need to download large files and your computer's memory needs to be able to handle these as well.
Downloading the data set
To download the dataset, head to the EUMETSAT website (the link points to the current 2.1 edition)
https://wui.cmsaf.eu/safira/action/viewDoiDetails?acronym=SARAH_V002_01
On the bottom, select the products you want to include in the cutout, i.e. for us:
| variable | time span | time resolution |
| --- | --- | --- |
| Surface incoming direct radiation (SID) | 2013 | Instantaneous |
| Surface incoming shortwave radiation (SIS) | 2013 | Instantaneous |
Add each product to your cart and register with the website.
Follow the instructions to activate your account, confirm your order and wait for the download to be ready.
You will be notified by email with the download instructions.
Download the ordered files of your order into a directory, e.g. sarah-2.
Extract the tar files (e.g. for linux systems tar -xvf * or with 7zip for windows) into the same folder
You are now ready to create cutouts using the SARAH-2 dataset.
Specifying the cutout
Import the package and set recommended logging settings:
End of explanation
cutout.available_features.to_frame()
Explanation: Let's see which features, that is, which weather data variables, are available.
End of explanation
cutout.prepare()
Explanation: Preparing the Cutout
No matter which dataset you use, this is where all the work actually happens.
This can be fast or can take a lot of time and resources, depending, among other things, on
your computer resources (especially memory for SARAH-2).
End of explanation
cutout # basic information
cutout.data.attrs # cutout meta data
cutout.prepared_features # included weather variables
cutout.data # access to underlying xarray data
Explanation: Querying the cutout gives us basic information on which data is contained and can already be used.
Inspecting the Cutout
End of explanation |
9,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Exponential model
Last time we proposed two candidate models
Step2: Look at both fits together
Which is better? | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import minimize
# assign data to arrays
T = np.array([1, 3, 6, 9, 12, 18])
Y = np.array([0.94, 0.77, 0.40, 0.26, 0.24, 0.16])
X = 100*Y
# plot raw data
plt.plot(T, Y, 'o')
plt.xlabel('Retention interval (sec.)')
plt.ylabel('Proportion recalled')
# negative log likelihood
def nllP(pars):
a, b = pars
tmp1 = X*np.log(a*T**b)
tmp2 = (100-X)*np.log(1-a*T**b)
return(-1*np.sum(tmp1+tmp2))
# minimize the NLL
a_init = np.random.uniform()
b_init = -np.random.uniform()
inits = np.array([a_init, b_init])
mleP = minimize(nllP,
inits,
method="nelder-mead")
#
def power(t,pars):
a, b = pars
return(a*t**b)
fitParsP = mleP.x
print(f"a={fitParsP[0]:.3f}, b={fitParsP[1]:.3f}")
x = np.linspace(0.5,18,100)
plt.plot(x, power(x,fitParsP))
plt.show()
Explanation: <a href="https://colab.research.google.com/github/tomfaulkenberry/courses/blob/master/summer2019/mathpsychREU/lecture3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lecture 3 - Fitting a "forgetting curve"
Recall from last time...
Murdock (1961) presented subjects with a set of memory items (i.e., words or letters) and asked them to recall the items after six different retention intervals: $t=1,3,6,9,12,18$ (in seconds). He recorded the proportion recalled at each retention interval (based on 100 independent trials for each $t$). These data were (respectively)
$$
y=0.94, 0.77, 0.40, 0.26, 0.24, 0.16
$$
Last time, we fit a power function to these data
Power function
End of explanation
def nllE(pars):
a, b = pars
tmp1 = X*np.log(a*b**T)
tmp2 = (100-X)*np.log(1-a*b**T)
return(-1*np.sum(tmp1+tmp2))
# check some examples
a = 0.6
b = 0.1
pars = np.array([a,b])
nllE(pars)
a_init = np.random.uniform()
b_init = np.random.uniform()
inits = np.array([a_init, b_init])
mleE= minimize(nllE,
inits,
method="nelder-mead")
print(mleE)
def expon(t,pars):
a, b = pars
return(a*b**t)
fitParsE = mleE.x
print(f"a={fitPars[0]:.3f}, b={fitPars[1]:.3f}")
x = np.linspace(0.5,18,100)
plt.plot(T,Y,'o')
plt.plot(x, expon(x,fitParsE))
plt.show()
Explanation: Exponential model
Last time we proposed two candidate models:
Power function model: $y=ax^b$
Exponential model: $y=ab^x$
Let's now fit an exponential model.
Step 1 - compute likelihood function
Let's assume each of these 100 trials is independent of the others, and consider each trial a success if item is correctly recalled.
Then the probability of correctly recalling $x$ items is:
$$
f(x\mid\theta) = \binom{100}{x}\theta^x(1-\theta)^{100-x}
$$
The critical parameter here is $\theta$ -- the probability of success on any one trial. How do we determine $\theta$?
Let's assume that probability of recall is governed by a exponential function. That is, assume
$$
\theta(t) = ab^t
$$
for constants $a,b$.
Then we can write
$$
f(x,t\mid a,b) = \binom{100}{x}(ab^t)^x(1-ab^t)^{100-x}
$$
which we cast as a likelihood
$$
L(a,b\mid x,t) = \binom{100}{x}(ab^t)^x(1-ab^t)^{100-x}
$$
Step 2 -- compute log likelihood
This gives us:
$$
\ln L = \ln \Biggl[ \binom{100}{x}\Biggr] + x\ln(ab^t) + (100-x)\ln(1-ab^t)
$$
Step 3 -- extend to multiple observations
Note that the formula above is for a single observation $(x,t)$. But we have 6 observations!
If we assume each is independent from the others, then we can multiply the likelihoods:
$$
L = \prod_{i=1}^6 L(a,b\mid x_i, t_i)
$$
Thus we have
$$
\ln L = \ln\Biggl(\prod_{i=1}^6 L(a,b\mid x_i,t_i)\Biggr )
$$
But since logs turn products into sums, we can write
$$ \ln L = \sum_{i=1}^6 \ln L(a,b\mid x_i,t_i) = \sum_{i=1}^6 \Biggl(\ln \binom{100}{x_i} + x_i\ln(ab^{t_i}) + (100-x_i)\ln(1-ab^{t_i})\Biggr)$$
As before, we'll ignore the binomial term in our code
End of explanation
plt.plot(T,Y,'o')
plt.plot(x, power(x,fitParsP), color='red', label='Power')
plt.plot(x, expon(x,fitParsE), color='blue', label='Exponential')
plt.legend()
plt.show()
Explanation: Look at both fits together
Which is better?
End of explanation |
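# One way to answer that quantitatively is an information criterion such as
# AIC = 2k + 2*NLL, where k is the number of free parameters (k = 2 for both models);
# the lower AIC is preferred. A small sketch assuming mleP and mleE from the cells above
# (OptimizeResult.fun holds the minimized negative log likelihood).
k = 2
aicP = 2*k + 2*mleP.fun   # power model
aicE = 2*k + 2*mleE.fun   # exponential model
print(f"AIC power: {aicP:.2f}, AIC exponential: {aicE:.2f}")
print("Preferred (lower AIC):", "power" if aicP < aicE else "exponential")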
9,552 | Given the following text description, write Python code to implement the functionality described.
Description:
Return 2^n modulo p (be aware of numerics).
This is how the function will work:
modp(3, 5)
3
This is how the function will work:
modp(1101, 101)
2
This is how the function will work:
modp(0, 101)
1
This is how the function will work:
modp(3, 11)
8
This is how the function will work:
modp(100, 101)
1
| Python Code:
def modp(n: int, p: int):
ret = 1
for i in range(n):
ret = (2 * ret) % p
return ret |
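# Side note (not required by the prompt): Python's built-in three-argument pow performs
# the same modular exponentiation efficiently, so it can serve as a quick sanity check.
for n, p in [(3, 5), (1101, 101), (0, 101), (3, 11), (100, 101)]:
    assert modp(n, p) == pow(2, n, p)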
9,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
First encountering a new dataset can sometimes feel overwhelming. You might be presented with hundreds or thousands of features without even a description to go by. Where do you even begin?
A great first step is to construct a ranking with a feature utility metric, a function measuring associations between a feature and the target. Then you can choose a smaller set of the most useful features to develop initially and have more confidence that your time will be well spent.
The metric we'll use is called "mutual information". Mutual information is a lot like correlation in that it measures a relationship between two quantities. The advantage of mutual information is that it can detect any kind of relationship, while correlation only detects linear relationships.
Mutual information is a great general-purpose metric and especially useful at the start of feature development when you might not know what model you'd like to use yet. It is
Step1: The scikit-learn algorithm for MI treats discrete features differently from continuous features. Consequently, you need to tell it which are which. As a rule of thumb, anything that must have a float dtype is not discrete. Categoricals (object or categorical dtype) can be treated as discrete by giving them a label encoding. (You can review label encodings in our Categorical Variables lesson.)
Step2: Scikit-learn has two mutual information metrics in its feature_selection module
Step3: And now a bar plot to make comparisons easier
Step4: Data visualization is a great follow-up to a utility ranking. Let's take a closer look at a couple of these.
As we might expect, the high-scoring curb_weight feature exhibits a strong relationship with price, the target.
Step5: The fuel_type feature has a fairly low MI score, but as we can see from the figure, it clearly separates two price populations with different trends within the horsepower feature. This indicates that fuel_type contributes an interaction effect and might not be unimportant after all. Before deciding a feature is unimportant from its MI score, it's good to investigate any possible interaction effects -- domain knowledge can offer a lot of guidance here. | Python Code:
#$HIDE_INPUT$
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
plt.style.use("seaborn-whitegrid")
df = pd.read_csv("../input/fe-course-data/autos.csv")
df.head()
Explanation: Introduction
First encountering a new dataset can sometimes feel overwhelming. You might be presented with hundreds or thousands of features without even a description to go by. Where do you even begin?
A great first step is to construct a ranking with a feature utility metric, a function measuring associations between a feature and the target. Then you can choose a smaller set of the most useful features to develop initially and have more confidence that your time will be well spent.
The metric we'll use is called "mutual information". Mutual information is a lot like correlation in that it measures a relationship between two quantities. The advantage of mutual information is that it can detect any kind of relationship, while correlation only detects linear relationships.
Mutual information is a great general-purpose metric and especially useful at the start of feature development when you might not know what model you'd like to use yet. It is:
- easy to use and interpret,
- computationally efficient,
- theoretically well-founded,
- resistant to overfitting, and,
- able to detect any kind of relationship
Mutual Information and What it Measures
Mutual information describes relationships in terms of uncertainty. The mutual information (MI) between two quantities is a measure of the extent to which knowledge of one quantity reduces uncertainty about the other. If you knew the value of a feature, how much more confident would you be about the target?
Here's an example from the Ames Housing data. The figure shows the relationship between the exterior quality of a house and the price it sold for. Each point represents a house.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/X12ARUK.png" width=400, alt="Four categories of ExterQual: Fair, Typical, Good, Excellent. A scatter plot of SalePrice within each category.">
<figcaption style="textalign: center; font-style: italic"><center>Knowing the exterior quality of a house reduces uncertainty about its sale price.
</center></figcaption>
</figure>
From the figure, we can see that knowing the value of ExterQual should make you more certain about the corresponding SalePrice -- each category of ExterQual tends to concentrate SalePrice to within a certain range. The mutual information that ExterQual has with SalePrice is the average reduction of uncertainty in SalePrice taken over the four values of ExterQual. Since Fair occurs less often than Typical, for instance, Fair gets less weight in the MI score.
(Technical note: What we're calling uncertainty is measured using a quantity from information theory known as "entropy". The entropy of a variable means roughly: "how many yes-or-no questions you would need to describe an occurrence of that variable, on average." The more questions you have to ask, the more uncertain you must be about the variable. Mutual information is how many questions you expect the feature to answer about the target.)
Interpreting Mutual Information Scores
The least possible mutual information between quantities is 0.0. When MI is zero, the quantities are independent: neither can tell you anything about the other. Conversely, in theory there's no upper bound to what MI can be. In practice though values above 2.0 or so are uncommon. (Mutual information is a logarithmic quantity, so it increases very slowly.)
The next figure will give you an idea of how MI values correspond to the kind and degree of association a feature has with the target.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/Dt75E1f.png" width=800, alt="">
<figcaption style="text-align: center; font-style: italic"><center><strong>Left:</strong> Mutual information increases as the dependence between feature and target becomes tighter. <strong>Right:</strong> Mutual information can capture any kind of association (not just linear, like correlation.)
</center></figcaption>
</figure>
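To make the right-hand panel concrete, here is a minimal sketch with synthetic data (not the Automobile dataset): a purely quadratic relationship has essentially zero correlation but a clearly positive mutual information score.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = x ** 2 + rng.normal(scale=0.05, size=1000)  # nonlinear and symmetric about 0

print(np.corrcoef(x, y)[0, 1])                          # close to 0.0
print(mutual_info_regression(x.reshape(-1, 1), y)[0])   # clearly above 0.0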
Here are some things to remember when applying mutual information:
- MI can help you to understand the relative potential of a feature as a predictor of the target, considered by itself.
- It's possible for a feature to be very informative when interacting with other features, but not so informative all alone. MI can't detect interactions between features. It is a univariate metric.
- The actual usefulness of a feature depends on the model you use it with. A feature is only useful to the extent that its relationship with the target is one your model can learn. Just because a feature has a high MI score doesn't mean your model will be able to do anything with that information. You may need to transform the feature first to expose the association.
Example - 1985 Automobiles
The Automobile dataset consists of 193 cars from the 1985 model year. The goal for this dataset is to predict a car's price (the target) from 23 of the car's features, such as make, body_style, and horsepower. In this example, we'll rank the features with mutual information and investigate the results by data visualization.
This hidden cell imports some libraries and loads the dataset.
End of explanation
X = df.copy()
y = X.pop("price")
# Label encoding for categoricals
for colname in X.select_dtypes("object"):
X[colname], _ = X[colname].factorize()
# All discrete features should now have integer dtypes (double-check this before using MI!)
discrete_features = X.dtypes == int
Explanation: The scikit-learn algorithm for MI treats discrete features differently from continuous features. Consequently, you need to tell it which are which. As a rule of thumb, anything that must have a float dtype is not discrete. Categoricals (object or categorical dtype) can be treated as discrete by giving them a label encoding. (You can review label encodings in our Categorical Variables lesson.)
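As a quick illustration of what that label encoding produces, here is a small sketch using one of the categorical columns (the exact values shown are indicative, not guaranteed):
codes, uniques = df["fuel_type"].factorize()
codes[:5], list(uniques)   # e.g. (array([0, 0, 0, 0, 0]), ['gas', 'diesel'])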
End of explanation
from sklearn.feature_selection import mutual_info_regression
def make_mi_scores(X, y, discrete_features):
mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features)
mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns)
mi_scores = mi_scores.sort_values(ascending=False)
return mi_scores
mi_scores = make_mi_scores(X, y, discrete_features)
mi_scores[::3] # show a few features with their MI scores
Explanation: Scikit-learn has two mutual information metrics in its feature_selection module: one for real-valued targets (mutual_info_regression) and one for categorical targets (mutual_info_classif). Our target, price, is real-valued. The next cell computes the MI scores for our features and wraps them up in a nice dataframe.
End of explanation
def plot_mi_scores(scores):
scores = scores.sort_values(ascending=True)
width = np.arange(len(scores))
ticks = list(scores.index)
plt.barh(width, scores)
plt.yticks(width, ticks)
plt.title("Mutual Information Scores")
plt.figure(dpi=100, figsize=(8, 5))
plot_mi_scores(mi_scores)
Explanation: And now a bar plot to make comparisons easier:
End of explanation
sns.relplot(x="curb_weight", y="price", data=df);
Explanation: Data visualization is a great follow-up to a utility ranking. Let's take a closer look at a couple of these.
As we might expect, the high-scoring curb_weight feature exhibits a strong relationship with price, the target.
End of explanation
sns.lmplot(x="horsepower", y="price", hue="fuel_type", data=df);
Explanation: The fuel_type feature has a fairly low MI score, but as we can see from the figure, it clearly separates two price populations with different trends within the horsepower feature. This indicates that fuel_type contributes an interaction effect and might not be unimportant after all. Before deciding a feature is unimportant from its MI score, it's good to investigate any possible interaction effects -- domain knowledge can offer a lot of guidance here.
End of explanation |
9,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: Suppose we want to get from A to B. Where can we go from the start state, A?
Step2: We see that from A we can get to any of the three cities ['Z', 'T', 'S']. Which should we choose? We don't know. That's the whole point of search
Step3: A couple of things to note
Step4: Now let's try a different kind of problem that can be solved with the same search function.
Word Ladders Problem
A word ladder problem is this
Step5: We can assign WORDS to be the set of all the words in this file
Step6: And define neighboring_words to return the set of all words that are a one-letter change away from a given word
Step7: For example
Step8: Now we can create word_neighbors as a dict of {word
Step9: Now the breadth_first function can be used to solve a word ladder problem
Step10: More General Search Algorithms
Now we'll embelish the breadth_first algorithm to make a family of search algorithms with more capabilities
Step11: Next is uniform_cost_search, in which each step can have a different cost, and we still consider first one os the states with minimum cost so far.
Step12: Finally, astar_search in which the cost includes an estimate of the distance to the goal as well as the distance travelled so far.
Step14: Search Tree Nodes
The solution to a search problem is now a linked list of Nodes, where each Node
includes a state and the path_cost of getting to the state. In addition, for every Node except for the first (root) Node, there is a previous Node (indicating the state that lead to this Node) and an action (indicating the action taken to get here).
Step16: Frontiers
A frontier is a collection of Nodes that acts like both a Queue and a Set. A frontier, f, supports these operations
Step19: Search Problems
Problem is the abstract class for all search problems. You can define your own class of problems as a subclass of Problem. You will need to override the actions and result method to describe how your problem works. You will also have to either override is_goal or pass a collection of goal states to the initialization method. If actions have different costs, you should override the step_cost method.
Step21: Two Location Vacuum World
Step26: Water Pouring Problem
Here is another problem domain, to show you how to define one. The idea is that we have a number of water jugs and a water tap and the goal is to measure out a specific amount of water (in, say, ounces or liters). You can completely fill or empty a jug, but because the jugs don't have markings on them, you can't partially fill them with a specific amount. You can, however, pour one jug into another, stopping when the seconfd is full or the first is empty.
Step27: Visualization Output
Step29: Random Grid
An environment where you can move in any of 4 directions, unless there is an obstacle there.
Step33: Finding a hard PourProblem
What solvable two-jug PourProblem requires the most steps? We can define the hardness as the number of steps, and then iterate over all PourProblems with capacities up to size M, keeping the hardest one. | Python Code:
romania = {
'A': ['Z', 'T', 'S'],
'B': ['F', 'P', 'G', 'U'],
'C': ['D', 'R', 'P'],
'D': ['M', 'C'],
'E': ['H'],
'F': ['S', 'B'],
'G': ['B'],
'H': ['U', 'E'],
'I': ['N', 'V'],
'L': ['T', 'M'],
'M': ['L', 'D'],
'N': ['I'],
'O': ['Z', 'S'],
'P': ['R', 'C', 'B'],
'R': ['S', 'C', 'P'],
'S': ['A', 'O', 'F', 'R'],
'T': ['A', 'L'],
'U': ['B', 'V', 'H'],
'V': ['U', 'I'],
'Z': ['O', 'A']}
Explanation: Note: This is not yet ready, but shows the direction I'm leaning in for Fourth Edition Search.
State-Space Search
This notebook describes several state-space search algorithms, and how they can be used to solve a variety of problems. We start with a simple algorithm and a simple domain: finding a route from city to city. Later we will explore other algorithms and domains.
The Route-Finding Domain
Like all state-space search problems, in a route-finding problem you will be given:
- A start state (for example, 'A' for the city Arad).
- A goal state (for example, 'B' for the city Bucharest).
- Actions that can change state (for example, driving from 'A' to 'S').
You will be asked to find:
- A path from the start state, through intermediate states, to the goal state.
We'll use this map:
<img src="http://robotics.cs.tamu.edu/dshell/cs625/images/map.jpg" height="366" width="603">
A state-space search problem can be represented by a graph, where the vertexes of the graph are the states of the problem (in this case, cities) and the edges of the graph are the actions (in this case, driving along a road).
We'll represent a city by its single initial letter.
We'll represent the graph of connections as a dict that maps each city to a list of the neighboring cities (connected by a road). For now we don't explicitly represent the actions, nor the distances
between cities.
End of explanation
romania['A']
Explanation: Suppose we want to get from A to B. Where can we go from the start state, A?
End of explanation
from collections import deque # Doubly-ended queue: pop from left, append to right.
def breadth_first(start, goal, neighbors):
"Find a shortest sequence of states from start to the goal."
frontier = deque([start]) # A queue of states
previous = {start: None} # start has no previous state; other states will
while frontier:
s = frontier.popleft()
if s == goal:
return path(previous, s)
for s2 in neighbors[s]:
if s2 not in previous:
frontier.append(s2)
previous[s2] = s
def path(previous, s):
"Return a list of states that lead to state s, according to the previous dict."
return [] if (s is None) else path(previous, previous[s]) + [s]
Explanation: We see that from A we can get to any of the three cities ['Z', 'T', 'S']. Which should we choose? We don't know. That's the whole point of search: we don't know which immediate action is best, so we'll have to explore, until we find a path that leads to the goal.
How do we explore? We'll start with a simple algorithm that will get us from A to B. We'll keep a frontier—a collection of not-yet-explored states—and expand the frontier outward until it reaches the goal. To be more precise:
Initially, the only state in the frontier is the start state, 'A'.
Until we reach the goal, or run out of states in the frontier to explore, do the following:
Remove the first state from the frontier. Call it s.
If s is the goal, we're done. Return the path to s.
Otherwise, consider all the neighboring states of s. For each one:
If we have not previously explored the state, add it to the end of the frontier.
Also keep track of the previous state that led to this new neighboring state; we'll need this to reconstruct the path to the goal, and to keep us from re-visiting previously explored states.
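For example, here is a short sketch of how the frontier evolves when searching from 'A' to 'B' on the romania map:
frontier = ['A']
pop 'A' -> frontier = ['Z', 'T', 'S']      (previous records that Z, T, S came from A)
pop 'Z' -> frontier = ['T', 'S', 'O']
pop 'T' -> frontier = ['S', 'O', 'L']
pop 'S' -> frontier = ['O', 'L', 'F', 'R']
...and so on, until 'B' is popped and following previous backwards yields ['A', 'S', 'F', 'B'].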
A Simple Search Algorithm: breadth_first
The function breadth_first implements this strategy:
End of explanation
breadth_first('A', 'B', romania)
breadth_first('L', 'N', romania)
breadth_first('N', 'L', romania)
breadth_first('E', 'E', romania)
Explanation: A couple of things to note:
We always add new states to the end of the frontier queue. That means that all the states that are adjacent to the start state will come first in the queue, then all the states that are two steps away, then three steps, etc.
That's what we mean by breadth-first search.
We recover the path to an end state by following the trail of previous[end] pointers, all the way back to start.
The dict previous is a map of {state: previous_state}.
When we finally get an s that is the goal state, we know we have found a shortest path, because any other state in the queue must correspond to a path that is as long or longer.
Note that previous contains all the states that are currently in frontier as well as all the states that were in frontier in the past.
If no path to the goal is found, then breadth_first returns None. If a path is found, it returns the sequence of states on the path.
Some examples:
End of explanation
from search import *
sgb_words = DataFile("EN-text/sgb-words.txt")
Explanation: Now let's try a different kind of problem that can be solved with the same search function.
Word Ladders Problem
A word ladder problem is this: given a start word and a goal word, find the shortest way to transform the start word into the goal word by changing one letter at a time, such that each change results in a word. For example starting with green we can reach grass in 7 steps:
green → greed → treed → trees → tress → cress → crass → grass
We will need a dictionary of words. We'll use 5-letter words from the Stanford GraphBase project for this purpose. Let's get that file from aimadata.
End of explanation
WORDS = set(sgb_words.read().split())
len(WORDS)
Explanation: We can assign WORDS to be the set of all the words in this file:
End of explanation
def neighboring_words(word):
"All words that are one letter away from this word."
neighbors = {word[:i] + c + word[i+1:]
for i in range(len(word))
for c in 'abcdefghijklmnopqrstuvwxyz'
if c != word[i]}
return neighbors & WORDS
Explanation: And define neighboring_words to return the set of all words that are a one-letter change away from a given word:
End of explanation
neighboring_words('hello')
neighboring_words('world')
Explanation: For example:
End of explanation
word_neighbors = {word: neighboring_words(word)
for word in WORDS}
Explanation: Now we can create word_neighbors as a dict of {word: {neighboring_word, ...}}:
End of explanation
breadth_first('green', 'grass', word_neighbors)
breadth_first('smart', 'brain', word_neighbors)
breadth_first('frown', 'smile', word_neighbors)
Explanation: Now the breadth_first function can be used to solve a word ladder problem:
End of explanation
def breadth_first_search(problem):
"Search for goal; paths with least number of steps first."
if problem.is_goal(problem.initial):
return Node(problem.initial)
frontier = FrontierQ(Node(problem.initial), LIFO=False)
explored = set()
while frontier:
node = frontier.pop()
explored.add(node.state)
for action in problem.actions(node.state):
child = node.child(problem, action)
if child.state not in explored and child.state not in frontier:
if problem.is_goal(child.state):
return child
frontier.add(child)
Explanation: More General Search Algorithms
Now we'll embellish the breadth_first algorithm to make a family of search algorithms with more capabilities:
We distinguish between an action and the result of an action.
We allow different measures of the cost of a solution (not just the number of steps in the sequence).
We search through the state space in an order that is more likely to lead to an optimal solution quickly.
Here's how we do these things:
Instead of having a graph of neighboring states, we instead have an object of type Problem. A Problem
has one method, Problem.actions(state) to return a collection of the actions that are allowed in a state,
and another method, Problem.result(state, action) that says what happens when you take an action.
We keep a set, explored of states that have already been explored. We also have a class, Frontier, that makes it efficient to ask if a state is on the frontier.
Each action has a cost associated with it (in fact, the cost can vary with both the state and the action).
The Frontier class acts as a priority queue, allowing the "best" state to be explored next.
We represent a sequence of actions and resulting states as a linked list of Node objects.
The algorithm breadth_first_search is basically the same as breadth_first, but using our new conventions:
End of explanation
def uniform_cost_search(problem, costfn=lambda node: node.path_cost):
frontier = FrontierPQ(Node(problem.initial), costfn)
explored = set()
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
explored.add(node.state)
for action in problem.actions(node.state):
child = node.child(problem, action)
if child.state not in explored and child not in frontier:
frontier.add(child)
elif child in frontier and frontier.cost[child] < child.path_cost:
frontier.replace(child)
Explanation: Next is uniform_cost_search, in which each step can have a different cost, and we still expand first the state with the minimum cost so far.
End of explanation
def astar_search(problem, heuristic):
costfn = lambda node: node.path_cost + heuristic(node.state)
return uniform_cost_search(problem, costfn)
Explanation: Finally, astar_search in which the cost includes an estimate of the distance to the goal as well as the distance travelled so far.
End of explanation
class Node(object):
    A node in a search tree. A search tree is a spanning tree over states.
A Node contains a state, the previous node in the tree, the action that
takes us from the previous state to this state, and the path cost to get to
this state. If a state is arrived at by two paths, then there are two nodes
with the same state.
def __init__(self, state, previous=None, action=None, step_cost=1):
"Create a search tree Node, derived from a previous Node by an action."
self.state = state
self.previous = previous
self.action = action
self.path_cost = 0 if previous is None else (previous.path_cost + step_cost)
def __repr__(self): return "<Node {}: {}>".format(self.state, self.path_cost)
def __lt__(self, other): return self.path_cost < other.path_cost
def child(self, problem, action):
"The Node you get by taking an action from this Node."
result = problem.result(self.state, action)
return Node(result, self, action,
problem.step_cost(self.state, action, result))
Explanation: Search Tree Nodes
The solution to a search problem is now a linked list of Nodes, where each Node
includes a state and the path_cost of getting to the state. In addition, for every Node except for the first (root) Node, there is a previous Node (indicating the state that led to this Node) and an action (indicating the action taken to get here).
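A tiny sketch, using the Node class above, of how costs accumulate along the chain of previous Nodes:
s  = Node('S')
sf = Node('F', previous=s,  action='S->F', step_cost=99)
sb = Node('B', previous=sf, action='F->B', step_cost=211)
sb.path_cost, sb.previous.state   # (310, 'F')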
End of explanation
from collections import OrderedDict
import heapq
class FrontierQ(OrderedDict):
"A Frontier that supports FIFO or LIFO Queue ordering."
def __init__(self, initial, LIFO=False):
Initialize Frontier with an initial Node.
If LIFO is True, pop from the end first; otherwise from front first.
self.LIFO = LIFO
self.add(initial)
def add(self, node):
"Add a node to the frontier."
self[node.state] = node
def pop(self):
"Remove and return the next Node in the frontier."
(state, node) = self.popitem(self.LIFO)
return node
def replace(self, node):
"Make this node replace the nold node with the same state."
del self[node.state]
self.add(node)
class FrontierPQ:
"A Frontier ordered by a cost function; a Priority Queue."
def __init__(self, initial, costfn=lambda node: node.path_cost):
"Initialize Frontier with an initial Node, and specify a cost function."
self.heap = []
self.states = {}
self.costfn = costfn
self.add(initial)
def add(self, node):
"Add node to the frontier."
cost = self.costfn(node)
heapq.heappush(self.heap, (cost, node))
self.states[node.state] = node
def pop(self):
"Remove and return the Node with minimum cost."
(cost, node) = heapq.heappop(self.heap)
self.states.pop(node.state, None) # remove state
return node
def replace(self, node):
"Make this node replace a previous node with the same state."
if node.state not in self:
raise ValueError('{} not there to replace'.format(node.state))
for (i, (cost, old_node)) in enumerate(self.heap):
if old_node.state == node.state:
self.heap[i] = (self.costfn(node), node)
heapq._siftdown(self.heap, 0, i)
return
def __contains__(self, state): return state in self.states
def __len__(self): return len(self.heap)
Explanation: Frontiers
A frontier is a collection of Nodes that acts like both a Queue and a Set. A frontier, f, supports these operations:
f.add(node): Add a node to the Frontier.
f.pop(): Remove and return the "best" node from the frontier.
f.replace(node): add this node and remove a previous node with the same state.
state in f: Test if some node in the frontier has arrived at state.
f[state]: returns the node corresponding to this state in frontier.
len(f): The number of Nodes in the frontier. When the frontier is empty, f is false.
We provide two kinds of frontiers: One for "regular" queues, either first-in-first-out (for breadth-first search) or last-in-first-out (for depth-first search), and one for priority queues, where you can specify what cost function on nodes you are trying to minimize.
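A tiny usage sketch of the priority-queue frontier defined above: whichever node is cheapest is popped first, regardless of insertion order.
a = Node('A')
f = FrontierPQ(a)
f.add(Node('S', previous=a, action='A->S', step_cost=140))
f.add(Node('Z', previous=a, action='A->Z', step_cost=75))
[f.pop().state for _ in range(3)]   # ['A', 'Z', 'S']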
End of explanation
class Problem(object):
The abstract class for a search problem.
def __init__(self, initial=None, goals=(), **additional_keywords):
Provide an initial state and optional goal states.
A subclass can have additional keyword arguments.
self.initial = initial # The initial state of the problem.
self.goals = goals # A collection of possibe goal states.
self.__dict__.update(**additional_keywords)
def actions(self, state):
"Return a list of actions executable in this state."
raise NotImplementedError # Override this!
def result(self, state, action):
"The state that results from executing this action in this state."
raise NotImplementedError # Override this!
def is_goal(self, state):
"True if the state is a goal."
return state in self.goals # Optionally override this!
def step_cost(self, state, action, result=None):
"The cost of taking this action from this state."
return 1 # Override this if actions have different costs
def action_sequence(node):
"The sequence of actions to get to this node."
actions = []
while node.previous:
actions.append(node.action)
node = node.previous
return actions[::-1]
def state_sequence(node):
"The sequence of states to get to this node."
states = [node.state]
while node.previous:
node = node.previous
states.append(node.state)
return states[::-1]
Explanation: Search Problems
Problem is the abstract class for all search problems. You can define your own class of problems as a subclass of Problem. You will need to override the actions and result method to describe how your problem works. You will also have to either override is_goal or pass a collection of goal states to the initialization method. If actions have different costs, you should override the step_cost method.
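As a minimal sketch, here is the route-finding domain from the start of this notebook recast as a Problem subclass (an action here is simply the next city), solved with the breadth_first_search and action_sequence helpers defined above:
class RouteProblem(Problem):
    def actions(self, state): return romania[state]
    def result(self, state, action): return action

node = breadth_first_search(RouteProblem(initial='A', goals={'B'}))
action_sequence(node)   # ['S', 'F', 'B']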
End of explanation
dirt = '*'
clean = ' '
class TwoLocationVacuumProblem(Problem):
A Vacuum in a world with two locations, and dirt.
Each state is a tuple of (location, dirt_in_W, dirt_in_E).
def actions(self, state): return ('W', 'E', 'Suck')
def is_goal(self, state): return dirt not in state
def result(self, state, action):
"The state that results from executing this action in this state."
(loc, dirtW, dirtE) = state
if action == 'W': return ('W', dirtW, dirtE)
elif action == 'E': return ('E', dirtW, dirtE)
elif action == 'Suck' and loc == 'W': return (loc, clean, dirtE)
elif action == 'Suck' and loc == 'E': return (loc, dirtW, clean)
else: raise ValueError('unknown action: ' + action)
problem = TwoLocationVacuumProblem(initial=('W', dirt, dirt))
result = uniform_cost_search(problem)
result
action_sequence(result)
state_sequence(result)
problem = TwoLocationVacuumProblem(initial=('E', clean, dirt))
result = uniform_cost_search(problem)
action_sequence(result)
Explanation: Two Location Vacuum World
End of explanation
class PourProblem(Problem):
Problem about pouring water between jugs to achieve some water level.
Each state is a tuples of levels. In the initialization, provide a tuple of
capacities, e.g. PourProblem(capacities=(8, 16, 32), initial=(2, 4, 3), goals={7}),
which means three jugs of capacity 8, 16, 32, currently filled with 2, 4, 3 units of
water, respectively, and the goal is to get a level of 7 in any one of the jugs.
def actions(self, state):
The actions executable in this state.
jugs = range(len(state))
return ([('Fill', i) for i in jugs if state[i] != self.capacities[i]] +
[('Dump', i) for i in jugs if state[i] != 0] +
[('Pour', i, j) for i in jugs for j in jugs if i != j])
def result(self, state, action):
The state that results from executing this action in this state.
result = list(state)
act, i, j = action[0], action[1], action[-1]
if act == 'Fill': # Fill i to capacity
result[i] = self.capacities[i]
elif act == 'Dump': # Empty i
result[i] = 0
elif act == 'Pour':
a, b = state[i], state[j]
result[i], result[j] = ((0, a + b)
if (a + b <= self.capacities[j]) else
(a + b - self.capacities[j], self.capacities[j]))
else:
raise ValueError('unknown action', action)
return tuple(result)
def is_goal(self, state):
True if any of the jugs has a level equal to one of the goal levels.
return any(level in self.goals for level in state)
p7 = PourProblem(initial=(2, 0), capacities=(5, 13), goals={7})
p7.result((2, 0), ('Fill', 1))
result = uniform_cost_search(p7)
action_sequence(result)
Explanation: Water Pouring Problem
Here is another problem domain, to show you how to define one. The idea is that we have a number of water jugs and a water tap and the goal is to measure out a specific amount of water (in, say, ounces or liters). You can completely fill or empty a jug, but because the jugs don't have markings on them, you can't partially fill them with a specific amount. You can, however, pour one jug into another, stopping when the second is full or the first is empty.
End of explanation
def showpath(searcher, problem):
"Show what happens when searcvher solves problem."
problem = Instrumented(problem)
print('\n{}:'.format(searcher.__name__))
result = searcher(problem)
if result:
actions = action_sequence(result)
state = problem.initial
path_cost = 0
for steps, action in enumerate(actions, 1):
path_cost += problem.step_cost(state, action, 0)
result = problem.result(state, action)
            print(' {} =={}==> {}; cost {} after {} steps{}'
.format(state, action, result, path_cost, steps,
'; GOAL!' if problem.is_goal(result) else ''))
state = result
msg = 'GOAL FOUND' if result else 'no solution'
print('{} after {} results and {} goal checks'
.format(msg, problem._counter['result'], problem._counter['is_goal']))
from collections import Counter
class Instrumented:
"Instrument an object to count all the attribute accesses in _counter."
def __init__(self, obj):
self._object = obj
self._counter = Counter()
def __getattr__(self, attr):
self._counter[attr] += 1
return getattr(self._object, attr)
showpath(uniform_cost_search, p7)
p = PourProblem(initial=(0, 0), capacities=(7, 13), goals={2})
showpath(uniform_cost_search, p)
class GreenPourProblem(PourProblem):
def step_cost(self, state, action, result=None):
"The cost is the amount of water used in a fill."
if action[0] == 'Fill':
i = action[1]
return self.capacities[i] - state[i]
return 0
p = GreenPourProblem(initial=(0, 0), capacities=(7, 13), goals={2})
showpath(uniform_cost_search, p)
def compare_searchers(problem, searchers=None):
"Apply each of the search algorithms to the problem, and show results"
if searchers is None:
searchers = (breadth_first_search, uniform_cost_search)
for searcher in searchers:
showpath(searcher, problem)
compare_searchers(p)
Explanation: Visualization Output
End of explanation
import random
N, S, E, W = DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
def Grid(width, height, obstacles=0.1):
A 2-D grid, width x height, with obstacles that are either a collection of points,
or a fraction between 0 and 1 indicating the density of obstacles, chosen at random.
grid = {(x, y) for x in range(width) for y in range(height)}
if isinstance(obstacles, (float, int)):
obstacles = random.sample(grid, int(width * height * obstacles))
def neighbors(x, y):
for (dx, dy) in DIRECTIONS:
(nx, ny) = (x + dx, y + dy)
if (nx, ny) not in obstacles and 0 <= nx < width and 0 <= ny < height:
yield (nx, ny)
return {(x, y): list(neighbors(x, y))
for x in range(width) for y in range(height)}
Grid(5, 5)
class GridProblem(Problem):
"Create with a call like GridProblem(grid=Grid(10, 10), initial=(0, 0), goal=(9, 9))"
def actions(self, state): return DIRECTIONS
def result(self, state, action):
#print('ask for result of', state, action)
(x, y) = state
(dx, dy) = action
r = (x + dx, y + dy)
return r if r in self.grid[state] else state
gp = GridProblem(grid=Grid(5, 5, 0.3), initial=(0, 0), goals={(4, 4)})
showpath(uniform_cost_search, gp)
Explanation: Random Grid
An environment where you can move in any of 4 directions, unless there is an obstacle there.
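As a sketch, the same random grid can also be solved with astar_search, using Manhattan distance to the (4, 4) goal as an admissible heuristic (if the random obstacles happen to block every path, showpath simply reports no solution):
manhattan = lambda state: abs(state[0] - 4) + abs(state[1] - 4)
def astar_manhattan(problem): return astar_search(problem, manhattan)
showpath(astar_manhattan, gp)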
End of explanation
def hardness(problem):
L = breadth_first_search(problem)
#print('hardness', problem.initial, problem.capacities, problem.goals, L)
return len(action_sequence(L)) if (L is not None) else 0
hardness(p7)
action_sequence(breadth_first_search(p7))
C = 9 # Maximum capacity to consider
phard = max((PourProblem(initial=(a, b), capacities=(A, B), goals={goal})
for A in range(C+1) for B in range(C+1)
for a in range(A) for b in range(B)
for goal in range(max(A, B))),
key=hardness)
phard.initial, phard.capacities, phard.goals
showpath(breadth_first_search, PourProblem(initial=(0, 0), capacities=(7, 9), goals={8}))
showpath(uniform_cost_search, phard)
class GridProblem(Problem):
A Grid.
def actions(self, state): return ['N', 'S', 'E', 'W']
def result(self, state, action):
The state that results from executing this action in this state.
(W, H) = self.size
if action == 'N' and state > W: return state - W
if action == 'S' and state + W < W * W: return state + W
if action == 'E' and (state + 1) % W !=0: return state + 1
if action == 'W' and state % W != 0: return state - 1
return state
compare_searchers(GridProblem(initial=0, goals={44}, size=(10, 10)))
def test_frontier():
#### Breadth-first search with FIFO Q
f = FrontierQ(Node(1), LIFO=False)
assert 1 in f and len(f) == 1
f.add(Node(2))
f.add(Node(3))
assert 1 in f and 2 in f and 3 in f and len(f) == 3
assert f.pop().state == 1
assert 1 not in f and 2 in f and 3 in f and len(f) == 2
assert f
assert f.pop().state == 2
assert f.pop().state == 3
assert not f
#### Depth-first search with LIFO Q
f = FrontierQ(Node('a'), LIFO=True)
for s in 'bcdef': f.add(Node(s))
assert len(f) == 6 and 'a' in f and 'c' in f and 'f' in f
for s in 'fedcba': assert f.pop().state == s
assert not f
#### Best-first search with Priority Q
f = FrontierPQ(Node(''), lambda node: len(node.state))
assert '' in f and len(f) == 1 and f
for s in ['book', 'boo', 'bookie', 'bookies', 'cook', 'look', 'b']:
assert s not in f
f.add(Node(s))
assert s in f
assert f.pop().state == ''
assert f.pop().state == 'b'
assert f.pop().state == 'boo'
assert {f.pop().state for _ in '123'} == {'book', 'cook', 'look'}
assert f.pop().state == 'bookie'
#### Romania: Two paths to Bucharest; cheapest one found first
S = Node('S')
SF = Node('F', S, 'S->F', 99)
SFB = Node('B', SF, 'F->B', 211)
SR = Node('R', S, 'S->R', 80)
SRP = Node('P', SR, 'R->P', 97)
SRPB = Node('B', SRP, 'P->B', 101)
f = FrontierPQ(S)
f.add(SF); f.add(SR), f.add(SRP), f.add(SRPB); f.add(SFB)
def cs(n): return (n.path_cost, n.state) # cs: cost and state
assert cs(f.pop()) == (0, 'S')
assert cs(f.pop()) == (80, 'R')
assert cs(f.pop()) == (99, 'F')
assert cs(f.pop()) == (177, 'P')
assert cs(f.pop()) == (278, 'B')
return 'test_frontier ok'
test_frontier()
%matplotlib inline
import matplotlib.pyplot as plt
p = plt.plot([i**2 for i in range(10)])
plt.savefig('destination_path.eps', format='eps', dpi=1200)
import itertools
import random
# http://stackoverflow.com/questions/10194482/custom-matplotlib-plot-chess-board-like-table-with-colored-cells
from matplotlib.table import Table
def main():
grid_table(8, 8)
plt.axis('scaled')
plt.show()
def grid_table(nrows, ncols):
fig, ax = plt.subplots()
ax.set_axis_off()
colors = ['white', 'lightgrey', 'dimgrey']
tb = Table(ax, bbox=[0,0,2,2])
for i,j in itertools.product(range(ncols), range(nrows)):
tb.add_cell(i, j, 2./ncols, 2./nrows, text='{:0.2f}'.format(0.1234),
loc='center', facecolor=random.choice(colors), edgecolor='grey') # facecolors=
ax.add_table(tb)
#ax.plot([0, .3], [.2, .2])
#ax.add_line(plt.Line2D([0.3, 0.5], [0.7, 0.7], linewidth=2, color='blue'))
return fig
main()
import collections
class defaultkeydict(collections.defaultdict):
Like defaultdict, but the default_factory is a function of the key.
>>> d = defaultkeydict(abs); d[-42]
42
def __missing__(self, key):
self[key] = self.default_factory(key)
return self[key]
Explanation: Finding a hard PourProblem
What solvable two-jug PourProblem requires the most steps? We can define the hardness as the number of steps, and then iterate over all PourProblems with capacities up to size M, keeping the hardest one.
End of explanation |
9,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, let's make a random binding network. We use the same structure as the circular convolution network
Step1: This seems to give us something like binding. But, can we now unbind?
To do this, we build anothe binding network, feed in the bound result and one of the two inputs, and then do PES to learn the function that decodes out the other input.
Step2: Here are the norms of the vectors, just to make sure we're in the right ranges
Step3: Here's one of the dimensions plotted at various points in the learning process
Step4: And here's the cosine of the angle between the output and the ideal output | Python Code:
D = 16
D_bind = 32
scaling_fudge_factor = 2.0
model = spa.Network()
model.config[nengo.Ensemble].neuron_type=nengo.LIFRate()
with model:
in1 = spa.State(D)
in2 = spa.State(D)
out = spa.State(D)
bind = nengo.networks.Product(n_neurons=50, dimensions=D_bind)
T1 = np.random.normal(size=(D_bind, D))
T2 = np.random.normal(size=(D_bind, D))
T3 = np.random.normal(size=(D, D_bind))
T1 = T1 / np.linalg.norm(T1, axis=1)[:, None]*np.sqrt(D)
T2 = T2 / np.linalg.norm(T2, axis=1)[:, None]*np.sqrt(D)
T3 = T3 / np.linalg.norm(T3, axis=1)[:, None]*scaling_fudge_factor/np.sqrt(D)
nengo.Connection(in1.output, bind.input_a, transform=T1)
nengo.Connection(in2.output, bind.input_b, transform=T2)
nengo.Connection(bind.output, out.input, transform=T3)
p_in1 = nengo.Probe(in1.output, synapse=0.01)
p_in2 = nengo.Probe(in2.output, synapse=0.01)
p_bind = nengo.Probe(bind.output, synapse=0.01)
p_bind_in1 = nengo.Probe(bind.input_a, synapse=0.01)
p_out = nengo.Probe(out.output, synapse=0.01)
stim1 = nengo.Node(nengo.processes.WhiteSignal(high=0.5, period=10.0, rms=1.0/np.sqrt(D)), size_out=D)
nengo.Connection(stim1, in1.input)
stim2 = nengo.Node(nengo.processes.WhiteSignal(high=0.5, period=10.0, rms=1.0/np.sqrt(D)), size_out=D)
nengo.Connection(stim2, in2.input)
sim = nengo.Simulator(model)
sim.run(10)
pylab.plot(sim.trange(), np.linalg.norm(sim.data[p_in1], axis=1), label='in1')
pylab.plot(sim.trange(), np.linalg.norm(sim.data[p_in2], axis=1), label='in2')
pylab.plot(sim.trange(), np.linalg.norm(sim.data[p_out], axis=1), label='out')
pylab.legend(loc='best')
pylab.show()
Explanation: First, let's make a random binding network. We use the same structure as the circular convolution network: a hidden layer optimized to do pairwise products, and linear transforms into and out of it. But, instead of using the DFT matrix, we randomly generate the matrices.
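For reference, the binding operation that the standard nengo.networks.CircularConvolution network computes can be written directly with FFTs; this is a sketch of the target computation only and is not used in the model above.
a, b = np.random.normal(size=(2, 16)) / np.sqrt(16)
bound = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=16)   # circular convolution of a and b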
End of explanation
D = 2
D_bind = 16
scaling_fudge_factor = 2.0
T = 500.0
learning_rate = 1e-5 / D_bind
model = spa.Network()
model.config[nengo.Ensemble].neuron_type=nengo.LIFRate()
with model:
in1 = spa.State(D, subdimensions=D)
in2 = spa.State(D, subdimensions=D)
out = spa.State(D, subdimensions=D)
bind = nengo.networks.Product(n_neurons=50, dimensions=D_bind)
unbind = nengo.networks.Product(n_neurons=50, dimensions=D_bind)
unbind_out = nengo.Node(None, size_in=D)
error = nengo.Node(None, size_in=D)
T1 = np.random.normal(size=(D_bind, D))
T2 = np.random.normal(size=(D_bind, D))
T3 = np.random.normal(size=(D, D_bind))
T1 = T1 / np.linalg.norm(T1, axis=1)[:, None]*np.sqrt(D)
T2 = T2 / np.linalg.norm(T2, axis=1)[:, None]*np.sqrt(D)
T3 = T3 / np.linalg.norm(T3, axis=1)[:, None]*scaling_fudge_factor/np.sqrt(D)
nengo.Connection(in1.output, bind.input_a, transform=T1)
nengo.Connection(in2.output, bind.input_b, transform=T2)
nengo.Connection(bind.output, out.input, transform=T3)
T4 = np.random.normal(size=(D_bind, D))
T5 = np.random.normal(size=(D_bind, D))
T4 = T4 / np.linalg.norm(T4, axis=1)[:, None]*np.sqrt(D)
T5 = T5 / np.linalg.norm(T5, axis=1)[:, None]*np.sqrt(D)
nengo.Connection(out.output, unbind.input_a, transform=T4)
nengo.Connection(in2.output, unbind.input_b, transform=T5)
for ens in unbind.all_ensembles:
c = nengo.Connection(ens, unbind_out, learning_rule_type=nengo.PES(learning_rate=learning_rate),
function=lambda x: np.zeros(D))
nengo.Connection(error, c.learning_rule)
nengo.Connection(unbind_out, error)
nengo.Connection(in1.output, error, transform=-1)
p_in1 = nengo.Probe(in1.output, synapse=0.01)
p_error = nengo.Probe(error, synapse=0.01)
p_unbind_out = nengo.Probe(unbind_out, synapse=0.01)
stim1 = nengo.Node(nengo.processes.WhiteSignal(high=0.5, period=T, rms=1.0/np.sqrt(D)), size_out=D)
nengo.Connection(stim1, in1.input)
stim2 = nengo.Node(nengo.processes.WhiteSignal(high=0.5, period=T, rms=1.0/np.sqrt(D)), size_out=D)
nengo.Connection(stim2, in2.input)
sim = nengo.Simulator(model)
sim.run(T)
Explanation: This seems to give us something like binding. But, can we now unbind?
To do this, we build another binding network, feed in the bound result and one of the two inputs, and then do PES to learn the function that decodes out the other input.
End of explanation
pylab.plot(sim.trange(), np.linalg.norm(sim.data[p_in1], axis=1), label='in1')
pylab.plot(sim.trange(), np.linalg.norm(sim.data[p_error], axis=1), label='error')
pylab.plot(sim.trange(), np.linalg.norm(sim.data[p_unbind_out], axis=1), label='unbind_out')
pylab.legend(loc='best')
pylab.xlim(450,500)
pylab.show()
Explanation: Here are the norms of the vectors, just to make sure we're in the right ranges
End of explanation
i = 0
T_window = 10
pylab.figure(figsize=(12,4))
pylab.subplot(1, 3, 1)
pylab.plot(sim.trange(), sim.data[p_in1][:,i], label='in1[%d]'%i)
pylab.plot(sim.trange(), sim.data[p_unbind_out][:,i], label='unbind_out[%d]'%i)
pylab.legend(loc='best')
pylab.xlim(0,T_window)
pylab.subplot(1, 3, 2)
pylab.plot(sim.trange(), sim.data[p_in1][:,i], label='in1[%d]'%i)
pylab.plot(sim.trange(), sim.data[p_unbind_out][:,i], label='unbind_out[%d]'%i)
pylab.legend(loc='best')
pylab.xlim((T-T_window)/2,(T+T_window)/2)
pylab.subplot(1, 3, 3)
pylab.plot(sim.trange(), sim.data[p_in1][:,i], label='in1[%d]'%i)
pylab.plot(sim.trange(), sim.data[p_unbind_out][:,i], label='unbind_out[%d]'%i)
pylab.legend(loc='best')
pylab.xlim(T-T_window,T)
pylab.show()
Explanation: Here's one of the dimensions plotted at various points in the learning process
End of explanation
ideal = sim.data[p_in1]
actual = sim.data[p_unbind_out]
ideal = ideal / np.linalg.norm(ideal, axis=1)[:,None]
actual = actual / np.linalg.norm(actual, axis=1)[:,None]
prod = ideal*actual
cos_a = np.sum(prod, axis=1)
cos_a[np.isnan(cos_a)] = 0
pylab.plot(sim.trange(), nengo.synapses.Lowpass(10.0).filt(cos_a, dt=0.001))
pylab.show()
Explanation: And here's the cosine of the angle between the output and the ideal output
End of explanation |
9,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Neural Network Potentials
An area of significant recent interest is the use of neural networks to model quantum mechanics. Since directly (or approximately) solving Schrodinger's equation is extremely expensive, these techniques offers the tantalizing possibility of conducting large-scale and high-fidelity experiments of materials as well as chemical and biochemical systems.
\
Usually, neural networks are fit to energies computed from Density Functional Theory (DFT). DFT is a ubiquitous ab initio formalism for approximating solutions to Schrodinger's equation. It offers a balance between accuracy and speed; DFT is much faster than more precise solutions to quantum systems, but is fast enough to use on systems of hundreds of atoms. Nonetheless, DFT calculations scale as $\mathcal O(N^3)$ and so they are prohibitively expensive to run on large systems or for long simulation trajectories.
\
As with many areas of machine learning, early efforts to fit quantum mechanical interactions with neural networks relied on fixed feature methods with shallow neural network potentials. Lately, however, these networks have been replaced by deeper graph neural network architectures that learn salient features. JAX MD includes both popular fixed-feature methods as well as graph neural networks.
\
Here we will use JAX MD to fit a state-of-the-art graph neural network to open-source DFT data from a 64-atom Silicon system that accompanied a recent paper. This Silicon system was simulated at several different temperatures. We will uniformly sample data from these trajectories to construct training and test sets. We will follow modern best-practices and fit to both energies and forces computed using DFT. We will then use this network to run a simulation using JAX MDs simulation environments. To start with we first download the data. This might take a little a minute or two.
Step2: We will then load the data using a small utility function into training and test sets. Each split will include particle positions, whole-system energies, and per-particle forces. To assist in training we will compute the mean and standard deviation of the data and use this to set the initial scale for our neural network.
Step3: Next we create a space for our systems to live in using periodic boundary conditions.
Step4: We can now instantiate a graph neural network using the energy.graph_network_neighbor_list command. This neural network is based on recent work modelling defects in disordered solids. See that paper or the review by Battaglia et al. for details. We will add edges between all neighbors that are separated by less than a cutoff of 3 Angstroms. In JAX MD neural network potentials are specified by a triple of functions
Step5: To start with, we construct an initial neighbor list which will be used to estimate the maximum number of neighbors. This is necessary since XLA needs to have static shapes to enable JIT compilation. See here for details.
Step6: Using this neighbor prototype we will write a wrapper around our neural network energy function that will construct a neighbor list for a given state and then compute the energy. This is helpful because it allows us to use JAX's automatic vectorization via vmap along with our neighbor lists. Using JAX's automatic differentiation we can also write down a function that computes the force due to our neural network potential.
Note that if we were running a simulation using this energy, we would only rebuild the neighbor list when necessary.
Step7: Next we will initialize the parameters of the graph network. This is done by providing the init_fn with a random key as well as an example input. As with the neighbor lists, this example input is used to deduce the shape of the various parameters.
Step8: Now, we can use JAX's automatic vectorization via vmap to compute predicted energies for all of the states using the untrained network.
Step9: Despite the fact that the neural network is untrained we see that the outputs of the graph network correlate strongly with the labels. This hints that perhaps graph networks provide some sort of "deep molecular prior".
Next, we define losses for the energy and the force as well as a total loss that combines the two terms. We fit both the force and the energy using Mean-Squared-Error (MSE) loss.
Step10: Now we create an optimizer using ADAM with gradient clipping. We will also write helper functions to perform a single update step and perform an entire epochs worth of updates.
Step11: Finally, we will write a function that creates an epoch's worth of batches given a lookup table that shuffles all of the states in the training set.
Step12: We're now ready to train our network. We'll start by training for twenty epochs to make sure it starts training.
Step13: While we see that the network has begun to learn the energies, we also see that it has a long way to go before the predictions get good enough to use in a simulation. As such we're going to take inspiration from cooking shows, and take a ready-made GNN out of the fridge where it has been training overnight for 12,000 epochs on a V100 GPU.
Step14: Using our trained model we plot the predicted energies and forces against the labels.
Step15: We see that the model prediction for the energy is extremely accurate and the force prediction is reasonable. To make this a bit more quantitative, we can compute the RMSE of the energy and convert it to meV / atom.
Step16: We see that we get an error of about $2$ meV / atom, which is comparable to previous work on this system.
Now that we have a well-performing neural network, we can see how easily this network can be used to run a simulation approximating Silicon. We will run a constant temperature simulation using a Nose-Hoover thermostat. First, we "bake" the params into the energy function using partial evaluation.
Step17: Then, we setup the parameters of the simulation and create the simulation environment.
Step18: Finally we run the simulation for 10000 steps while writing the energy and temperature throughout.
Step19: We see that the energy of the simulation is reasonable and the temperature is stable. Of course, if we were validating this model for use in a research setting there are many measurements that one would like to perform to check its fidelity.
We can now draw the simulation to see what is happening. | Python Code:
#@title Imports & Utils
!pip install -q git+https://www.github.com/deepmind/haiku
!pip install -q git+https://www.github.com/deepmind/optax
!pip install -q --upgrade git+https://www.github.com/google/jax-md
# Imports
import os
import numpy as onp
import pickle
import jax
from jax import lax
from jax import jit, vmap, grad
# TODO: Re-enable x64 mode after XLA bug fix.
# from jax.config import config ; config.update('jax_enable_x64', True)
import warnings
warnings.simplefilter('ignore')
import jax.numpy as np
from jax import random
import optax
from jax_md import energy, space, simulate, quantity
# Plotting.
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pylab as pl
from IPython import display
from functools import partial
sns.set_style(style='white')
sns.set(font_scale=1.6)
def format_plot(x, y):
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_facecolor('white')
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
def draw_training(params):
display.clear_output(wait=True)
display.display(plt.gcf())
plt.subplot(1, 2, 1)
plt.semilogy(train_energy_error)
plt.semilogy(test_energy_error)
plt.xlim([0, train_epochs])
format_plot('Epoch', '$L$')
plt.subplot(1, 2, 2)
predicted = vectorized_energy_fn(params, example_positions)
plt.plot(example_energies, predicted, 'o')
plt.plot(np.linspace(-400, -300, 10), np.linspace(-400, -300, 10), '--')
format_plot('$E_{label}$', '$E_{prediction}$')
finalize_plot((2, 1))
plt.show()
# Data Loading.
def MD_trajectory_reader(f, no_skip=20):
filename = os.path.join('Supplementary/', f)
fo = open(filename, 'r')
samples = fo.read().split('iter= ')[1:]
steps = []
lattice_vectors = []
positions = []
forces = []
temperatures = []
energies = []
for sample in samples[::no_skip]:
entries = sample.split('\n')
steps.append(int(entries[0]))
lattice_vectors.append(onp.array([list(map(float, lv.split())) for lv in entries[1:4]]))
assert entries[4]=='64'
temp = onp.array([list(map(float, lv.split()[1:])) for lv in entries[5:69]])
positions.append(temp[:,:3])
forces.append(temp[:,3:])
remaining_lines = entries[69:]
temperatures.append(float([entry for entry in entries[69:] if 'Temp' in entry ][0].split('=')[1].split()[0]))
energies.append(float([entry for entry in entries[69:] if 'el-ion E' in entry ][0].split('=')[1].split()[0]))
assert (len(set(steps))-(steps[-1]-steps[0]+1)/no_skip) < 1
return np.array(positions), np.array(energies), np.array(forces)
def build_dataset():
no_skip = 15
data300, energies300, forces300 = MD_trajectory_reader(
'MD_DATA.cubic_300K', no_skip=no_skip)
data600, energies600, forces600 = MD_trajectory_reader(
'MD_DATA.cubic_600K', no_skip=no_skip)
data900, energies900, forces900 = MD_trajectory_reader(
'MD_DATA.cubic_900K', no_skip=no_skip)
dataliq, energiesliq, forcesliq = MD_trajectory_reader(
'MD_DATA.liq_1', no_skip=no_skip)
all_data = np.vstack((data300, data600, data900))
all_energies = np.hstack((energies300, energies600, energies900))
all_forces = np.vstack((forces300, forces600, forces900))
noTotal = all_data.shape[0]
onp.random.seed(0)
II = onp.random.permutation(range(noTotal))
all_data = all_data[II]
all_energies = all_energies[II]
all_forces = all_forces[II]
noTr = int(noTotal * 0.65)
noTe = noTotal - noTr
train_data = all_data[:noTr]
test_data = all_data[noTr:]
train_energies = all_energies[:noTr]
test_energies = all_energies[noTr:]
train_forces = all_forces[:noTr]
test_forces = all_forces[noTr:]
return ((train_data, train_energies, train_forces),
(test_data, test_energies, test_forces))
Explanation: <a href="https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/neural_networks.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
#@title Download Data
!wget https://aip.scitation.org/doi/suppl/10.1063/1.4990503/suppl_file/supplementary.zip
!wget https://raw.githubusercontent.com/google/jax-md/main/examples/models/si_gnn.pickle
!unzip supplementary.zip
Explanation: Neural Network Potentials
An area of significant recent interest is the use of neural networks to model quantum mechanics. Since directly (or approximately) solving Schrodinger's equation is extremely expensive, these techniques offer the tantalizing possibility of conducting large-scale and high-fidelity experiments on materials as well as chemical and biochemical systems.
\
Usually, neural networks are fit to energies computed from Density Functional Theory (DFT). DFT is a ubiquitous ab initio formalism for approximating solutions to Schrodinger's equation. It offers a balance between accuracy and speed; DFT is much faster than more precise solutions to quantum systems, but is fast enough to use on systems of hundreds of atoms. Nonetheless, DFT calculations scale as $\mathcal O(N^3)$ and so they are prohibitively expensive to run on large systems or for long simulation trajectories.
\
As with many areas of machine learning, early efforts to fit quantum mechanical interactions with neural networks relied on fixed feature methods with shallow neural network potentials. Lately, however, these networks have been replaced by deeper graph neural network architectures that learn salient features. JAX MD includes both popular fixed-feature methods as well as graph neural networks.
\
Here we will use JAX MD to fit a state-of-the-art graph neural network to open-source DFT data from a 64-atom Silicon system that accompanied a recent paper. This Silicon system was simulated at several different temperatures. We will uniformly sample data from these trajectories to construct training and test sets. We will follow modern best-practices and fit to both energies and forces computed using DFT. We will then use this network to run a simulation using JAX MD's simulation environments. To start with, we first download the data. This might take a minute or two.
End of explanation
train, test = build_dataset()
positions, energies, forces = train
test_positions, test_energies, test_forces = test
energy_mean = np.mean(energies)
energy_std = np.std(energies)
print('positions.shape = {}'.format(positions.shape))
print('<E> = {}'.format(energy_mean))
Explanation: We will then load the data using a small utility function into training and test sets. Each split will include particle positions, whole-system energies, and per-particle forces. To assist in training we will compute the mean and standard deviation of the data and use this to set the initial scale for our neural network.
End of explanation
box_size = 10.862 # The size of the simulation region.
displacement, shift = space.periodic(box_size)
Explanation: Next we create a space for our systems to live in using periodic boundary conditions.
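A quick sanity check of the periodic geometry (a sketch): two points just across the boundary are about 0.2 Angstroms apart, not about 10.7.
ra = np.array([0.10, 5.0, 5.0])
rb = np.array([10.762, 5.0, 5.0])
float(np.linalg.norm(displacement(ra, rb)))   # ~0.2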
End of explanation
neighbor_fn, init_fn, energy_fn = energy.graph_network_neighbor_list(
displacement, box_size, r_cutoff=3.0, dr_threshold=0.0)
Explanation: We can now instantiate a graph neural network using the energy.graph_network_neighbor_list command. This neural network is based on recent work modelling defects in disordered solids. See that paper or the review by Battaglia et al. for details. We will add edges between all neighbors that are separated by less than a cutoff of 3 Angstroms. In JAX MD neural network potentials are specified by a triple of functions: a neighbor_fn that creates a list of neighbors that reside within the cutoff, an init_fn that initializes the parameters of the network, and an energy_fn that evaluates the model.
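A small sketch of how the three pieces fit together (the next few cells do exactly this, step by step): build a neighbor list for one state, initialize parameters from that example, then evaluate a scalar energy.
nbrs_example = neighbor_fn.allocate(positions[0])
params_example = init_fn(random.PRNGKey(1), positions[0], nbrs_example)
energy_fn(params_example, positions[0], nbrs_example)   # one scalar energy per state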
End of explanation
neighbor = neighbor_fn.allocate(positions[0], extra_capacity=6)
print('Allocating space for at most {} edges'.format(neighbor.idx.shape[1]))
Explanation: To start with, we construct an initial neighbor list which will be used to estimate the maximum number of neighbors. This is necessary since XLA needs to have static shapes to enable JIT compilation. See here for details.
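In a long simulation we would re-use this neighbor list, updating it cheaply each step and re-allocating only when it overflows; a minimal sketch of that pattern:
nbrs = neighbor.update(positions[1])
if nbrs.did_buffer_overflow:
    nbrs = neighbor_fn.allocate(positions[1], extra_capacity=6)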
End of explanation
@jit
def train_energy_fn(params, R):
_neighbor = neighbor.update(R)
return energy_fn(params, R, _neighbor)
# Vectorize over states, not parameters.
vectorized_energy_fn = vmap(train_energy_fn, (None, 0))
grad_fn = grad(train_energy_fn, argnums=1)
force_fn = lambda params, R, **kwargs: -grad_fn(params, R)
vectorized_force_fn = vmap(force_fn, (None, 0))
Explanation: Using this neighbor prototype we will write a wrapper around our neural network energy function that will construct a neighbor list for a given state and then compute the energy. This is helpful because it allows us to use JAX's automatic vectorization via vmap along with our neighbor lists. Using JAX's automatic differentiation we can also write down a function that computes the force due to our neural network potential.
Note that if we were running a simulation using this energy, we would only rebuild the neighbor list when necessary.
End of explanation
key = random.PRNGKey(0)
params = init_fn(key, positions[0], neighbor)
Explanation: Next we will initialize the parameters of the graph network. This is done by providing the init_fn with a random key as well as an example input. As with the neighbor lists, this example input is used to deduce the shape of the various parameters.
End of explanation
n_predictions = 500
example_positions = positions[:n_predictions]
example_energies = energies[:n_predictions]
example_forces = forces[:n_predictions]
predicted = vmap(train_energy_fn, (None, 0))(params, example_positions)
plt.plot(example_energies, predicted, 'o')
format_plot('$E_{label}$', '$E_{predicted}$')
finalize_plot((1, 1))
Explanation: Now, we can use JAX's automatic vectorization via vmap to compute predicted energies for all of the states using the untrained network.
End of explanation
@jit
def energy_loss(params, R, energy_targets):
return np.mean((vectorized_energy_fn(params, R) - energy_targets) ** 2)
@jit
def force_loss(params, R, force_targets):
dforces = vectorized_force_fn(params, R) - force_targets
return np.mean(np.sum(dforces ** 2, axis=(1, 2)))
@jit
def loss(params, R, targets):
return energy_loss(params, R, targets[0]) + force_loss(params, R, targets[1])
Explanation: Despite the fact that the neural network is untrained we see that the outputs of the graph network correlate strongly with the labels. This hints that perhaps graph networks provide some sort of "deep molecular prior".
Next, we define losses for the energy and the force as well as a total loss that combines the two terms. We fit both the force and the energy using Mean-Squared-Error (MSE) loss.
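In practice the two terms are often given different weights; a small variant of the loss above might look like the following (force_weight is a hypothetical hyperparameter, not something defined elsewhere in this notebook):
force_weight = 10.0  # hypothetical relative weight of the force term
@jit
def weighted_loss(params, R, targets):
    return energy_loss(params, R, targets[0]) + force_weight * force_loss(params, R, targets[1])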
End of explanation
opt = optax.chain(optax.clip_by_global_norm(1.0),
optax.adam(1e-3))
@jit
def update_step(params, opt_state, R, labels):
updates, opt_state = opt.update(grad(loss)(params, R, labels),
opt_state)
return optax.apply_updates(params, updates), opt_state
@jit
def update_epoch(params_and_opt_state, batches):
def inner_update(params_and_opt_state, batch):
params, opt_state = params_and_opt_state
b_xs, b_labels = batch
return update_step(params, opt_state, b_xs, b_labels), 0
return lax.scan(inner_update, params_and_opt_state, batches)[0]
Explanation: Now we create an optimizer using Adam with gradient clipping. We also write helper functions that perform a single update step and an entire epoch's worth of updates.
End of explanation
dataset_size = positions.shape[0]
batch_size = 128
lookup = onp.arange(dataset_size)
onp.random.shuffle(lookup)
@jit
def make_batches(lookup):
batch_Rs = []
batch_Es = []
batch_Fs = []
for i in range(0, len(lookup), batch_size):
if i + batch_size > len(lookup):
break
idx = lookup[i:i + batch_size]
batch_Rs += [positions[idx]]
batch_Es += [energies[idx]]
batch_Fs += [forces[idx]]
return np.stack(batch_Rs), np.stack(batch_Es), np.stack(batch_Fs)
batch_Rs, batch_Es, batch_Fs = make_batches(lookup)
Explanation: Finally, we will write a function that creates an epoch's worth of batches given a lookup table that shuffles all of the states in the training set.
End of explanation
train_epochs = 20
opt_state = opt.init(params)
train_energy_error = []
test_energy_error = []
for iteration in range(train_epochs):
train_energy_error += [float(np.sqrt(energy_loss(params, batch_Rs[0], batch_Es[0])))]
test_energy_error += [float(np.sqrt(energy_loss(params, test_positions, test_energies)))]
draw_training(params)
params, opt_state = update_epoch((params, opt_state),
(batch_Rs, (batch_Es, batch_Fs)))
onp.random.shuffle(lookup)
batch_Rs, batch_Es, batch_Fs = make_batches(lookup)
Explanation: We're now ready to train our network. We'll start by training for twenty epochs, just to make sure the loss starts to come down.
End of explanation
with open('si_gnn.pickle', 'rb') as f:
params = pickle.load(f)
Explanation: While we see that the network has begun to learn the energies, we also see that it has a long way to go before the predictions get good enough to use in a simulation. As such we're going to take inspiration from cooking shows, and take a ready-made GNN out of the fridge where it has been training overnight for 12,000 epochs on a V100 GPU.
End of explanation
plt.subplot(1, 2, 1)
predicted_energies = vectorized_energy_fn(params, example_positions)
plt.plot(example_energies, predicted_energies, 'o')
format_plot('$E_{label}$', '$E_{predicted}$')
plt.subplot(1, 2, 2)
predicted_forces = vectorized_force_fn(params, test_positions[:300])
plt.plot(test_forces[:300].reshape((-1,)),
predicted_forces.reshape((-1,)),
'o')
plt.plot(np.linspace(-6, 6, 20), np.linspace(-6, 6, 20), '--')
plt.xlim([-5, 5])
plt.ylim([-5, 5])
format_plot('$F_{label}$', '$F_{predicted}$')
finalize_plot((2, 1))
Explanation: Using our trained model we plot the predicted energies and forces against the labels.
End of explanation
rmse = np.sqrt(energy_loss(params, test_positions, test_energies)) / 64 * 1000
print('RMSE Error of {:.02f} meV / atom'.format(rmse))
Explanation: We see that the model prediction for the energy is extremely accurate and the force prediction is reasonable. To make this a bit more quantitative, we can compute the RMSE of the energy and convert it to meV / atom.
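For reference, the unit conversion packed into that line is (assuming the 64-atom cell and eV energies used throughout):
# sqrt(mean squared error)  -> RMSE of the total cell energy, in eV
# / 64 atoms                -> eV per atom
# * 1000                    -> meV per atom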
End of explanation
E_fn = partial(energy_fn, params)
Explanation: We see that we get an error of about $2$ meV / atom, which is comparable to previous work on this system.
Now that we have a well-performing neural network, we can see how easily this network can be used to run a simulation approximating Silicon. We will run a constant temperature simulation using a Nose-Hoover thermostat. First, we "bake" the params into the energy function using partial evaluation.
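For readers unfamiliar with partial, the call above is equivalent to a small closure; both simply fix params so that the simulation only needs to supply positions (plus keyword arguments such as the neighbor list):
# Equivalent to partial(energy_fn, params):
E_fn_closure = lambda R, **kwargs: energy_fn(params, R, **kwargs)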
End of explanation
K_B = 8.617e-5
dt = 1e-3
kT = K_B * 300
Si_mass = 2.91086E-3
init_fn, apply_fn = simulate.nvt_nose_hoover(E_fn, shift, dt, kT)
apply_fn = jit(apply_fn)
Explanation: Then, we set up the parameters of the simulation and create the simulation environment.
End of explanation
# Define the simulation.
total_steps = 10000
steps_per_recording = 25
total_records = total_steps // steps_per_recording
positions = []
@jit
def sim(state, nbrs):
def step(i, state_nbrs):
state, nbrs = state_nbrs
nbrs = nbrs.update(state.position)
return apply_fn(state, neighbor=nbrs), nbrs
return lax.fori_loop(0, steps_per_recording, step, (state, nbrs))
# Initialize the simulation
nbrs = neighbor_fn(test_positions[0])
state = init_fn(key, test_positions[0], Si_mass, neighbor=nbrs)
# Run the simulation.
print('Energy (eV)\tTemperature (K)')
for i in range(total_records):
state, nbrs = sim(state, nbrs)
positions += [state.position]
if i % 40 == 0:
print('{:.02f}\t\t\t{:.02f}'.format(
E_fn(state.position, neighbor=nbrs),
quantity.temperature(state.velocity, Si_mass) / K_B))
positions = np.stack(positions)
Explanation: Finally, we run the simulation for 10,000 steps, periodically printing the energy and temperature as it proceeds.
End of explanation
from jax_md.colab_tools import renderer
nbrs = neighbor_fn(state.position)
renderer.render(box_size,
{
'atom': renderer.Sphere(positions),
'bonds': renderer.Bond('atom', nbrs.idx),
},
resolution=[512, 512])
Explanation: We see that the energy of the simulation is reasonable and the temperature is stable. Of course, if we were validating this model for use in a research setting there are many measurements that one would like to perform to check its fidelity.
We can now draw the simulation to see what is happening.
End of explanation |
9,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transaction History
Obtain Transaction History
get_transaction_history(self, account_id, **params)
Step1: Get Specific Transaction Information
get_transaction(self, account_id, transaction_id) | Python Code:
from datetime import datetime, timedelta
import pandas as pd
import oandapy
import configparser
config = configparser.ConfigParser()
config.read('../config/config_v1.ini')
account_id = config['oanda']['account_id']
api_key = config['oanda']['api_key']
oanda = oandapy.API(environment="practice",
access_token=api_key)
response = oanda.get_transaction_history(account_id)
print(response)
pd.DataFrame(response['transactions'])
Explanation: Transaction History
Obtain Transaction History
get_transaction_history(self, account_id, **params)
End of explanation
response = oanda.get_transaction(account_id,
transaction_id=10605643945)
print(response)
Explanation: Get Specific Transaction Information
get_transaction(self, account_id, transaction_id)
End of explanation |
9,558 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
plot a scatter plot of k[:, 0] against k[:, 1]
| Python Code::
import matplotlib.pyplot as plt
plt.scatter(k[:,0], k[:,1])
|
9,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
To begin with, cobrapy comes with bundled models for Salmonella and E. coli, as well as a "textbook" model of E. coli core metabolism. To load a test model, type
Step1: The reactions, metabolites, and genes attributes of the cobrapy model are a special type of list called a DictList, and each one is made up of Reaction, Metabolite and Gene objects respectively.
Step2: Just like a regular list, objects in the DictList can be retrieved by index. For example, to get the 30th reaction in the model (at index 29 because of 0-indexing)
Step3: Additionally, items can be retrieved by their id using the get_by_id() function. For example, to get the cytosolic atp metabolite object (the id is "atp_c"), we can do the following
Step4: As an added bonus, users with an interactive shell such as IPython will be able to tab-complete to list elements inside a list. While this is not recommended behavior for most code because of the possibility for characters like "-" inside ids, this is very useful while in an interactive prompt
Step5: Reactions
We will consider the reaction glucose 6-phosphate isomerase, which interconverts glucose 6-phosphate and fructose 6-phosphate. The reaction id for this reaction in our test model is PGI.
Step6: We can view the full name and reaction catalyzed as strings
Step7: We can also view reaction upper and lower bounds. Because the pgi.lower_bound < 0, and pgi.upper_bound > 0, pgi is reversible
Step8: We can also ensure the reaction is mass balanced. This function will return elements which violate mass balance. If it comes back empty, then the reaction is mass balanced.
Step9: In order to add a metabolite, we pass in a dict with the metabolite object and its coefficient
Step10: The reaction is no longer mass balanced
Step11: We can remove the metabolite, and the reaction will be balanced once again.
Step12: It is also possible to build the reaction from a string. However, care must be taken when doing this to ensure the reaction ids match those in the model. The direction of the arrow is also used to update the upper and lower bounds.
Step13: Metabolites
We will consider cytosolic atp as our metabolite, which has the id atp_c in our test model.
Step14: We can print out the metabolite name and compartment (cytosol in this case).
Step15: We can see that ATP is a charged molecule in our model.
Step16: We can see the chemical formula for the metabolite as well.
Step17: The reactions attribute gives a frozenset of all reactions using the given metabolite. We can use this to count the number of reactions which use atp.
Step18: A metabolite like glucose 6-phosphate will participate in fewer reactions.
Step19: Genes
The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9)
Step20: Corresponding gene objects also exist. These objects are tracked by the reactions themselves, as well as by the model
Step21: Each gene keeps track of the reactions it catalyzes
Step22: Altering the gene_reaction_rule will create new gene objects if necessary and update all relationships.
Step23: Newly created genes are also added to the model
Step24: The delete_model_genes function will evaluate the gpr and set the upper and lower bounds to 0 if the reaction is knocked out. This function can preserve existing deletions or reset them using the cumulative_deletions flag.
Step25: The undelete_model_genes can be used to reset a gene deletion | Python Code:
from __future__ import print_function
import cobra.test
# "ecoli" and "salmonella" are also valid arguments
model = cobra.test.create_test_model("textbook")
Explanation: Getting Started
To begin with, cobrapy comes with bundled models for Salmonella and E. coli, as well as a "textbook" model of E. coli core metabolism. To load a test model, type
End of explanation
print(len(model.reactions))
print(len(model.metabolites))
print(len(model.genes))
Explanation: The reactions, metabolites, and genes attributes of the cobrapy model are a special type of list called a DictList, and each one is made up of Reaction, Metabolite and Gene objects respectively.
End of explanation
model.reactions[29]
Explanation: Just like a regular list, objects in the DictList can be retrieved by index. For example, to get the 30th reaction in the model (at index 29 because of 0-indexing):
End of explanation
model.metabolites.get_by_id("atp_c")
Explanation: Additionally, items can be retrieved by their id using the get_by_id() function. For example, to get the cytosolic atp metabolite object (the id is "atp_c"), we can do the following:
End of explanation
model.reactions.EX_glc__D_e.lower_bound
Explanation: As an added bonus, users with an interactive shell such as IPython will be able to tab-complete to list elements inside a list. While this is not recommended behavior for most code because of the possibility for characters like "-" inside ids, this is very useful while in an interactive prompt:
End of explanation
pgi = model.reactions.get_by_id("PGI")
pgi
Explanation: Reactions
We will consider the reaction glucose 6-phosphate isomerase, which interconverts glucose 6-phosphate and fructose 6-phosphate. The reaction id for this reaction in our test model is PGI.
End of explanation
print(pgi.name)
print(pgi.reaction)
Explanation: We can view the full name and reaction catalyzed as strings
End of explanation
print(pgi.lower_bound, "< pgi <", pgi.upper_bound)
print(pgi.reversibility)
Explanation: We can also view reaction upper and lower bounds. Because the pgi.lower_bound < 0, and pgi.upper_bound > 0, pgi is reversible
End of explanation
pgi.check_mass_balance()
Explanation: We can also ensure the reaction is mass balanced. This function will return elements which violate mass balance. If it comes back empty, then the reaction is mass balanced.
End of explanation
pgi.add_metabolites({model.metabolites.get_by_id("h_c"): -1})
pgi.reaction
Explanation: In order to add a metabolite, we pass in a dict with the metabolite object and its coefficient
End of explanation
pgi.check_mass_balance()
Explanation: The reaction is no longer mass balanced
End of explanation
pgi.pop(model.metabolites.get_by_id("h_c"))
print(pgi.reaction)
print(pgi.check_mass_balance())
Explanation: We can remove the metabolite, and the reaction will be balanced once again.
End of explanation
pgi.reaction = "g6p_c --> f6p_c + h_c + green_eggs + ham"
pgi.reaction
pgi.reaction = "g6p_c <=> f6p_c"
pgi.reaction
Explanation: It is also possible to build the reaction from a string. However, care must be taken when doing this to ensure the reaction ids match those in the model. The direction of the arrow is also used to update the upper and lower bounds.
End of explanation
atp = model.metabolites.get_by_id("atp_c")
atp
Explanation: Metabolites
We will consider cytosolic atp as our metabolite, which has the id atp_c in our test model.
End of explanation
print(atp.name)
print(atp.compartment)
Explanation: We can print out the metabolite name and compartment (cytosol in this case).
End of explanation
atp.charge
Explanation: We can see that ATP is a charged molecule in our model.
End of explanation
print(atp.formula)
Explanation: We can see the chemical formula for the metabolite as well.
End of explanation
len(atp.reactions)
Explanation: The reactions attribute gives a frozenset of all reactions using the given metabolite. We can use this to count the number of reactions which use atp.
End of explanation
model.metabolites.get_by_id("g6p_c").reactions
Explanation: A metabolite like glucose 6-phosphate will participate in fewer reactions.
End of explanation
gpr = pgi.gene_reaction_rule
gpr
Explanation: Genes
The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9):1290-307.
The GPR is stored as the gene_reaction_rule for a Reaction object as a string.
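As a purely illustrative aside (plain Python, not a cobrapy API, and the gene names are made up for the example), such a boolean rule can be read by substituting True for present genes and False for knocked-out ones; the reaction stays active while the expression is still true:
# Illustration only -- cobrapy evaluates GPRs for you (see delete_model_genes below).
def gpr_active(rule, knocked_out, genes=("gene_a", "gene_b", "gene_c")):
    env = {g: (g not in knocked_out) for g in genes}
    return eval(rule, {}, env)
print(gpr_active("(gene_a and gene_b) or gene_c", {"gene_b"}))            # True
print(gpr_active("(gene_a and gene_b) or gene_c", {"gene_b", "gene_c"}))  # False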
End of explanation
pgi.genes
pgi_gene = model.genes.get_by_id("b4025")
pgi_gene
Explanation: Corresponding gene objects also exist. These objects are tracked by the reactions themselves, as well as by the model
End of explanation
pgi_gene.reactions
Explanation: Each gene keeps track of the reactions it catalyzes
End of explanation
pgi.gene_reaction_rule = "(spam or eggs)"
pgi.genes
pgi_gene.reactions
Explanation: Altering the gene_reaction_rule will create new gene objects if necessary and update all relationships.
End of explanation
model.genes.get_by_id("spam")
Explanation: Newly created genes are also added to the model
End of explanation
cobra.manipulation.delete_model_genes(model, ["spam"],
cumulative_deletions=True)
print("after 1 KO: %4d < flux_PGI < %4d" %
(pgi.lower_bound, pgi.upper_bound))
cobra.manipulation.delete_model_genes(model, ["eggs"],
cumulative_deletions=True)
print("after 2 KO: %4d < flux_PGI < %4d" %
(pgi.lower_bound, pgi.upper_bound))
Explanation: The delete_model_genes function will evaluate the gpr and set the upper and lower bounds to 0 if the reaction is knocked out. This function can preserve existing deletions or reset them using the cumulative_deletions flag.
End of explanation
cobra.manipulation.undelete_model_genes(model)
print(pgi.lower_bound, "< pgi <", pgi.upper_bound)
Explanation: The undelete_model_genes can be used to reset a gene deletion
End of explanation |
9,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lista de Exercícios - SEU NOME
Os exercícios valem 30% da nota final.
Data Entrega
Step1: Teste para as seguintes situações
Step2: Exercício 2
(0.5 ponto) Crie uma função chamda qtde_caracteres que receba um parâmetro e retorne a quantidade de caracteres. Se o valor recebido pelo parâmetro não for uma String, utilizar a função str() para converter o argumento.
Teste as seguintes situações
Step3: Exercicío 3
(1.5 ponto) Carregue o arquivo chamado funcionarios.txt. Esse arquivo contém nome e o salário anual de cada funcionário.
alexandre,42000
rose,50000
anderson,30000
antonio,60000
maria,120000
carlos,86000
cesar,48000
Faça os seguintes exercícios
Step4: b) Crie uma função chamada calcular_salario_mensal que irá calcular quanto o funcionário ganha por mês. Verifique o tipo do campo do valor do dicionário. Utilize a função float() para converter o valor.
Step5: c) Por fim, imprima para cada funcionário o nome, salário anual e mensal no seguinte formato
Step6: Faça o teste para as seguintes situações
Step7: Exercício 6
(2 pontos) Utilizando o exemplo dado em aula sobre a Streaming API (Desafio 3 da Aula 5), recupere os tweets durante 10 minutos com os seguintes parâmetros
Step8: Realize a autorização e defina a token de acesso
Crie a classe DadosPublicosTwitter herdando da classe tweepy.StreamListener para rodar durante 10 minutos e salvar os tweets no arquivo.
Step9: Crie a instância da classe, o fluxo e realize a filtragem com os parâmetros definidos no enunciado.
Exercício 7
(3 pontos) Com os dados salvos no tweets_10min.json, crie um DataFrame pandas com as seguintes colunas
Step10: Crie um DataFrame auxiliar passando as colunas por parâmetro
Crie a função de pegar_lat_long(local)
Step11: Crie a função para salvar_hashtags(texto)
Step12: Certifique-se que o DataFrame auxiliar tem as 14 colunas necessárias | Python Code:
def soma_tres_num(x,y,z=10):
return x+y+z
Explanation: Exercise List - YOUR NAME
The exercises are worth 30% of the final grade.
Due date: 18/09/2016
Submission format: .ipynb - Click File -> Download as -> IPython Notebook (.ipynb)
Send it by email by the due date, with the email subject: Exercícios PosMBA Turma2 - YOUR NAME
The final grade is computed as: NF = (exercises * 0.3) + (exam * 0.7)
Exercise 1
(0.5 point) Create a function called soma_tres_num that receives 3 parameters (the last one with a default value of 10) and returns the sum of these three values.
End of explanation
print(soma_tres_num(0, 10))
print(soma_tres_num(1,2,3))
print(soma_tres_num(10, 10, 0))
Explanation: Test the following cases:
End of explanation
print(qtde_caracteres(1234))
print(qtde_caracteres(', -'))
print(qtde_caracteres('python'))
print(qtde_caracteres('fia e big data'))
Explanation: Exercise 2
(0.5 point) Create a function called qtde_caracteres that receives one parameter and returns the number of characters. If the value received by the parameter is not a string, use the str() function to convert the argument.
Test the following cases:
End of explanation
def carregar_dados_dic(arquivo):
# your code here
salarios = carregar_dados_dic('funcionarios.txt')
print(salarios)
Explanation: Exercise 3
(1.5 points) Load the file called funcionarios.txt. This file contains the name and the annual salary of each employee.
alexandre,42000
rose,50000
anderson,30000
antonio,60000
maria,120000
carlos,86000
cesar,48000
Do the following exercises:
a) Create a function called carregar_dados_dic that must receive one parameter (in this case the funcionarios.txt file) and return a dictionary in which the key is the name and the value is the annual salary.
End of explanation
def calcular_salario_mensal(salario_anual):
# your code here
Explanation: b) Create a function called calcular_salario_mensal that computes how much the employee earns per month. Check the type of the dictionary value field; use the float() function to convert the value.
End of explanation
def palavras_frequentes(arquivo, palavras_freq):
# your code here
Explanation: c) Finally, for each employee print the name, annual salary and monthly salary in the following format:
Rose --- R$ 50000 --- R$ 4166.66
Hint: remember the .format method for formatting a string.
Use the round function to round to 2 decimal places.
```python
round(7166.67555, 2)
7166.67
```
Exercise 4
(1 point) Create a program that reads the user's information (name, age and salary) using the input function. The information must be validated against the following criteria:
* The name must be longer than 3 characters
* The age must be between 18 and 65
* The salary must be greater than R$ 788
If the information does not meet the criteria defined above, the user must be asked to type it again. Finally, the program must print the following text:
NOME tem YY anos, recebe R$ SSS e seu nome tem CC caracteres.
Where,
NOME must be replaced by the name that was typed in.
YY must be the age
SSS must be replaced by the salary value.
CC must be replaced by the number of characters.
Remember to format the salary to two decimal places.
Exercise 5
(1.5 points) Create a function that receives two parameters: the file, and how many of the most frequent words to return. The return value must be a list in which each element is a (key, value) tuple, the key being the word and the value the number of times it appeared in the text.
For example,
palavras_frequentes('texto1.txt', 10) - searches the file texto1.txt for the 10 most frequent words.
palavras_frequentes('texto2.txt', 5) - searches the file texto2.txt for the 5 most frequent words.
<span style="color:red">Remember to handle possible errors!</span>
Example of use and output:
```python
palavras10mais = palavras_frequentes('texto1.txt', 10)
print(palavras10mais)
[('programas', 662), ('codigos', 661), ('dinheiro', 661), ('fia', 586), ('python', 491), ('data', 434), ('big', 434), ('velocidade', 133), ('Moneyball', 113), ('dados', 95)]
```
Hint: check how the sorted function can be used to sort a dictionary by its values.
End of explanation
palavras_frequentes('texto1.txt', 10)
palavras_frequentes('texto2.txt', 10)
palavras_frequentes('texto1.txt', 5)
palavras_frequentes('texto2.txt', 5)
palavras_frequentes('texto1.txt', 30)
Explanation: Run the test for the following cases:
End of explanation
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''
Explanation: Exercise 6
(2 points) Using the Streaming API example given in class (Challenge 3 of Lesson 5), collect tweets for 10 minutes with the following parameters:
fluxo.filter(track=['Big Data', 'Hadoop', 'Spark', 'Python', 'Data Science'], languages=['en', 'pt'])
Save the tweets to a file called tweets_10min.json.
Import the required modules
Create the access keys
End of explanation
class DadosPublicosTwitter(tweepy.StreamListener):
# your code here
Explanation: Perform the authorization and set the access token
Create the DadosPublicosTwitter class inheriting from the tweepy.StreamListener class so that it runs for 10 minutes and saves the tweets to the file.
End of explanation
colunas = ['text', 'created_at', 'coordinates', 'retweet_count',
'favorite_count', 'screen_name', 'location', 'lang',
'followers_count', 'geo_enabled', 'statuses_count',
'lat', 'long', 'hashtags']
Explanation: Create an instance of the class, create the stream, and filter with the parameters defined in the statement.
Exercise 7
(3 points) With the data saved in tweets_10min.json, create a pandas DataFrame with the following columns:
text - the tweet text
created_at - the tweet creation date
coordinates - the coordinates
retweet_count - how many times the tweet was retweeted.
favorite_count - how many times the tweet was favorited.
screen_name - the screen name (example: @prof_dinomagri)
location - the location
lang - the language
followers_count - the number of followers
geo_enabled - whether geolocation is enabled
statuses_count - the number of tweets posted.
lat - obtained with the function developed in class
long - obtained with the function developed in class
hashtags - obtained with the function developed in class
Remember to use the functions developed in class to retrieve the latitude, longitude and hashtags.
At the end, save the file in CSV format, using the semicolon (;) separator and 'utf-8' encoding.
Import the required modules
Use the with statement to open the 'tweets_10min.json' file and store its contents in a list
Create the DataFrame with the data and print only the first 3 tweets (rows)
Create a list where each element is the name of a column
End of explanation
from geopy.geocoders import Nominatim
def pegar_lat_long(local):
# your code here
Explanation: Create an auxiliary DataFrame, passing the columns as a parameter
Create the pegar_lat_long(local) function
End of explanation
def salvar_hashtags(texto):
# your code here
Explanation: Create the salvar_hashtags(texto) function
End of explanation
len(df_aux.columns)
Explanation: Make sure the auxiliary DataFrame has the 14 required columns
End of explanation |
9,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment 2
Implement the search algorithm you came up with in pseudocode with Python
Test the search algorithm with a list of 10,100,1000 random numbers (sorted with your sorting algorithm) and compare the result using the %time to time your code and submit your results in code comments
Pseudocode
Get a sorted list and a search term.
Loop through the sorted list
Step1: The algorithm
Step2: Testing the algorithm
Step3: Result of %time | Python Code:
import random
Explanation: Assignment 2
Implement the search algorithm you came up with in pseudocode with Python
Test the search algorithm with a list of 10,100,1000 random numbers (sorted with your sorting algorithm) and compare the result using the %time to time your code and submit your results in code comments
Pseudocode
Get a sorted list and a search term.
Loop through the sorted list:
Compare the item to the search term.
If the search term matches the item:
return the couter value
Add one to the counter
End of explanation
# Simple brute force search engine
def ownsearch(sorted_list, search_term):
counter = 0
for i in sorted_list:
if search_term == i:
return "I found the term " + str(search_term) + " the first time at index " + str(counter)
counter += 1
sorted_list = ['Alma', 'Annus', 'Mater', 'mirabilis']
search_term = 'Mater'
print(ownsearch(sorted_list,search_term))
Explanation: The algorithm
End of explanation
# The algorithm from the previous assignment
def ownsort(int_list):
for i in range(len(int_list) - 1):
k = i
for j in range(i + 1, len(int_list)):
if int_list[j] < int_list[k]:
k = j
int_list[i], int_list[k] = int_list[k], int_list[i]
return int_list
# Generate Lists with Random numbers
int_list_10 = []
for i in range(0,10):
int_list_10.append(random.randint(1,100))
int_list_100 = []
for i in range(0,100):
int_list_100.append(random.randint(1,100))
int_list_1000 = []
for i in range(0,1000):
int_list_1000.append(random.randint(1,100))
# Generating random int for search term
search_term = random.randint(1,100)
# Generating sorted list
int_list_10 = ownsort(int_list_10)
print(ownsearch(int_list_10, search_term))
%time
Explanation: Testing the algorithm
End of explanation
# Generating random int for search term
search_term = random.randint(1,100)
# Generating sorted list
int_list_100 = ownsort(int_list_100)
print(ownsearch(int_list_100, search_term))
%time
Result of %time:
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 5.72 µs
# Generating random int for search term
search_term = random.randint(1,100)
# Generating sorted list
int_list_1000 = ownsort(int_list_1000)
print(ownsearch(int_list_1000, search_term))
%time
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 5.01 µs
Explanation: Result of %time:
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 5.01 µs
End of explanation |
9,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Properties
Step4: Comparision bewteen sparse and mixed graph
Step5: Varying sigma in the sparse part of the mixed graph
Step6: Varying tau in the sparse part of the mixed graph
Step7: Varying sigma in the dense part of the mixed graph
Step8: Varying tau in the dense part of the mixed graph | Python Code:
mdest = '../result/random_network/mixture/'
sdest = '../result/random_network/sparse/'
m_f = '%d_%.2f_%.2f_%.2f_%.2f_%.2f_%.2f.pkl'
s_f = '%d_%.2f_%.2f_%.2f.pkl'
colors = cm.rainbow(np.linspace(0, 1, 7))
np.random.shuffle(colors)
colors = itertools.cycle(colors)
def degree_dist_list(graph, ddist):
_ddict = nx.degree(graph)
_ddist = defaultdict(int)
for k, v in _ddict.items():
_ddist[v] += 1
for k, v in _ddist.items():
ddist[k].append(v)
del _ddict, _ddist
return ddist
def avg_degree_dist(path_list):
"""Compute the average degree distribution over repeated simulations."""
ddist = defaultdict(list)
for path in path_list:
sample = pickle.load(open(path, 'rb'))
G = sparse_to_networkx(sample[0])
degree_dist_list(G, ddist)
del G, sample
avg_dist = dict()
for k, v in ddist.items():
avg_dist[k] = sum(ddist[k])/len(ddist[k])
return avg_dist
def scatter(_ddist, path, color=None):
"""Print a scatter plot of the given degree distribution dictionary."""
plt.scatter(list(_ddist.keys()), list(_ddist.values()), label=os.path.basename(path), color=color)
def degree_dist(graph):
"""Compute the degree distribution of the given graph."""
_ddict = nx.degree(graph)
_ddist = defaultdict(int)
for k, v in _ddict.items():
_ddist[v] += 1
return _ddist
Explanation: Properties:
* The number of nodes increases as tau decreases (minimum > 0).
* The number of nodes increases as alpha increases
* Expected number of dense node is : -alpha / sigma * tau ^ sigma
Basic parameter config (sparse alpha, sigma, tau + dense alpha, sigma tau):
* 100, 0.5, 1, 100, -1, 0.1 (generate the largest graph among basic configurations)
* 100, 0.5, 1, 100, -1, 1
Additional parameter configurations
* 100, 0, 1 + 100, -1, 1
* 100, 0.5, 0.1 + 100, -1, 0.1
End of explanation
alpha = 100
sigma = 0.5
tau = 1
d_alpha = 100
d_sigma = -1
d_taus = [0.1, 1]
n_samples = 5
plt.figure(figsize=(12, 8))
for d_tau in d_taus:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
alphas = [100, 150]
for alpha in alphas:
path_list = list()
for i in range(n_samples):
path_list.append(os.path.join(sdest, s_f % (i, alpha, sigma, tau)))
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
Explanation: Comparison between sparse and mixed graph
End of explanation
sigmas = [0, 0.5, 0.9]
alpha = 100
tau = 1
d_alpha = 100
d_sigma = -1
d_tau = 1
plt.figure(figsize=(12, 8))
for sigma in sigmas:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
Explanation: Varying sigma in the sparse part of the mixed graph
End of explanation
alpha = 100
sigma = 0.5
taus = [0.1, 0.5, 1]
d_alpha = 100
d_sigma = -1
d_tau = 1
plt.figure(figsize=(12, 8))
for tau in taus:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
Explanation: Varying tau in the sparse part of the mixed graph
End of explanation
alpha = 100
sigma = 0.5
tau = 1
d_alpha = 100
d_tau = 1
sigmas = [-0.5, -1, -2]
plt.figure(figsize=(12, 8))
plt.figure(figsize=(12, 8))
for d_sigma in sigmas:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
Explanation: Varying sigma in the dense part of the mixed graph
End of explanation
alpha = 100
sigma = 0.5
tau = 1
d_alpha = 100
d_sigma = -1
taus = [0.1, 0.5, 1]
plt.figure(figsize=(12, 8))
for d_tau in taus:
path_list = [os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau)) for i in range(n_samples)]
ddist = avg_degree_dist(path_list)
scatter(ddist, path_list[0], next(colors))
# for d_tau in taus:
# mfile = os.path.join(mdest, m_f % (i, alpha, sigma, tau, d_alpha, d_sigma, d_tau))
# if os.path.exists(mfile):
# sample = pickle.load(open(mfile, 'rb'))
# G = sparse_to_networkx(sample[0])
# ddist = degree_dist(G)
# scatter(ddist, mfile, next(colors))
ax = plt.subplot()
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('# of nodes')
plt.xlabel('Node degree')
plt.ylim(0.5); plt.xlim(0.5); plt.show()
Explanation: Varying tau in the dense part of the mixed graph
End of explanation |
9,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Density estimation using Real NVP
Authors
Step1: Load the data
Step2: Affine coupling layer
Step4: Real NVP
Step5: Model training
Step6: Performance evaluation | Python Code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from sklearn.datasets import make_moons
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_probability as tfp
Explanation: Density estimation using Real NVP
Authors: Mandolini Giorgio Maria, Sanna Daniele, Zannini Quirini Giorgio<br>
Date created: 2020/08/10<br>
Last modified: 2020/08/10<br>
Description: Estimating the density distribution of the "double moon" dataset.
Introduction
The aim of this work is to map a simple distribution - which is easy to sample
and whose density is simple to estimate - to a more complex one learned from the data.
This kind of generative model is also known as "normalizing flow".
In order to do this, the model is trained via the maximum
likelihood principle, using the "change of variable" formula.
We will use an affine coupling function. We create it such that its inverse, as well as
the determinant of the Jacobian, are easy to obtain (more details in the referenced paper).
Requirements:
Tensorflow 2.3
Tensorflow probability 0.11.0
Reference:
Density estimation using Real NVP
Setup
End of explanation
data = make_moons(3000, noise=0.05)[0].astype("float32")
norm = layers.Normalization()
norm.adapt(data)
normalized_data = norm(data)
Explanation: Load the data
End of explanation
# Creating a custom layer with keras API.
output_dim = 256
reg = 0.01
def Coupling(input_shape):
input = keras.layers.Input(shape=input_shape)
t_layer_1 = keras.layers.Dense(
output_dim, activation="relu", kernel_regularizer=regularizers.l2(reg)
)(input)
t_layer_2 = keras.layers.Dense(
output_dim, activation="relu", kernel_regularizer=regularizers.l2(reg)
)(t_layer_1)
t_layer_3 = keras.layers.Dense(
output_dim, activation="relu", kernel_regularizer=regularizers.l2(reg)
)(t_layer_2)
t_layer_4 = keras.layers.Dense(
output_dim, activation="relu", kernel_regularizer=regularizers.l2(reg)
)(t_layer_3)
t_layer_5 = keras.layers.Dense(
input_shape, activation="linear", kernel_regularizer=regularizers.l2(reg)
)(t_layer_4)
s_layer_1 = keras.layers.Dense(
output_dim, activation="relu", kernel_regularizer=regularizers.l2(reg)
)(input)
s_layer_2 = keras.layers.Dense(
output_dim, activation="relu", kernel_regularizer=regularizers.l2(reg)
)(s_layer_1)
s_layer_3 = keras.layers.Dense(
output_dim, activation="relu", kernel_regularizer=regularizers.l2(reg)
)(s_layer_2)
s_layer_4 = keras.layers.Dense(
output_dim, activation="relu", kernel_regularizer=regularizers.l2(reg)
)(s_layer_3)
s_layer_5 = keras.layers.Dense(
input_shape, activation="tanh", kernel_regularizer=regularizers.l2(reg)
)(s_layer_4)
return keras.Model(inputs=input, outputs=[s_layer_5, t_layer_5])
Explanation: Affine coupling layer
End of explanation
class RealNVP(keras.Model):
def __init__(self, num_coupling_layers):
super(RealNVP, self).__init__()
self.num_coupling_layers = num_coupling_layers
# Distribution of the latent space.
self.distribution = tfp.distributions.MultivariateNormalDiag(
loc=[0.0, 0.0], scale_diag=[1.0, 1.0]
)
self.masks = np.array(
[[0, 1], [1, 0]] * (num_coupling_layers // 2), dtype="float32"
)
self.loss_tracker = keras.metrics.Mean(name="loss")
self.layers_list = [Coupling(2) for i in range(num_coupling_layers)]
@property
def metrics(self):
List of the model's metrics.
We make sure the loss tracker is listed as part of `model.metrics`
so that `fit()` and `evaluate()` are able to `reset()` the loss tracker
at the start of each epoch and at the start of an `evaluate()` call.
return [self.loss_tracker]
def call(self, x, training=True):
log_det_inv = 0
direction = 1
if training:
direction = -1
for i in range(self.num_coupling_layers)[::direction]:
x_masked = x * self.masks[i]
reversed_mask = 1 - self.masks[i]
s, t = self.layers_list[i](x_masked)
s *= reversed_mask
t *= reversed_mask
gate = (direction - 1) / 2
x = (
reversed_mask
* (x * tf.exp(direction * s) + direction * t * tf.exp(gate * s))
+ x_masked
)
log_det_inv += gate * tf.reduce_sum(s, [1])
return x, log_det_inv
# Log likelihood of the normal distribution plus the log determinant of the jacobian.
def log_loss(self, x):
y, logdet = self(x)
log_likelihood = self.distribution.log_prob(y) + logdet
return -tf.reduce_mean(log_likelihood)
def train_step(self, data):
with tf.GradientTape() as tape:
loss = self.log_loss(data)
g = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(g, self.trainable_variables))
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def test_step(self, data):
loss = self.log_loss(data)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
Explanation: Real NVP
End of explanation
model = RealNVP(num_coupling_layers=6)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001))
history = model.fit(
normalized_data, batch_size=256, epochs=300, verbose=2, validation_split=0.2
)
Explanation: Model training
End of explanation
plt.figure(figsize=(15, 10))
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.legend(["train", "validation"], loc="upper right")
plt.ylabel("loss")
plt.xlabel("epoch")
# From data to latent space.
z, _ = model(normalized_data)
# From latent space to data.
samples = model.distribution.sample(3000)
x, _ = model.predict(samples)
f, axes = plt.subplots(2, 2)
f.set_size_inches(20, 15)
axes[0, 0].scatter(normalized_data[:, 0], normalized_data[:, 1], color="r")
axes[0, 0].set(title="Inference data space X", xlabel="x", ylabel="y")
axes[0, 1].scatter(z[:, 0], z[:, 1], color="r")
axes[0, 1].set(title="Inference latent space Z", xlabel="x", ylabel="y")
axes[0, 1].set_xlim([-3.5, 4])
axes[0, 1].set_ylim([-4, 4])
axes[1, 0].scatter(samples[:, 0], samples[:, 1], color="g")
axes[1, 0].set(title="Generated latent space Z", xlabel="x", ylabel="y")
axes[1, 1].scatter(x[:, 0], x[:, 1], color="g")
axes[1, 1].set(title="Generated data space X", label="x", ylabel="y")
axes[1, 1].set_xlim([-2, 2])
axes[1, 1].set_ylim([-2, 2])
Explanation: Performance evaluation
End of explanation |
9,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Tabular datasets
The UCI ML repository contains many smallish datasets, mostly tabular.
Kaggle also hosts many interesting datasets.
Sklearn has many small datasets builtin, making them easy to use for prototyping, as we illustrate below.
Step2: Tensorflow datasets
TFDS is a handy way to handle large datasets as a stream of minibatches, suitable for large scale training and parallel evaluation. It can be used by tensorflow and JAX code, as we illustrate below. (See the official colab for details.)
Step3: Minibatching without using TFDS
We first illustrate how to make streams of minibatches using vanilla numpy code. TFDS will then let us eliminate a lot of this boilerplate. As an example, let's package some small labeled datasets into two dictionaries, for train and test.
Step4: Now we make one pass (epoch) over the data, computing random minibatches of size 30. There are 100 examples total, but with a batch size of 30,
we don't use all the data. We can solve such "boundary effects" later.
Step5: Minibatching with TFDS
Below we show how to convert a numpy array into a TFDS.
We shuffle the records and convert to minibatches, and then repeat these batches indefinitely to create an infinite stream,
which we can convert to a python iterator. We pass this iterator of batches to our training loop.
Step6: Preprocessing the data
We can process the data before creating minibatches.
We can also use pre-fetching to speed things up (see
this TF tutorial for details.)
We illustrate this below for MNIST.
Step7: Vision datasets
MNIST
There are many standard versions of MNIST,
some of which are available from https
Step8: CIFAR
The CIFAR dataset is commonly used for prototyping.
The CIFAR-10 version consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. There is also a 100 class version.
An easy way to get this data is to use TFDS, as we show below.
Step9: Imagenet
A lot of vision experiments use the Imagenet dataset, with 1000 classes and ~1M images.
However, this takes a long time to download and process.
The FastAI team made a smaller version called ImageNette, that only has 10 classes of size 160 or 320 pixels (largest dimension). This is good for prototyping, and the images tend to be easier to interpret that CIFAR. A version of the raw data, in a more convenient format (all images 224x224, no dependence on FastAI library) can be found here. It is also bundled into TFDS, as we show below.
Step10: Language datasets
Various datasets are used in the natural language processing (NLP) communities.
TODO | Python Code:
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
import seaborn as sns
sns.set(style="ticks", color_codes=True)
import pandas as pd
pd.set_option("precision", 2) # 2 decimal places
pd.set_option("display.max_rows", 20)
pd.set_option("display.max_columns", 30)
pd.set_option("display.width", 100) # wide windows
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/intro/datasets.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Manipulating datasets
In this colab, we briefly discuss ways to access and manipulate common datasets that are used in the ML literature. Most of these are used for supervised learning experiments.
End of explanation
from sklearn import datasets
iris = datasets.load_iris()
print(iris.keys())
X = iris["data"]
y = iris["target"] # class labels
print(X.shape)
print(iris["feature_names"]) # meaning of each feature
print(iris["target_names"]) # meaning of each class
Explanation: Tabular datasets
The UCI ML repository contains many smallish datasets, mostly tabular.
Kaggle also hosts many interesting datasets.
Sklearn has many small datasets builtin, making them easy to use for prototyping, as we illustrate below.
End of explanation
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
from typing import Any, Iterator, Mapping, NamedTuple, Sequence, Tuple
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
import tensorflow_datasets as tfds
print("tf version {}".format(tf.__version__))
import jax
from typing import Any, Callable, Sequence, Optional, Dict, Tuple
import jax.numpy as jnp
rng = jax.random.PRNGKey(0)
# Useful type aliases
Array = jnp.ndarray
PRNGKey = Array
Batch = Mapping[str, np.ndarray]
OptState = Any
Explanation: Tensorflow datasets
TFDS is a handy way to handle large datasets as a stream of minibatches, suitable for large scale training and parallel evaluation. It can be used by tensorflow and JAX code, as we illustrate below. (See the official colab for details.)
End of explanation
import sklearn
import sklearn.datasets
from sklearn.model_selection import train_test_split
def get_datasets_iris():
iris = sklearn.datasets.load_iris()
X = iris["data"]
y = iris["target"]
N, D = X.shape # 150, 4
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
train_ds = {"X": X_train, "y": y_train}
test_ds = {"X": X_test, "y": y_test}
return train_ds, test_ds
train_ds, test_ds = get_datasets_iris()
print(train_ds["X"].shape)
print(train_ds["y"].shape)
iris = sklearn.datasets.load_iris()
print(iris.feature_names)
print(iris.target_names)
Explanation: Minibatching without using TFDS
We first illustrate how to make streams of minibatches using vanilla numpy code. TFDS will then let us eliminate a lot of this boilerplate. As an example, let's package some small labeled datasets into two dictionaries, for train and test.
End of explanation
def extract_batch(ds, ndx):
batch = {k: v[ndx, ...] for k, v in ds.items()}
# batch = {'X': ds['X'][ndx,:], 'y': ds['y'][ndx]}
return batch
def process_epoch(train_ds, batch_size, rng):
train_ds_size = len(train_ds["X"])
steps_per_epoch = train_ds_size // batch_size
perms = jax.random.permutation(rng, len(train_ds["X"]))
perms = perms[: steps_per_epoch * batch_size] # skip incomplete batch
perms = perms.reshape((steps_per_epoch, batch_size)) # perms[i,:] is list of data indices for step i
for step, perm in enumerate(perms):
batch = extract_batch(train_ds, perm)
print("processing batch {} X shape {}, y shape {}".format(step, batch["X"].shape, batch["y"].shape))
batch_size = 30
process_epoch(train_ds, batch_size, rng)
Explanation: Now we make one pass (epoch) over the data, computing random minibatches of size 30. There are 100 examples total, but with a batch size of 30,
we don't use all the data. We can solve such "boundary effects" later.
End of explanation
def load_dataset_iris(split: str, batch_size: int) -> Iterator[Batch]:
train_ds, test_ds = get_datasets_iris()
if split == tfds.Split.TRAIN:
ds = tf.data.Dataset.from_tensor_slices({"X": train_ds["X"], "y": train_ds["y"]})
elif split == tfds.Split.TEST:
ds = tf.data.Dataset.from_tensor_slices({"X": test_ds["X"], "y": test_ds["y"]})
ds = ds.shuffle(buffer_size=1 * batch_size)
ds = ds.batch(batch_size)
ds = ds.cache()
ds = ds.repeat() # make infinite stream of batches
return iter(tfds.as_numpy(ds)) # python iterator
batch_size = 30
train_ds = load_dataset_iris(tfds.Split.TRAIN, batch_size)
valid_ds = load_dataset_iris(tfds.Split.TEST, batch_size)
print(train_ds)
training_steps = 5
for step in range(training_steps):
batch = next(train_ds)
print("processing batch {} X shape {}, y shape {}".format(step, batch["X"].shape, batch["y"].shape))
Explanation: Minibatching with TFDS
Below we show how to convert a numpy array into a TFDS.
We shuffle the records and convert to minibatches, and then repeat these batches indefinitely to create an infinite stream,
which we can convert to a python iterator. We pass this iterator of batches to our training loop.
End of explanation
def process_record(batch):
image = batch["image"]
label = batch["label"]
# reshape image to standard size, just for fun
image = tf.image.resize(image, (32, 32))
# flatten image to vector
shape = image.get_shape().as_list()
D = np.prod(shape) # no batch dimension
image = tf.reshape(image, (D,))
# rescale to -1..+1
image = tf.cast(image, dtype=tf.float32)
image = ((image / 255.0) - 0.5) * 2.0
# convert to standard names
return {"X": image, "y": label}
def load_mnist(split, batch_size):
dataset, info = tfds.load("mnist", split=split, with_info=True)
dataset = dataset.map(process_record)
if split == "train":
dataset = dataset.shuffle(10 * batch_size, seed=0)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
dataset = dataset.cache()
dataset = dataset.repeat()
dataset = tfds.as_numpy(dataset) # leave TF behind
num_examples = info.splits[split].num_examples
return iter(dataset), num_examples
batch_size = 100
train_iter, num_train = load_mnist("train", batch_size)
test_iter, num_test = load_mnist("test", batch_size)
num_epochs = 3
num_steps = num_train // batch_size
print(f"{num_epochs} epochs with batch size {batch_size} will take {num_steps} steps")
batch = next(train_iter)
print(batch["X"].shape)
print(batch["y"].shape)
Explanation: Preprocessing the data
We can process the data before creating minibatches.
We can also use pre-fetching to speed things up (see
this TF tutorial for details.)
We illustrate this below for MNIST.
End of explanation
ds, info = tfds.load("binarized_mnist", split=tfds.Split.TRAIN, shuffle_files=True, with_info=True)
print(ds)
print(info)
train_ds, info = tfds.load("mnist", split=tfds.Split.TRAIN, shuffle_files=True, with_info=True)
print(train_ds)
print(info)
ds = tfds.load("mnist", split="train")
print(type(ds))
ds = ds.take(1) # Only take a single example
print(type(ds))
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
ds, info = tfds.load("mnist", split="train", with_info=True)
fig = tfds.show_examples(ds, info, rows=2, cols=5)
# This function is not well documented. But source code for show_examples is here:
# https://github.com/tensorflow/datasets/blob/v4.2.0/tensorflow_datasets/core/visualization/image_visualizer.py
Explanation: Vision datasets
MNIST
There are many standard versions of MNIST,
some of which are available from https://www.tensorflow.org/datasets. We give some examples below.
End of explanation
ds, info = tfds.load("cifar10", split="train", with_info=True)
fig = tfds.show_examples(ds, info, rows=2, cols=5)
Explanation: CIFAR
The CIFAR dataset is commonly used for prototyping.
The CIFAR-10 version consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. There is also a 100 class version.
An easy way to get this data is to use TFDS, as we show below.
End of explanation
import tensorflow_datasets as tfds
imagenette_builder = tfds.builder("imagenette/full-size")
imagenette_info = imagenette_builder.info
print(imagenette_info)
imagenette_builder.download_and_prepare()
datasets = imagenette_builder.as_dataset(as_supervised=True)
train_examples = imagenette_info.splits["train"].num_examples
validation_examples = imagenette_info.splits["validation"].num_examples
print("ntrain", train_examples, "nvalidation", validation_examples)
train, test = datasets["train"], datasets["validation"]
import tensorflow as tf
batch_size = 32
train_batch = (
train.map(lambda image, label: (tf.image.resize(image, (448, 448)), label)).shuffle(100).batch(batch_size).repeat()
)
validation_batch = (
test.map(lambda image, label: (tf.image.resize(image, (448, 448)), label)).shuffle(100).batch(batch_size).repeat()
)
i = 0
for X, y in train_batch:
# print(b)
# X = b['image']
# y = b['label']
print("image {}, X shape {}, y shape {}".format(i, X.shape, y.shape))
i += 1
if i > 1:
break
fig = tfds.show_examples(train, imagenette_info, rows=2, cols=5)
Explanation: Imagenet
A lot of vision experiments use the Imagenet dataset, with 1000 classes and ~1M images.
However, this takes a long time to download and process.
The FastAI team made a smaller version called ImageNette, that only has 10 classes of size 160 or 320 pixels (largest dimension). This is good for prototyping, and the images tend to be easier to interpret that CIFAR. A version of the raw data, in a more convenient format (all images 224x224, no dependence on FastAI library) can be found here. It is also bundled into TFDS, as we show below.
End of explanation
def get_datasets_mnist():
ds_builder = tfds.builder("mnist")
ds_builder.download_and_prepare()
train_ds_all = tfds.as_numpy(ds_builder.as_dataset(split="train", batch_size=-1))
test_ds_all = tfds.as_numpy(ds_builder.as_dataset(split="test", batch_size=-1))
num_train = len(train_ds_all["image"])
train_ds["X"] = jnp.reshape(jnp.float32(train_ds_all["image"]) / 255.0, (num_train, -1))
train_ds["y"] = train_ds_all["label"]
num_test = len(test_ds_all["image"])
test_ds["X"] = jnp.reshape(jnp.float32(test_ds["image"]) / 255.0, (num_test, -1))
test_ds["y"] = test_ds_all["label"]
return train_ds, test_ds
dataset = load_dataset_iris(tfds.Split.TRAIN, 30)
batches = dataset.repeat().batch(batch_size)
step = 0
num_minibatches = 5
for batch in batches:
if step >= num_minibatches:
break
X, y = batch["image"], batch["label"]
print("processing batch {} X shape {}, y shape {}".format(step, X.shape, y.shape))
step = step + 1
print("batchified version v2")
batch_stream = batches.as_numpy_iterator()
for step in range(num_minibatches):
batch = batch_stream.next()
X, y = batch["image"], batch["label"] # convert to canonical names
print("processing batch {} X shape {}, y shape {}".format(step, X.shape, y.shape))
step = step + 1
ds = tfds.as_numpy(train_ds)
print(ds)
for i, batch in enumerate(ds):
print(type(batch))
X = batch["image"]
y = batch["label"]
print(X.shape)
print(y.shape)
i += 1
if i > 2:
break
ds = tfds.load("mnist", split="train")
ds = ds.take(100)
# ds = tfds.as_numpy(ds)
batches = ds.repeat(2).batch(batch_size)
print(type(batches))
print(batches)
batch_stream = batches.as_numpy_iterator()
print(type(batch_stream))
print(batch_stream)
b = next(batch_stream)
print(type(b))
print(b["image"].shape)
b = batch_stream.next()
print(type(b))
print(b["image"].shape)
ds = tfds.load("mnist", split="train")
batches = ds.repeat().batch(batch_size)
batch_stream = batches.as_numpy_iterator()
def process_stream(stream):
b = next(stream)
X = b["image"]
y = b["label"]
d = {"X": X, "y": y}
yield d
my_stream = process_stream(batch_stream)
b = next(my_stream)
print(type(b))
print(b["X"].shape)
b = my_stream.next()
print(type(b))
print(b["X"].shape)
def sample_categorical(N, C):
p = (1 / C) * np.ones(C)
y = np.random.choice(C, size=N, p=p)
return y
def get_datasets_rnd():
Ntrain = 1000
Ntest = 1000
D = 5
C = 10
train_ds = {"X": np.random.randn(Ntrain, D), "y": sample_categorical(Ntrain, C)}
test_ds = {"X": np.random.randn(Ntest, D), "y": sample_categorical(Ntest, C)}
return train_ds, test_ds
def get_datasets_logreg(key):
Ntrain = 1000
Ntest = 1000
D = 5
C = 10
W = jax.random.normal(key, (D, C))
Xtrain = jax.random.normal(key, (Ntrain, D))
logits = jnp.dot(Xtrain, W)
ytrain = jax.random.categorical(key, logits)
Xtest = jax.random.normal(key, (Ntest, D))
logits = jnp.dot(Xtest, W)
ytest = jax.random.categorical(key, logits)
train_ds = {"X": Xtrain, "y": ytrain}
test_ds = {"X": Xtest, "y": ytest}
return train_ds, test_ds
Explanation: Language datasets
Various datasets are used in the natural language processing (NLP) communities.
TODO: fill in.
Graveyard
Here we store some scratch code that you can ignore,
End of explanation |
9,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
About the data
Step1: Main topics
Population and sample
Point estimation of the population variance
Main example
We take a closer look at the wholesale cannabis (plant) price data for the 51 US states that was covered in Chapter 21.
In particular, using California as an example, we cover how to compute point estimates of the mean and variance of the transaction prices for all wholesale cannabis trades in each state.
Main modules
pandas
Step2: Population and sample
The wholesale cannabis prices contained in the Weed_Price.csv file do not cover every wholesale trade made in the US; they contain only a small subset of the trades.
Data that gathers only a small portion of the subjects under investigation is called a sample.
In contrast, the collection of all wholesale cannabis prices traded in the US is called the population of the subjects we want to investigate.
Here we use the sample contained in Weed_Price.csv to estimate the variance of the population and to examine the correlation between trades made in different states.
Reference
Step3: Now we can compute the point estimate of the variance of the population of all high-quality (HighQ) cannabis transaction prices traded in California.
Note
Step4: 주의 | Python Code:
from GongSu21_Statistics_Averages import *
Explanation: About the data: the material covered here was created with reference to the content of the site below.
https://github.com/rouseguy/intro2stats
Point estimation of the population variance
Notes
We want to make use of the Chapter 21 material covered last time.
Therefore we must import the Python file that packages the Chapter 21 material as a module, as shown below.
Note: the GongSu21_Statistics_Averages.py file must be in the same directory.
End of explanation
prices_pd.head()
Explanation: Main topics
Population and sample
Point estimation of the population variance
Main example
We take a closer look at the wholesale cannabis (plant) price data for the 51 US states that was covered in Chapter 21.
In particular, using California as an example, we cover how to compute point estimates of the mean and variance of the transaction prices for all wholesale cannabis trades in each state.
Main modules
pandas: a module dedicated to statistical analysis
Built on top of the numpy module and specialised for statistical analysis.
Supports functionality that works much like Microsoft Excel.
datetime: a module that helps display dates and times appropriately
scipy: a module that supports numerical computation, engineering mathematics, and so on
Note: the modules mentioned above have already been imported by the GongSu21_Statistics_Averages.py module.
Data used today
Wholesale cannabis prices and sale dates by state: Weed_Price.csv
The figure below shows part of the Weed_Price.csv file, which contains the state-by-state US cannabis sales data, opened in Excel.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/weed_price.png", width=600>
</td>
</tr>
</table>
</p>
Note: the file mentioned above has already been loaded into a variable called prices_pd in the GongSu21_Statistics_Averages module.
It is also already sorted by state (State) and transaction date (date).
Therefore, as you can see below, the first five rows of prices_pd contain, for example, the five earliest trades made in Alabama, the state whose name comes first alphabetically.
End of explanation
california_pd['HighQ_dev'] = (california_pd['HighQ'] - ca_mean) ** 2
california_pd.head()
Explanation: Population and sample
The wholesale cannabis prices contained in the Weed_Price.csv file do not cover every wholesale trade made in the US; they contain only a small subset of the trades.
Data that gathers only a small portion of the subjects under investigation is called a sample.
In contrast, the collection of all wholesale cannabis prices traded in the US is called the population of the subjects we want to investigate.
Here we use the sample contained in Weed_Price.csv to estimate the variance of the population and to examine the correlation between trades made in different states.
Reference: for a more detailed explanation of populations, samples, and point estimation, see the two files below.
* GongSu22_Statistics_Sampling_a.pdf
* GongSu22_Statistics_Sampling_b.pdf
Point estimation of the population mean and variance
Point estimate of the population mean: we simply use the sample mean.
$$\hat{x} = \bar x = \frac{\Sigma_{i=1}^{n} x_i}{n}$$
$\hat x\,\,$ denotes the point estimate of the population mean
$\bar x$ denotes the mean of the sample data
Point estimate of the population variance: the population variance can be estimated from the sample data.
$$\hat\sigma\,\, {}^2 = s^2 = \frac{\Sigma_{i = 1}^{n}(x_i - \bar x)^2}{n-1}$$
$\hat \sigma\,\, {}^2$ denotes the point estimate of the population variance
Note:
* Note that when computing $s^2$ we divide by $n-1$ rather than $n$.
* This is because the population variance is generally somewhat larger than the sample variance.
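As a quick check of the $n-1$ denominator, here is a small sketch (pandas' var() already divides by $n-1$ by default, while numpy's np.var() divides by $n$):
```python
import numpy as np
import pandas as pd

x = pd.Series([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(x.var())            # sample-based estimate, divides by n-1 (ddof=1)
print(np.var(x))          # divides by n (ddof=0)
print(np.var(x, ddof=1))  # matches the pandas result
```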
Point estimate of the variance of all wholesale HighQ cannabis prices traded in California
First, from the data contained in prices_pd, we need a calculation over the prices of high-quality (HighQ) cannabis traded in California.
In other words, this is the preparation step for computing the numerator of the formula below.
$$s^2 = \frac{\Sigma_{i = 1}^{n}(x_i - \bar x)^2}{n-1}$$
Note: the mean wholesale price of high-quality (HighQ) cannabis traded in California has already been computed as ca_mean.
End of explanation
ca_HighQ_variance = california_pd.HighQ_dev.sum() / (ca_count - 1)
ca_HighQ_variance
Explanation: Now we can compute the point estimate of the variance of the population of all high-quality (HighQ) cannabis transaction prices traded in California.
Note: the sample size is ca_count.
End of explanation
# standard deviation of the wholesale HighQ cannabis prices traded in California
ca_HighQ_SD = np.sqrt(ca_HighQ_variance)
ca_HighQ_SD
Explanation: Note:
* Operations on a DataFrame are applied element-wise, just like operations on a numpy array.
* Remember how the sum method is used here.
Point estimation of the standard deviation
Simply take the square root of the value obtained as the point estimate of the population variance.
End of explanation |
9,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Session 8 - Source model calibration using PEST and Veneer
PEST is a highly capable system for model calibration, and for sensitivity and uncertainty analysis. PEST is independent of any particular modelling software and modellers have connected Source to PEST in a number of ways.
This tutorial looks at specific support in veneer-py for calibration Source models with PEST. Notably, you can describe the PEST job in the notebook and veneer-py will generate the required PEST configuration files.
At the time of writing, this functionality supports basic calibration of model parameters to timeseries observations.
Get PEST
PEST can be downloaded from http
Step1: Also as before, we need a copy of the Veneer client for each copy of the server
Step2: The catchment
We haven't used this project in the earlier tutorials. We can query any of the servers for the network (just as we would if Veneer were running within the Source application)
Step3: Describing the PEST 'Job'
When configuring PEST to work with a particular model, you 'teach' PEST how to communicate with the model by describing the format of one or more text input files expected by the model and one or more text output files produced by the model. PEST will then generate updated input files for each simulation and read the updated output files that result from the simulation.
veneer-py has some basic functionality for setting up PEST runs that avoids the need to directly edit the PEST configuration files.
With veneer-py, you can describe a PEST job in Python, including describing the Source model parameters that you want to calibrate and the outputs that you want to calibrate against.
veneer-py will then write out the following PEST configuration files and invoke PEST
Step4: At the very least, we need to give a Case a name - which is the basis for all the filenames that will be written out.
You can also specify
Step5: PEST has many options - most of which we leave at default. One option that we currently need is to put PEST into single precision mode. This is because PEST, in double precision mode, uses a syntax for floating point literals that is not valid Python
Step6: Configuring the calibration parameters
PEST needs to be told about the calibration parameters
This is a two step process
Step7: We only want to calibrate x1-x4 - (C and k are specific to the eWater version of GR4J - they provide a baseflow filter)
Step8: We need to assign ranges to each of these. The model implementation in Source has metadata about suitable ranges - but at this stage, there isn't an easy way to interrogate that information from veneer-py. You can check in the Source user interface (Edit|Rainfall Runoff Models) to see the ranges.
Having done that, we'll construct a Python dictionary of parameter ranges
Note
Step9: Now, we can loop over each parameter and 'teach' PEST about it - ie tell PEST how to modify the parameter and tell PEST what range we want to calibrate over
Step10: Note
Step11: There are still gaps in the PTF - eg the # Compute Stats section - that will come as we describe the outputs and observations.
The PCF (PEST Control File) is also partly complete
Step12: Note
Step13: Configuring the outputs and observations
PEST needs to know what its calibrating to. In our case, that means a time series of observed data, and a corresponding modelled time series. We also need the objective function and the time period for the calibration.
PEST has a tool for processing time series data (TSPROC) but we don't use it here. Rather, we compute the objective values in Python and just pass single numbers back to PEST.
We'll start by loading the observed flow time series data into this notebook and exploring the data a bit...
Note
Step14: Note
Step15: This (synthetic) observed flow sequence relates to the (synthetic) gauge towards the bottom of the system. What was it called?
Step16: Aaah, we want 'G123456A'
Step17: Now we can tell PEST about the observations and the comparison we want.
We need to tell PEST how to load the observed data - and the pandas command we used to do a test load will help
Step18: And we can set up the comparison
Step19: veneer-py configures the observation based on the column name in the observed flow file (so that you can have multiple comparisons from different columns and files)
Step20: We also need to reference a stats function. You can write your own (but you'll need to store it in a .py file) or you can access one from veneer.stats
Step21: We need to do one more thing
Step22: If we look at the content of the PEST config files now, we'll see more details filled in
Step23: Running PEST
We can now invoke the PEST run.
When you call calibration.run(), PEST will start, and, in this case, it will start parallel PEST mode with n workers. However all output of PEST will be in files, and in the command prompt window from which you started the Jupyter notebook. You will need to look in these places for progress of the calibration.
When PEST finishes, you'll get the calibrated parameters back as well as details of the PEST run. The PEST manual covers the outputs of a PEST run in much detail
Note | Python Code:
from veneer.manage import start, create_command_line, kill_all_now
import veneer
veneer_install = 'D:\\src\\projects\\Veneer\\Compiled\\Source 4.1.1.4484 (public version)'
source_version = '4.1.1'
cmd_directory = 'E:\\temp\\veneer_cmd'
path = create_command_line(veneer_install,source_version,dest=cmd_directory)
path
catchment_project='ExampleProject/CalibrationExample.rsproj'
num_copies=20 # Important - set this to be a number ~ the number of CPU cores in your system!
first_port=9950
processes,ports = start(catchment_project,
n_instances=num_copies,
ports=first_port,
debug=True,
veneer_exe=path,
remote=False)
Explanation: Session 8 - Source model calibration using PEST and Veneer
PEST is a highly capable system for model calibration, and for sensitivity and uncertainty analysis. PEST is independent of any particular modelling software and modellers have connected Source to PEST in a number of ways.
This tutorial looks at specific support in veneer-py for calibration Source models with PEST. Notably, you can describe the PEST job in the notebook and veneer-py will generate the required PEST configuration files.
At the time of writing, this functionality supports basic calibration of model parameters to timeseries observations.
Get PEST
PEST can be downloaded from http://www.pesthomepage.org/. You want the standard PC PEST for Windows - not BEOPEST. PEST is delivered as a zip file - unzip to a directory on your system (ideally a directory without spaces or special characters in the path - something like C:\PEST is good)
This tutorial will gloss over a lot of PEST specifics in order to focus on the connection to Source and Veneer. The PEST manual is very comprehensive and it is available from the PEST homepage and included with the software.
Overview
How will it work?
Describing the PEST ‘job'
Veneer/Source end-point(s)
Getting the model ‘ready’ for optimisation
Describing the calibration parameters
Describing the objective and any observed data
What veneer-py is taking care of
Running PEST, feedback
Limitations
Which Model?
Note: This session uses ExampleProject/CalibrationExample.rsproj. You are welcome to work with your own model instead, however you will need to change the notebook text at certain points to reflect the names of nodes, links and functions in your model file.
How will it work?
PEST requires a series of configuration file that describe the parameter estimation problem, including:
What program to run and what command line arguments to use
A description of one or more text files that PEST can modify in order to create a unique run of the model
Description of one or more results text files that PEST can read to interpret the results of the model run
Optional files that control parallel processing by running multiple copies of the model in parallel.
veneer-py has functionality for describing the calibration problem in Python, and then subsequently generating the files expected by PEST. This includes generating a Python script, which becomes the program that PEST invokes. This Python program calls to a running copy of Veneer to execute the Source model and retrieve results.
When running PEST in parallel, PEST controls a number of 'slave' processes, each of which is responsible for running one copy of the simulation at a time:
In this way, you can run a PEST calibration for a Source model that is running in the main Source windows application. You can start multiple copies of Source in order to support parallel calibrations. Alternatively, you can use the Veneer Command Line tool.
Setting up the servers
As in Tutorial 7, we'll use the command line and set up a number of copies of the server using the start() function
End of explanation
vs = [veneer.Veneer(port=p) for p in ports]
Explanation: Also as before, we need a copy of the Veneer client for each copy of the server:
End of explanation
%matplotlib inline
v = vs[0]
v.network().as_dataframe().plot()
Explanation: The catchment
We haven't used this project in the earlier tutorials. We can query any of the servers for the network (just as we would if Veneer were running within the Source application)
End of explanation
from veneer.pest import Case
Explanation: Describing the PEST 'Job'
When configuring PEST to work with a particular model, you 'teach' PEST how to communicate with the model by describing the format of one or more text input files expected by the model and one or more text output files produced by the model. PEST will then generate updated input files for each simulation and read the updated output files that result from the simulation.
veneer-py has some basic functionality for setting up PEST runs that avoids the need to directly edit the PEST configuration files.
With veneer-py, you can describe a PEST job in Python, including describing the Source model parameters that you want to calibrate and the outputs that you want to calibrate against.
veneer-py will then write out the following PEST configuration files and invoke PEST:
PTF: A PEST Template File, which describes how to run the model, including setting relevant model parameters and ensuring that the required model outputs are produced. PEST will substitute model parameters into this file, based on the configuration in the PCF. In the case of veneer-py, the PTF is a template of a Python script, which uses veneer-py to connect to a Source/Veneer server, sets parameters, runs the model and then compares results using defined statistics.
PIF: A PEST Instruction File, which tells PEST where to find the observations that it needs for evaluating a simulation. In the case of veneer-py, these outputs are produced by the logic in the PTF.
PCF: A PEST Control File, which describes the parameters to be optimised and the observations to optimise to, as well as how to run the model.
PRF: (OPTIONAL), used when performing a parallel calibration with more than one Veneer/Source server. Describes where (on the filesystem) to run the model from.
To establish all these files and run a PEST job, you can use functionality in veneer-py to describe a PEST 'Case'.
The 'Case' will ultimately know everything about the calibration
what parameters you are calibrating and how they are constrained,
what you are calibrating to,
how many Source servers you have at your disposal
any options related to PEST, such as which optimisation routine to use
End of explanation
calibration = Case('CalibrationCase',optimiser='cmaes_p',model_servers=ports)
Explanation: At the very least, we need to give a Case a name - which is the basis for all the filenames that will be written out.
You can also specify:
an optimiser (the default is pest, but the PEST software also comes several others)
a list of Veneer/Source servers, described as a list of ports
a random number seed in order to either make the optimisation deterministic or allow random variation
End of explanation
calibration.options['PRECIS']='single'
Explanation: PEST has many options - most of which we leave at default. One option that we currently need is to put PEST into single precision mode. This is because PEST, in double precision mode, uses a syntax for floating point literals that is not valid Python:
End of explanation
v.model.find_model_type('GR4J')
params = v.model.find_parameters('TIME.Models.RainfallRunoff.GR4J.GR4J')
params
Explanation: Configuring the calibration parameters
PEST needs to be told about the calibration parameters
This is a two step process:
* Specify how to apply the parameter (using a statement as you would in Veneer. eg model.catchment.runoff.set_param_values('baseflowCoefficient',0.5,fus=list(fu_types)) but with the actual value (0.5) changed to the PEST parameter name, with markers (eg @v_bfCoeff@). This information forms part of a dynamically generated Python script that PEST will modify and run for each simulation
* Tell PEST about the parameter, including its range.
* This forms part of the PEST control file
* In addition to the range, we also specify the initial value.
* We set the initial value to be halfway between the min and max
We're performing a rainfall runoff calibration, with the four parameter GR4J rainfall runoff used in the model
We're going to perform a lumped calibration - ie one parameter set everywhere - but we could, alternatively, calibrate distinct parameters by functional unit type, or in different parts of the catchment.
End of explanation
params = params[2:]
params
Explanation: We only want to calibrate x1-x4 - (C and k are specific to the eWater version of GR4J - they provide a baseflow filter)
End of explanation
ranges = {
'x1':[100.0,500.0],
'x2':[1.0,5.0],
'x3':[1.0,200.0],
'x4':[0.5,3.0]
}
ranges
Explanation: We need to assign ranges to each of these. The model implementation in Source has metadata about suitable ranges - but at this stage, there isn't an easy way to interrogate that information from veneer-py. You can check in the Source user interface (Edit|Rainfall Runoff Models) to see the ranges.
Having done that, we'll construct a Python dictionary of parameter ranges
Note: These are quite narrow ranges for GR4J. This is done to keep the optimisation short in the context of the tutorial (and because the 'observed' data is actually synthetic data generated from Source)
End of explanation
for param,param_range in ranges.items():
print('Configuring %s'%param)
pest_pname = '$'+param+'$'
# 1. Tell PEST how to set the parameter
calibration.parameters.model.catchment.runoff.set_param_values(param,pest_pname)
# 2. Details of the PEST parameter. name, starting value, min, max
# Decide what to use for the initial value... half way between min and max!
initial = 0.5*(param_range[0]+param_range[1])
calibration.parameters.describe(pest_pname,initial,param_range[0],param_range[1])
Explanation: Now, we can loop over each parameter and 'teach' PEST about it - ie tell PEST how to modify the parameter and tell PEST what range we want to calibrate over:
End of explanation
print(calibration.ptf_text())
Explanation: Note: When we tell PEST how to set the parameter in the model, we use a Python statement that looks similar to statements from earlier sessions:
```python
calibration.parameters.model.catchment.runoff.set_param_values(param, pest_pname)
# is similar to
v.model.catchment.runoff.set_param_values(param, value)
```
The instructions to PEST will in fact translate to instructions to a Veneer client (`v`). So everything after `.parameters.` should be something you can call on a Veneer client. The other main difference is that, instead of passing in an actual value at this point, you pass `pest_pname`, which will be something like `$x1$` and will get translated to an actual value at runtime.
The previous code has gone some way towards configuring the PEST job. We can get a preview of what has been achieved by asking to see the configuration as it stands.
First, the state of the PTF (PEST Template File). You can see that it looks very much like a Python script, except where it has references to things like $x1$ and the like
End of explanation
print(calibration.pcf_text())
Explanation: There are still gaps in the PTF - eg the # Compute Stats section - that will come as we describe the outputs and observations.
The PCF (PEST Control File) is also partly complete:
End of explanation
calibration.options
Explanation: Note: There are a lot of options in the PCF - and we are using defaults. They are very well described in the PEST Manual and can be specified using calibration.options
End of explanation
import pandas as pd
flows = pd.read_csv('SyntheticObservedFlow.csv',parse_dates=True,dayfirst=True,index_col=0)
flows[0::50] # Show every fifty days
flows.plot()
Explanation: Configuring the outputs and observations
PEST needs to know what its calibrating to. In our case, that means a time series of observed data, and a corresponding modelled time series. We also need the objective function and the time period for the calibration.
PEST has a tool for processing time series data (TSPROC) but we don't use it here. Rather, we compute the objective values in Python and just pass single numbers back to PEST.
We'll start by loading the observed flow time series data into this notebook and exploring the data a bit...
Note: Loading the data at this point serves two purposes:
1. We become more familiar with the data and can identify things like the time period we want to calibrate over, and
2. We work out the exact pandas command needed to load the data. We need to give this command to PEST later on to call as part of the simulations...
End of explanation
start,end = flows.index[[0,-1]]
start,end
Explanation: Note: If your observed data had gaps during your simulation period, this would be a good point to establish the overlapping period in order to inform the simulation/calibration dates.
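For example, a rough sketch of finding the overlap (the simulation window dates below are hypothetical placeholders):
```python
obs = flows['Flow'].dropna()  # drop any gaps in the observed record
# hypothetical model run dates - replace with your simulation period
sim_start, sim_end = pd.Timestamp('2010-01-01'), pd.Timestamp('2015-12-31')
start = max(obs.index[0], sim_start)  # latest of the two start dates
end = min(obs.index[-1], sim_end)     # earliest of the two end dates
```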
In our case, the data aligns with the simulation, so the simulation dates we want are just the start and end of the time series:
End of explanation
network = v.network()
nodes = network['features'].find_by_feature_type('node')
nodes._all_values('name')
Explanation: This (synthetic) observed flow sequence relates to the (synthetic) gauge towards the bottom of the system. What was it called?
End of explanation
calibration_node = 'G123456A'
Explanation: Aaah, we want 'G123456A'
End of explanation
calibration.observations.data.read_csv('SyntheticObservedFlow.csv',parse_dates=True,dayfirst=True,index_col=0)
Explanation: Now we can tell PEST about the observations and the comparison we want.
We need to tell PEST how to load the observed data - and the pandas command we used to do a test load will help:
```python
pd.read_csv('SyntheticObservedFlow.csv',parse_dates=True,dayfirst=True,index_col=0)
# will become
calibration.observations.data.read_csv('SyntheticObservedFlow.csv',parse_dates=True,dayfirst=True,index_col=0)
```
End of explanation
comparison={'NetworkElement':calibration_node,'RecordingVariable':'Downstream Flow Volume'}
Explanation: And we can set up the comparison
End of explanation
flows.columns
Explanation: veneer-py configures the observation based on the column name in the observed flow file (so that you can have multiple comparisons from different columns and files)
End of explanation
from veneer import stats
dir(stats)
help(stats.nse)
calibration.observations.compare('Flow',comparison,stat=stats.nse,aggregation='daily')
Explanation: We also need to reference a stats function. You can write your own (but you'll need to store it in a .py file) or you can access one from veneer.stats
End of explanation
for v in vs:
veneer.log('Configuring recording for server on port %d'%v.port)
v.configure_recording(enable=[comparison],disable=[{}])
Explanation: We need to do one more thing: We need to make sure that each of our n servers is configured to record the output we require.
We'll need to loop over each of our veneer clients and configure recording. We have comparison which describes what we want to record. We can disable all other outputs
End of explanation
print(calibration.ptf_text())
print(calibration.pif_text())
print(calibration.pcf_text())
print(calibration.prf_text())
Explanation: If we look at the content of the PEST config files now, we'll see more details filled in:
End of explanation
pest_path='C:\\PEST'
import os
os.environ['PATH'] = os.environ['PATH']+';'+pest_path
results = calibration.run()
#results = calibration.get_results()
results['parameters']
kill_all_now(processes)
Explanation: Running PEST
We can now invoke the PEST run.
When you call calibration.run(), PEST will start, and, in this case, it will start parallel PEST mode with n workers. However all output of PEST will be in files, and in the command prompt window from which you started the Jupyter notebook. You will need to look in these places for progress of the calibration.
When PEST finishes, you'll get the calibrated parameters back as well as details of the PEST run. The PEST manual covers the outputs of a PEST run in much detail
Note: PEST needs to be in your Windows path. If it's not, the run() command won't work. You can temporarily add PEST to your path from within the notebook:
End of explanation |
9,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Back to data again
We started by analysing the Pokemon data and plotting several charts to get a view of our data. The problem with data is that in real life, data is not clean (dirty data)...
On the other hand, since we already know a bit about handling DataFrames and charts, we can more easily spot these little problems ;)
Step1: Titanic dataset
Step2: Analyse data
Step3: To look at the data
Step4: Meaning of the columns
Step5: 2) Know the column types
Step6: 3) Know the distribution of our data
Step7: Only for numeric data
Column-by-column analysis
Step8: Embarked
Step9: It looks like a few values are missing for Embarked, and we hate missing values...
Step10: We know that the most frequent value (by far) is "S", so we will fill the empty values with "S"
Step11: We check whether "Embarked" is related to passenger survival
Step12: Fare
Step13: Is the fare related to survival?
Step14: Passengers' sex
Step15: Passengers' Pclass
Step16: Passengers' age
Step17: When a non-negligible amount of data is missing (for continuous data), there are several possible ways to solve the problem
Step18: 24 is the most frequent value
Step19: Find the passengers' mean age
Step20: Create a data series that respects the original distribution
Step21: The last method is the one most faithful to the data
Step22: Family
Step23: Analyse général des données | Python Code:
Image(url="http://i.giphy.com/LY1DH1AMbG0tq.gif")
Explanation: Back to data again
We started by analysing the Pokemon data and plotting several charts to get a view of our data. The problem with data is that in real life, data is not clean (dirty data)...
On the other hand, since we already know a bit about handling DataFrames and charts, we can more easily spot these little problems ;)
End of explanation
Image(url="http://i.giphy.com/12eayhW3TRPCjS.gif")
Explanation: Titanic dataset:
We are going to analyse a dataset about the passengers of the Titanic, and as you probably know (spoiler), there was a slight problem...
Download the train.csv file into the data directory*
End of explanation
# Load the library
import pandas as pd
# Show the help
#pd.read_csv?
data = pd.read_csv('data/train.csv') # Load the data.
Explanation: Analysing the data:
Quick reminders
To read a .csv file we use the read_csv function from the Pandas library. If you want to see the full set of parameters of the function: read_csv? (a help window will open).
End of explanation
data.head()
data.tail()
Explanation: To look at the data:
- .head() -> shows the first 5 rows
- .tail() -> shows the last 5 rows
- .head(15) -> shows the first 15 rows
End of explanation
data.shape
Explanation: Meaning of the columns:
Survived: indicates whether the passenger died or survived, for the training data. This is what we have to predict on the test file. This value is boolean (0 or 1): 1 for survival, 0 for death
Pclass: the cabin class on the ship (3 levels), 1 being the best class and 3 the "economy" class. It is a text variable that will need to be transformed carefully because there is a notion of order.
Name: the person's name
Sex: the passenger's sex
Age: the passenger's age
SibSp (Sibling and Spouse): the number of the passenger's family members of the type brother, sister, half-brother, half-sister, husband, wife...
Parch (Parent and Child): the number of the passenger's family members of the type father, mother, son, daughter, stepson, etc.
Ticket: the ticket number
Fare: the ticket price
Cabin: the cabin number
Embarked: the passenger's port of embarkation: C -> Cherbourg; Q -> Queenstown; S -> Southampton
Macro-level analysis of the data
1) Know the number of rows and columns:
End of explanation
data.dtypes
Explanation: 2) Know the column types:
End of explanation
data.describe()
Explanation: 3) Know the distribution of our data:
End of explanation
data.head()
Explanation: Only for numeric data
Column-by-column analysis:
End of explanation
data.info()
data.Embarked.value_counts(normalize=True)
Explanation: Embarked:
End of explanation
Image(url="http://i.giphy.com/I3wsrN9ndx11m.gif")
# Look at the mask of missing values for a column
pd.isnull(data.Embarked)
# Find the rows where values are missing
data[pd.isnull(data.Embarked)] # Show our dataframe with a condition
Explanation: It looks like a few values are missing for Embarked, and we hate missing values...
End of explanation
data["Embarked"] = data["Embarked"].fillna("S")
data[pd.isnull(data.Embarked)] # There are no more missing values
data.head()
Explanation: We know that the most frequent value (by far) is "S", so we will fill the empty values with "S"
End of explanation
sns.barplot(x='Survived', y="Embarked", data=data)#, order=[1,0])
# What is the mean survival rate for each "Embarked" value
Embarked_group = data[["Embarked", "Survived"]].groupby(['Embarked'], as_index=False).mean()
Embarked_group
# Plot
sns.barplot(x='Embarked', y='Survived', data=Embarked_group)
Explanation: We check whether "Embarked" is related to passenger survival:
End of explanation
data.Fare.describe()
sns.boxplot(data.Fare)
Explanation: Fare: the price of the trip
Analysing a continuous variable
End of explanation
sns.factorplot(x="Survived", y="Fare",
data=data, kind="box")
# Analyse the distribution of ticket prices
sns.distplot(data.Fare)
fare_survived = data[data.Survived == 1]
fare_not_survived = data[data.Survived == 0]
fare_not_survived.head()
plt.figure(figsize=(12,5)) # make the figure a little larger
sns.distplot(fare_survived.Fare, label="Survived") # Survived
sns.distplot(fare_not_survived.Fare, label="Dead") # Dead
plt.legend() # show the legend
Explanation: Is the fare related to survival?
End of explanation
data.Sex.value_counts()
sns.countplot(data.Sex)
# Mean survival rate by passenger sex
grp_sex = data[["Sex", "Survived"]].groupby(['Sex'],as_index=False).mean()
grp_sex
sns.barplot(x='Sex', y='Survived', data=grp_sex)
Explanation: Passengers' sex:
End of explanation
sns.countplot(data.Pclass)
# Mean survival rate by passenger class
grp_class = data[["Pclass", "Survived"]].groupby(['Pclass'],as_index=False).mean()
sns.barplot(x='Pclass', y='Survived', data=grp_class)
Explanation: Passengers' Pclass:
End of explanation
data.info()
sns.boxplot(data.Age)
data[pd.isnull(data.Age)].head()
len(data[pd.isnull(data.Age)]) # number of rows where the age is missing...
Explanation: Passengers' age:
End of explanation
data.Age.value_counts()
Explanation: When a non-negligible amount of data is missing (for continuous data), there are several possible ways to solve the problem:
- Use the most frequent value
- Use the mean
- Use the median
- Generate data that respects the original distribution
Find the most frequent value:
End of explanation
data_age_1 = data.copy() # make a copy of our original DataFrame
data_age_1['Age'] = data_age_1['Age'].fillna(24)
# Age distribution of our original data (where Age is not null)
sns.distplot(data[~pd.isnull(data.Age)]['Age'])
# Age distribution when the missing values are filled with the most frequent value
sns.distplot(data_age_1.Age)
Explanation: 24 is the most frequent value
End of explanation
moyenne_age = data.Age.mean()
moyenne_age
data_age_2 = data.copy() # make a copy of our original DataFrame
data_age_2['Age'] = data_age_2['Age'].fillna(moyenne_age)
sns.distplot(data_age_2.Age)
#### Find the passengers' median age:
median_age = data.Age.median()
median_age
data_age_3 = data.copy() # make a copy of our original DataFrame
data_age_3['Age'] = data_age_3['Age'].fillna(median_age)
sns.distplot(data_age_3.Age)
Explanation: Find the passengers' mean age:
End of explanation
mean_age = data["Age"].mean()
std_age = data["Age"].std()
nbr_age_nan = data["Age"].isnull().sum()
print("The mean is " + str(mean_age) + " with a standard deviation of " + str(std_age) + " and " + str(nbr_age_nan) + " values are missing")
# numerical computing library
import numpy as np
np.random.randint(1, 10, 1)
#(mean - std) & (mean + std)
new_age = np.random.randint(mean_age - std_age, mean_age + std_age, size = nbr_age_nan)
new_age
new_age.mean()
data_age_4 = data.copy() # make a copy of our original DataFrame
# Fill the missing age values with our new data series:
data_age_4.loc[pd.isnull(data_age_4['Age']), 'Age'] = new_age
# Simulated new age series
sns.distplot(data_age_4.Age)
# Original data:
sns.distplot(data[~pd.isnull(data.Age)]['Age'])
Explanation: Create a data series that respects the original distribution:
A continuous data series is characterised by its mean and standard deviation
End of explanation
# Relationship between age and passenger survival
# Convert the age to int
data_age_4['Age'] = data_age_4['Age'].astype('int')
# Take the mean survival rate by age
grp_age = data_age_4[["Age", "Survived"]].groupby(['Age'], as_index=False).mean()
plt.figure(figsize=(15,5)) # make the figure larger
sns.barplot(x='Age', y='Survived', data=grp_age)
age_survived = data_age_4[data_age_4.Survived == 1]
age_not_survived = data_age_4[data_age_4.Survived == 0]
plt.figure(figsize=(15,5)) # make the figure larger
sns.kdeplot(age_survived['Age'], label="Survived") # Survived
sns.kdeplot(age_not_survived['Age'], label="Dead") # Dead
plt.legend() # show the legend
sns.lmplot('Age','Survived',hue='Pclass',data=data_age_4)
# The reaction of the Big Data students who want to boost their Kaggle scores ;)
Image(url="http://i.giphy.com/xTiTnnLkYTDWSOWSHK.gif")
Explanation: The last method is the one most faithful to the data
End of explanation
data[['Parch', 'SibSp']].describe()
data['Family'] = data["Parch"] + data["SibSp"]
data.Family.value_counts()
# Does the person have family on board (yes or no) --> boolean
data['is_Family'] = 0 # initialise our new column
data.loc[data['Family'] > 0, 'is_Family'] = 1
data.loc[data['Family'] == 0, 'is_Family'] = 0
data.is_Family.value_counts()
sns.countplot(x='is_Family', data=data)
grp_is_family = data[["is_Family", "Survived"]].groupby(['is_Family'],as_index=False).mean()
sns.barplot(x='is_Family', y='Survived', data=grp_is_family)
Explanation: Family:
SibSp (Sibling and Spouse): the number of the passenger's family members of the type brother, sister, half-brother, half-sister, husband, wife...
Parch (Parent and Child): the number of the passenger's family members of the type father, mother, son, daughter, stepson, etc.
End of explanation
sns.factorplot('is_Family', data=data,hue='Sex', kind='count')
sns.factorplot('Pclass', data=data,hue='Sex', kind='count')
fig = sns.FacetGrid(data, row="Sex", col='Pclass')
fig.map(sns.barplot,'is_Family', 'Survived')
fig = sns.FacetGrid(data, row="Sex", col='Pclass')
fig.map(sns.kdeplot,'Age')
Explanation: General analysis of the data
End of explanation |
9,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to create and use a Secret
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. In this notebook, we will learn how to create a Secret and how to use Secrets as files from a Pod, as seen in https
Step1: Load config from default location
Step2: Create API endpoint instance and API resource instances
Step3: Fill required Secret fields
Step4: Create Secret
Step5: Create test Pod API resource instances
Step6: Add volumeMount which would be used to hold secret
Step7: Create volume required by secret
Step8: Create the Pod
Step9: View secret being used within the pod
Wait for atleast 10 seconds to ensure pod is running before executing this section.
Step10: Delete Pod
Step11: Delete Secret | Python Code:
from kubernetes import client, config
Explanation: How to create and use a Secret
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. In this notebook, we will learn how to create a Secret and how to use Secrets as files from a Pod, as seen in https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
End of explanation
config.load_kube_config()
client.configuration.assert_hostname = False
Explanation: Load config from default location
End of explanation
api_instance = client.CoreV1Api()
sec = client.V1Secret()
Explanation: Create API endpoint instance and API resource instances
End of explanation
sec.metadata = client.V1ObjectMeta(name="mysecret")
sec.type = "Opaque"
sec.data = {"username": "bXl1c2VybmFtZQ==", "password": "bXlwYXNzd29yZA=="}
Explanation: Fill required Secret fields
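As an aside, the two values above are just base64-encoded strings; a quick sketch of checking what they decode to:
```python
import base64

print(base64.b64decode("bXl1c2VybmFtZQ==").decode())  # myusername
print(base64.b64decode("bXlwYXNzd29yZA==").decode())  # mypassword
```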
End of explanation
api_instance.create_namespaced_secret(namespace="default", body=sec)
Explanation: Create Secret
End of explanation
pod = client.V1Pod()
spec = client.V1PodSpec()
pod.metadata = client.V1ObjectMeta(name="mypod")
container = client.V1Container()
container.name = "mypod"
container.image = "redis"
Explanation: Create test Pod API resource instances
End of explanation
volume_mounts = [client.V1VolumeMount()]
volume_mounts[0].mount_path = "/data/redis"
volume_mounts[0].name = "foo"
container.volume_mounts = volume_mounts
Explanation: Add volumeMount which would be used to hold secret
End of explanation
spec.volumes = [client.V1Volume(name="foo")]
spec.volumes[0].secret = client.V1SecretVolumeSource(secret_name="mysecret")
spec.containers = [container]
pod.spec = spec
Explanation: Create volume required by secret
End of explanation
api_instance.create_namespaced_pod(namespace="default",body=pod)
Explanation: Create the Pod
End of explanation
user = api_instance.connect_get_namespaced_pod_exec(name="mypod", namespace="default", command=[ "/bin/sh", "-c", "cat /data/redis/username" ], stderr=True, stdin=False, stdout=True, tty=False)
print(user)
passwd = api_instance.connect_get_namespaced_pod_exec(name="mypod", namespace="default", command=[ "/bin/sh", "-c", "cat /data/redis/password" ], stderr=True, stdin=False, stdout=True, tty=False)
print(passwd)
Explanation: View secret being used within the pod
Wait at least 10 seconds to ensure the pod is running before executing this section.
End of explanation
api_instance.delete_namespaced_pod(name="mypod", namespace="default", body=client.V1DeleteOptions())
Explanation: Delete Pod
End of explanation
api_instance.delete_namespaced_secret(name="mysecret", namespace="default", body=sec)
Explanation: Delete Secret
End of explanation |
9,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Social Network Analysis
Step1: If we're trying to build a network we need two things
Step2: That's a lot of information! Let's grab out all of the speakers. All the speaker elements will have a text attribute that has their actual name, or abbreviation of their name.
Step3: To get a unique list we'll use set
Step4: Great start! In Network Analysis there are two fundamental principles. A node is an entity, it can have relationships with other entities. In literature, this is often a character, but it could be a Twitter user, organization, geographic location, or even words!
We may be interested in a node's properties. If it's a character, we may want to know how often they speak, age, etc. We can add this to the network as further layers.
The second concept is an edge. An edge connects nodes. We're foremost interested in the volume of connections between nodes. For literature, this would be the number of times two characters interact.
As we learned from Moretti and our readings for today, this is a very difficult task for most texts. Where does one character's speech end and another's begin? Luckily, in plays this is slightly easier to identify (though still not perfectly clear).
For Shakespeare, we'll settle for them being present in the same scene. If they're in the same scene together, we'll increase our measure of their interaction.
Thus for each character we want to know how many lines they speak in the entire play, along with which scenes they appear in. We can then collate this with the other characters.
The get_cast_dict function below will parse the XML data and extract this information.
Step5: That's all we need to make a basic network and do some analysis! We have all the character names and the scenes in which they appear. We can collate some of this information to find out in which scenes certain characters appear together. This will happen in our make_graph function.
The NetworkX Python library will parse this dictionary for us to make a graph object. Let's write a function
Step6: We can graph this using matplotlib
Step7: Our graph, G, is a powerful object. We can calculate many of the standard network analysis statistics. There are various measures of centrality, many of which were referenced in the reading.
Step8: Wikipedia defines "degree centrality"
Step9: Wikipedia defines "betweenness centrality"
Step10: Wikipedia defines "eigenvector centrality"
Step11: Challenge
What is the overlap ((rank) correlation) between the three measurements presented above? What does that mean for the play?
Bonus
Step12: We can then add this to a D3 template
Step13: We'll then IFrame in the HTML file
Step15: Gini Coefficient
Algee-Hewitt was calculating the gini coefficient of the eigenvector centralities. He essentially wanted to know whether importance in a network was evenly distributed, or concentrated in the hands of a few. The lower the gini coefficient, the more equal the distribution, the closer to 1, the closer one gets to complete inequality. I've found a function online that will calculate the gini coefficient for you!
Step16: Just to demonstrate, let's make a very unequal array
Step17: The gini coefficient should be close to 1
Step18: What if we have half zeroes and half ones?
Step19: All ones?
Step20: Now we can use the gini function on Othello to see how evenly distributed centrality is
Step21: Great, but that's not terribly interesting itself, we want to see how it relates to other plays. We'll do that for homework.
First, let's write a function to calculate Algee-Hewitt's second measure. He takes the percentage of characters in the top quartile of eigenvector centralities. You'll want to use the np.percentile method!
Challenge
Step22: Homework
I've downloaded 40 other Shakespeare texts in the exact same XML structure. | Python Code:
with open("shakespeare_data/plays_xml/othello_ps_v3.xml") as f:
othello_xml = etree.fromstring(f.read().encode())
Explanation: Social Network Analysis: NetworkX
Mark Algee-Hewitt looks at thousands of plays across centuries. But as we've learned so far, to do this we first have to figure out how to calculate the metrics we're interested in for a single text. Let's take a look at a single play. Luckily, there are databases that exist that have already annotated a lot of plays in a markup language called XML. Especially well-researched corpora have extensive metadata. We'll look at the Shakespeare corpus with data obtained from https://www.playshakespeare.com/.
We'll start by looking at Othello.
End of explanation
all_elements = list(othello_xml.iter())
all_elements
Explanation: If we're trying to build a network we need two things: 1) nodes and 2) edges. For Algee-Hewitt, and for us today, that means we need to know the characters in Othello, and with whom they communicate. We'd also like to know how often that specific interaction occurs.
We can get all elements of the XML tree by iterating over all the nodes:
End of explanation
[e.text for e in all_elements if e.tag == "speaker"]
Explanation: That's a lot of information! Let's grab out all of the speakers. All the speaker elements will have a text attribute that has their actual name, or abbreviation of their name.
End of explanation
set([e.text for e in all_elements if e.tag == "speaker"])
Explanation: To get a unique list we'll use set:
End of explanation
cast_dict = {}
for c in set([e.text for e in all_elements if e.tag == "speaker"]):
cast_dict[c] = {"num_lines": 0,
"scenes": []}
cast_dict
# extract all scene elements from the xml
scenes = [e for e in all_elements if e.tag == "scene"]
scenes
elements = [e.find("acttitle").text for e in all_elements if e.tag == "act"]
def get_cast_dict(all_elements):
'''
returns a dictionary with the total number of lines and scenes a character appears in
'''
cast_dict = {}
# first get a unique set of all characters appearing in the play
for c in set([e.text for e in all_elements if e.tag == "speaker"]):
cast_dict[c] = {"num_lines": 0,
"scenes": []}
# extract all scene elements from the xml
scenes = [e for e in all_elements if e.tag == "scene"]
acts = [e for e in all_elements if e.tag == "act"]
# acts = [e.find("acttitle").text for e in all_elements if e.tag == "act"]
for a in acts:
# get title of acts
act_title = a.find("acttitle").text
# get scene elements
scenes = [e for e in a if e.tag == "scene"]
# iterate through each scene
for sc in scenes:
# grab all the speeches in the scene
speeches = [s for s in sc.getchildren() if s.tag == "speech"]
# iterate through speeches
for s in speeches:
# increment number of lines for the speaker
cast_dict[s.find("speaker").text]["num_lines"] += len(s.findall("line"))
# find all the speaker for each speech
speakers = [s.find("speaker").text for s in speeches]
# add the title of the scene for each speaker appearing in the scene
for s in set(speakers):
cast_dict[s]["scenes"].append(act_title + " " + sc.find("scenetitle").text)
# reassign scenes to only a unique set
for c in cast_dict.keys():
cast_dict[c]["scenes"] = list(set(cast_dict[c]["scenes"]))
return cast_dict
cast_dict = get_cast_dict(all_elements)
cast_dict
Explanation: Great start! In Network Analysis there are two fundamental principles. A node is an entity, it can have relationships with other entities. In literature, this is often a character, but it could be a Twitter user, organization, geographic location, or even words!
We may be interested in a node's properties. If it's a character, we may want to know how often they speak, age, etc. We can add this to the network as further layers.
The second concept is an edge. An edge connects nodes. We're foremost interested in the volume of connections between nodes. For literature, this would be the number of times two characters interact.
As we learned from Moretti and our readings for today, this is a very difficult task for most texts. Where does one character's speech end and another's begin? Luckily, in plays this is slightly easier to identify (though still not perfectly clear).
For Shakespeare, we'll settle for them being present in the same scene. If they're in the same scene together, we'll increase our measure of their interaction.
Thus for each character we want to know how many lines they speak in the entire play, along with which scenes they appear in. We can then collate this with the other characters.
The get_cast_dict function below will parse the XML data and extract this information.
End of explanation
def make_graph(c_dict):
'''
This function accepts a dictionary with number of lines and scenes to create a
NetworkX graph object
'''
# setup graph object
G = nx.Graph()
# add nodes with attributes of number of lines and scenes
for c in c_dict.keys():
if c_dict[c]["num_lines"] > 0:
G.add_node(
c,
number_of_lines=c_dict[c]["num_lines"],
scenes=c_dict[c]["scenes"]
)
# make edges by iterating over all combinations of nodes
for (node1, data1), (node2, data2) in itertools.combinations(G.nodes(data=True), 2):
# count scenes together by getting union of their sets
scenes_together = len(set(data1['scenes']) & set(data2['scenes']))
if scenes_together:
# add more weight for more scenes together
G.add_edge(node1, node2, weight=scenes_together)
return G
G = make_graph(cast_dict)
Explanation: That's all we need to make a basic network and do some analysis! We have all the character names and the scenes in which they appear. We can collate some of this information to find out in which scenes certain characters appear together. This will happen in our make_graph function.
The NetworkX Python library will parse this dictionary for us to make a graph object. Let's write a function:
End of explanation
# nodes should be sized by number of lines
node_size = [data['number_of_lines'] for __, data in G.nodes(data=True)]
node_color = 'blue'
plt.figure(figsize=(13,8)) # make the figure size a little larger
plt.axis('off') # remove the axis, which isn't meaningful in this case
plt.title("Othello's Social Network", fontsize=20)
# The 'k' argument determines how spaced out the nodes will be from
# one another on the graph.
pos = nx.spring_layout(G, k=0.5)
nx.draw_networkx(
G,
pos=pos,
node_size=node_size,
node_color=node_color,
edge_color='gray', # change edge color
alpha=0.3, # make nodes more transparent to make labels clearer
font_size=14,
)
Explanation: We can graph this using matplotlib:
End of explanation
network_tab = Table()
network_tab.append_column(label="Characters", values=[c for c in sorted(cast_dict.keys())])
network_tab.show()
Explanation: Our graph, G, is a powerful object. We can calculate many of the standard network analysis statistics. There are various measures of centrality, many of which were referenced in the reading.
End of explanation
dc = [x[1] for x in sorted(nx.degree_centrality(G).items(), key=lambda x: x[0])]
network_tab.append_column(label="Degree Centrality", values=dc)
network_tab.show()
Explanation: Wikipedia defines "degree centrality":
Historically first and conceptually simplest is degree centrality, which is defined as the number of links incident upon a node (i.e., the number of ties that a node has).
End of explanation
bc = [x[1] for x in sorted(nx.betweenness_centrality(G).items(), key=lambda x: x[0])]
network_tab.append_column(label="Betweenness Centrality", values=bc)
network_tab.show()
Explanation: Wikipedia defines "betweenness centrality":
Betweenness is a centrality measure of a vertex within a graph (there is also edge betweenness, which is not discussed here). Betweenness centrality quantifies the number of times a node acts as a bridge along the shortest path between two other nodes.
End of explanation
ec = [x[1] for x in sorted(nx.eigenvector_centrality(G).items(), key=lambda x: x[0])]
network_tab.append_column(label="Eigenvector Centrality", values=ec)
network_tab.show()
Explanation: Wikipedia defines "eigenvector centrality":
Eigenvector centrality (also called eigencentrality) is a measure of the influence of a node in a network. It assigns relative scores to all nodes in the network based on the concept that connections to high-scoring nodes contribute more to the score of the node in question than equal connections to low-scoring nodes.
$x_v = \frac{1}{\lambda} \sum_{t \in M(v)}x_t = \frac{1}{\lambda} \sum_{t \in G} a_{v,t}x_t$
End of explanation
from networkx.readwrite import json_graph
import json
d3_data = json_graph.node_link_data(G)
d3_data
Explanation: Challenge
What is the overlap ((rank) correlation) between the three measurements presented above? What does that mean for the play?
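One possible way to check this (a sketch using scipy's rank correlation on the table columns built above):
```python
from itertools import combinations
from scipy.stats import spearmanr

cols = ["Degree Centrality", "Betweenness Centrality", "Eigenvector Centrality"]
for a, b in combinations(cols, 2):
    rho, p = spearmanr(network_tab[a], network_tab[b])
    print("%s vs %s: rho = %.3f" % (a, b, rho))
```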
Bonus: Making a prettier graph
matplotlib isn't always the most beautiful option. A popular way of visualizing networks is by using Javascript's D3 library. Luckily, networkx allows us to export the network information to JSON:
End of explanation
import re
with open('network.html', 'r') as f:
net_html = f.read()
pattern = re.compile(r'(<script type="application/json" id="net">)(\s*.*)')
net_html = net_html.replace(re.findall(pattern, net_html)[-1][-1].strip(), json.dumps(d3_data).strip())
with open('network.html', 'w') as f:
f.write(net_html)
Explanation: We can then add this to a D3 template:
End of explanation
from IPython.display import IFrame
IFrame('network.html', width=700, height=900)
Explanation: We'll then IFrame in the HTML file
End of explanation
def gini(array):
Calculate the Gini coefficient of a numpy array.
# https://github.com/oliviaguest/gini
array = np.sort(array) # values must be sorted
index = np.arange(1, array.shape[0] + 1) # index per array element
n = array.shape[0] # number of array elements
return ((np.sum((2 * index - n - 1) * array)) / (n * np.sum(array))) #Gini coefficient
Explanation: Gini Coefficient
Algee-Hewitt was calculating the gini coefficient of the eigenvector centralities. He essentially wanted to know whether importance in a network was evenly distributed, or concentrated in the hands of a few. The lower the gini coefficient, the more equal the distribution, the closer to 1, the closer one gets to complete inequality. I've found a function online that will calculate the gini coefficient for you!
End of explanation
np.concatenate((np.zeros(99), np.ones(1)))
Explanation: Just to demonstrate, let's make a very unequal array:
End of explanation
gini(np.concatenate((np.zeros(99), np.ones(1))))
Explanation: The gini coefficient should be close to 1:
End of explanation
gini(np.concatenate((np.zeros(50), np.ones(50))))
Explanation: What if we have half zeroes and half ones?
End of explanation
gini(np.ones(50))
Explanation: All ones?
End of explanation
import numpy as np
gini(network_tab['Eigenvector Centrality'])
Explanation: Now we can use the gini function on Othello to see how evenly distributed centrality is:
End of explanation
def percentage_top_quartile(character_table):
# YOUR CODE HERE
return percentage
percentage_top_quartile(network_tab['Eigenvector Centrality'])
Explanation: Great, but that's not terribly interesting itself, we want to see how it relates to other plays. We'll do that for homework.
First, let's write a function to calculate Algee-Hewitt's second measure: the percentage of characters in the top quartile of eigenvector centralities. You'll want to use the np.percentile function! (One possible solution is sketched below.)
Challenge
End of explanation
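One possible solution to the challenge above (a sketch; it assumes the column of eigenvector centralities is passed in, as in the call shown in the stub):
import numpy as np
def percentage_top_quartile(character_table):
    values = np.asarray(character_table)
    cutoff = np.percentile(values, 75)   # value below which 75% of the centralities fall
    return np.mean(values >= cutoff)     # fraction of characters in the top quartile
percentage_top_quartile(network_tab['Eigenvector Centrality'])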
!ls shakespeare_data/plays_xml/
Explanation: Homework
I've downloaded 40 other Shakespeare texts in the exact same XML structure.
End of explanation |
9,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 2 - Printing and manipulating text
We started the first week by printing Hello world (you can try it below). This taught us a number of things. It taught us about strings, functions, and statements. As we know, as biologists one of the primary entities that we deal with is the string, in the form of sequences, whether they be DNA, RNA, or amino acid sequences.
Step1: To python, a sequence is a string. A string of what? A string of characters. A string is an immutable (unchangeable) sequence of characters arranged in a specific order. What is a character? A character is a letter, number, or punctuation mark...anything that can be represented on a keyboard.
Thus Hello world is the string that we used in the print function of our Hello world program.
"Hello world"
What did we do with that string?
We "printed" or wrote the string to the terminal. The python command print is a function, a collection of python source code that has been written to perform a particular action. We call or use a function by typing its name, followed by an open and closed parenthesis. Functions always have parentheses! In the case of the print function you must also include a parameter or string to print.
Getting help
NOTE
Step2: Single or double quotes?
As we can see from the code below, python does not care if we use single or double quotes around our text.
Step3: Comments, or what is that text at the end of the statement above?
Comments are a way to include text in your code that is ignored by the interpreter. Comments are very helpful for understanding your code without interfering with its execution or logic.
Comments are preceded by a pound sign "#" and a space.
# This is a comment
Advanced
Step4: Special characters
The backslash, also called the escape character, enables us to use invisible or special characters in our python statements.
Print a new line character and python will go to the next line. Like this
Step5: Combining strings
Strings can be combined using the plus operator. We know what one plus one is and so does python. 1 + 1 equals two. Well, the plus operator can also work for strings. Try this below
Step6: Variables for strings
Thus far we have been working directly with strings or text. We can create a variable to store the text that we want.
message = "Hello world!"
We can then use that variable in our print statement
Step7: Variables as objects
In the cell above, the word "message" is a variable. It holds the string "Hello world!". From the python perspective, every variable is an object. In fact, everything in python is an object. What is an object? An object is a template or a cookie cutter that has certain characteristics, like strings, integers and floating point numbers. String objects have properties and methods, or built-in functions. We will look at a number of methods below.
variable_name_1 = "A value"
variable_name_2 = 10
Check and see what type of object something is by using the built-in function "type"
Step8: Utilizing the python str (string) methods
To see all available methods (functions) please look at the Python Standard Library Documentation String Methods
Methods available to all objects
Step9: slice [ i
Step10: len
To get the length or total count of the residues in our sequence use the len function
Step11: count
To count the number of times the nucleotide "A" occurs in our string we use the count function
Step12: String methods (that are particularly important in bioinformatics)
To see what methods or properties (object variables) are available, type the name of an object (usually a variable name), type a period "." afterwards and hit the tab key. If the variable has already been defined you will see what methods and properties are available.
message.<hit tab>
Concatenation
Like the plus sign (+), concatenation joins strings together. The concatenation symbol + will join two strings into a single string. Let's say you would like to add two DNA sequences together. You would do the following
Step13: Changing case
We can also change the case of a string using the built-in method names. Let's see how
Step14: Substring
One can extract a substring from a sequence as well using a built-in method. As we mentioned above, a string is a sequence or collection of characters (Unicode characters).
We use square brackets "[" and "]" to extract a subsequence.
Step15: Find
In addition to pulling out a subsequence, we can find whether a subsequence exists in a sequence.
str.find(sub[, start[, end]])
Return the lowest index in the string where substring sub is found within the slice s[start
Step16: Reversing
We can make use of a trick of the slicing capability to reverse a string. Use a -1 in the final position (the step) to reverse.
new_dna[ | Python Code:
print("Hello world")
Explanation: Week 2 - Printing and manipulating text
We started the first week by printing Hello world (you can try it below). This taught us a number of things. It taught us about strings, functions, and statements. As we know, as biologists one of the primary entities that we deal with is the string, in the form of sequences, whether they be DNA, RNA, or amino acid sequences.
End of explanation
help(print)
Explanation: To python, a sequence is a string. A string of what? A string of characters. A string is an immutable (unchangeable) sequence of characters arranged in a specific order. What is a character? A character is a letter, number, or punctuation mark...anything that can be represented on a keyboard.
Thus Hello world is the string that we used in the print function of our Hello world program.
"Hello world"
What did we do with that string?
We "printed" or wrote the string to the terminal. The python command print is a function, a collection of python source code that has been written to perform a particular action. We call or use a function by typing its name, followed by an open and closed parenthesis. Functions always have parentheses! In the case of the print function you must also include a parameter or string to print.
Getting help
NOTE: Would you like to get more information on print? Type
print?
or
help(print)
End of explanation
"Hello world" == 'Hello world'
print("Hello world")
print('Hello world')
print("Hello world") # What happen when you try to run this cell?
Explanation: Single or double quotes?
As we can see from the code below, python does not care if we use single or double quotes around our text.
End of explanation
print("This is a long python statemnt that nees to be wrapped.")
print("hello") # this is a test of
# carry over
Explanation: Comments, or what is that text at the end of the statement above?
Comments are a way to include text in your code that is ignored by the interpreter. Comments are very helpful for understanding your code without interfering with its execution or logic.
Comments are preceded by a pound sign "#" and a space.
# This is a comment
Advanced: Docstrings
Splitting a statement over two lines
Sometimes, in fact often, a python statement will be longer than one line on your screen. Good python practice declares that your programming line should be no longer than 80 characters. If a line of code is longer than 80 characters, you can wrap the python statement and add a backslash "\" to each line that is continued.
print("This is a long python statement that needs to be wrapped.")
print("This is a long python statement \
that needs to be wrapped.")
Try typing each or something similar below to test it out.
End of explanation
# Please type your code here:
print("Hello world\nThis is the date\nMy name\nreport title")
Explanation: Special characters
The backslash, also called the escape character, enables us to use invisible or special characters in our python statements.
Print a new line character and python will go to the next line. Like this:
print("Hello world\n!")
To see what special characters are available, see this tutorial page.
What happens when you try it below?
End of explanation
# Please type your code here:
Explanation: Combining strings
Strings can be combined using the plus operator. We know what one plus one is and so does python. 1 + 1 equals two. Well, the plus operator can also work for strings. Try this below:
print("Hello" + "World!")
What is the result? Is there anything wrong with it? If so, how do you fix it?
End of explanation
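One possible answer to the question above, for the exercise cell before this explanation (a sketch): the two words run together because + does not add a space, so concatenate one explicitly.
print("Hello" + "World!")        # prints HelloWorld! -- no space is added by +
print("Hello" + " " + "World!")  # prints Hello World!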
# Please type your code here:
message = "Hello world!"
print(message)
Explanation: Variables for strings
Thus far we have been working directly with strings or text. We can create a variable to store the text that we want.
message = "Hello world!"
We can then use that variable in our print statement:
print(message)
What happens when you run the statement above?
End of explanation
# Please type your code here:
variable_name_1 = "A test"
variable_name_1
Explanation: Variables as objects
In the cell above, the word "message" is a variable. It holds the string "Hello world!". From the python perspective, every variable is an object. In fact, everything in python is an object. What is an object? An object is a template or a cookie cutter that has certain characteristics, like strings, integers and floating point numbers. String objects have properties and methods, or built-in functions. We will look at a number of methods below.
variable_name_1 = "A value"
variable_name_2 = 10
Check and see what type of object something is by using the built-in function "type":
type(message)
End of explanation
# Please type your code here:
message
message.upper()
new_dna = 'atgtag'
Explanation: Utilizing the python str (string) methods
To see all available methods (functions) please look at the Python Standard Library Documentation String Methods
Methods available to all objects:
in
To check to see if a nucleotide (i.e. a character) is in our DNA sequence use the in operator.
'a' in new_dna # Returns True
'atg' in new_dna # Returns True
'aaa' in new_dna # Returns False
More generically, to check to see if a python object (character, string, number, etc) is in another python object (string, list, etc):
x in s # Return True if an item of s is equal to x, else False
x not in s False if an item of s is equal to x, else True
s + t the concatenation of s and t
s * n or n * s equivalent to adding s to itself n times
End of explanation
# Please type your code here:
new_dna
new_dna[0]
new_dna[::-1]
Explanation: slice [ i : j : k ]
To slice a character or subsequence out of a sequence, use square brackets ("[", "]").
new_dna[0]
new_dna[0:3]
NOTE: The first number is INCLUSIVE (included), while the second number is EXCLUSIVE (not included).
Generically - s[i] ith item of s, origin 0
s[i:j] slice of s from i to j
s[i:j:k] slice of s from i to j with step k
End of explanation
# Please type your code here:
len(new_dna)
Explanation: len
To get the length or total count of the residues in our sequence use the len function:
len(new_dna) length of new_dna
End of explanation
# Please type your code here:
new_dna.count('A')  # note: count() is case-sensitive and new_dna is lowercase, so counting 'a' may be intended
Explanation: count
To count the number of times the nucleotide "A" occurs in our string we use the count function:
new_dna.count('A') total number of occurrences of x in s
End of explanation
# Please type your code here:
Explanation: String methods (that are particularly important in bioinformatics)
To see what methods or properties (object variables) are available, type the name of an object (usually a variable name), type a period "." afterwards and hit the tab key. If the variable has already been defined you will see what methods and properties are available.
message.<hit tab>
Concatenation
Like the plus sign (+), concatenation joins strings together. The concatenation symbol + will join two strings into a single string. Let's say you would like to add two DNA sequences together. You would do the following:
dna1 = "atgaattgg"
dna2 = "ttaaggtag"
new_dna = dna1 + dna2
End of explanation
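A possible solution for the exercise cell above (the empty one before this explanation), reusing the two sequences from the text; any two strings behave the same way:
dna1 = "atgaattgg"
dna2 = "ttaaggtag"
new_dna = dna1 + dna2   # concatenation with +
print(new_dna)          # atgaattggttaaggtag
print(len(new_dna))     # 18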
# Please type your code here:
Explanation: Changing case
We can also change the case of a string using the built-in method names. Let's see how:
For uppercase, use the upper() method. In the documentation (above link) we see it listed as: str.upper()
new_dna.upper()
For lowercase, use the lower() method.
new_dna.lower()
End of explanation
# Please type your code here:
Explanation: Substring
One can extract a substring from a sequence as well using a built-in method. As we mentioned above, a string is a sequence or collection of characters (Unicode characters).
We use square brackets "[" and "]" to extract a subsequence.
End of explanation
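A possible solution for the substring exercise cell above (a sketch, assuming new_dna = 'atgtag' from the earlier cell):
new_dna[0:3]   # 'atg' -- characters 0, 1 and 2; the stop index 3 is not included
new_dna[3:]    # 'tag' -- from index 3 to the end of the string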
# Please type your code here:
new_dna.find('tag')
new_dna[3:]
Explanation: Find
In addition to pull out a subsequence, we can find if a subsequence exista in a sequence.
str.find(sub[, start[, end]])
Return the lowest index in the string where substring sub is found within the slice s[start:end].
Optional arguments start and end are interpreted as in slice notation. Return -1 if sub is not found.
Note: The find() method should be used only if you need to know the position of sub. To check if sub is a
substring or not, use the in operator:
'Py' in 'Python'
Find the position of the codon for methionine:
new_dna.find("atg")
Find the position of the stop codon:
new_dna.find("tag")
End of explanation
# Please type your code here:
Explanation: Reversing
We can make use of a trick of the slicing capability to reverse a string. Use a -1 in the final position (the step) to reverse.
new_dna[::-1]
End of explanation |
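A possible solution for the reversing exercise cell above (a sketch, assuming new_dna = 'atgtag'):
new_dna[::-1]   # 'gatgta' -- a step of -1 walks the string backwards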
9,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating synthetic images
Synthetic images are widely used for testing algorithms and for generating image patterns.
We will learn how to generate the pixel values of an image from a mathematical equation
in a very efficient way, without having to explicitly scan the pixels with a for loop.
The preferred way of creating synthetic images, when their equation is given, is
through functions that generate a matrix of coordinates. The two functions that
we will use in this course are indices and meshgrid.
Study the tutorial at
Step1: Image of the "saddle" function
The two-dimensional "saddle" function is given by the product of its coordinates r and c.
Note that, implemented this way, the Python/NumPy code can be kept
very close to the mathematical equation, shown below.
We will generate a saddle function where the row values are integers between
-75 and 75 and the column values are integers in the interval [-100,100] | Python Code:
import numpy as np
Explanation: Creating synthetic images
Synthetic images are widely used for testing algorithms and for generating image patterns.
We will learn how to generate the pixel values of an image from a mathematical equation
in a very efficient way, without having to explicitly scan the pixels with a for loop.
The preferred way of creating synthetic images, when their equation is given, is
through functions that generate a matrix of coordinates. The two functions that
we will use in this course are indices and meshgrid.
Study the tutorial at:
Indices e Meshgrid
It is essential for understanding the examples that follow.
End of explanation
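The tutorial above mentions both indices and meshgrid, while the cell below uses meshgrid. As a small added sketch (not part of the original notebook), np.indices builds the same kind of coordinate grids directly from a shape:
import numpy as np
# r varies along the rows, c along the columns, analogous to the meshgrid call below
r, c = np.indices((3, 4))
print(r)
print(c)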
r,c = np.meshgrid(np.arange(-75,75), np.arange(-100,100), indexing='ij')
f = r * c
%matplotlib inline
import matplotlib.pyplot as plt
plt.title('Ponto de sela')
plt.imshow(f, cmap='gray')
Explanation: Image of the "saddle" function
The two-dimensional "saddle" function is given by the product of its coordinates r and c.
Note that, implemented this way, the Python/NumPy code can be kept
very close to the mathematical equation, shown below.
We will generate a saddle function where the row values are integers between
-75 and 75 and the column values are integers in the interval [-100,100]:
$$ f(r,c) = r \ c $$
$$ \text{for} \ r \in [-75,75] $$
$$ c \in [-100,100]$$
In the following example the arange function is used to generate the coordinate vectors. To improve
the visualization, the ia636:iaisolines (iaisolines) function was used, which highlights the
pixels with equal values (isolines) of the generated image in a distinct color.
End of explanation |
9,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Lesson
Step2: Project 1 | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory
End of explanation
import numpy as np
bag_of_words = {}
pos_words = {}
neg_words = {}
for i in range(len(reviews)):
words = reviews[i].split(' ')
for word in words:
if word in bag_of_words.keys():
bag_of_words[word] += 1
else:
bag_of_words[word] = 1
pos_words[word] = 0
neg_words[word] = 0
if labels[i] == 'POSITIVE':
if word in pos_words.keys():
pos_words[word] += 1
elif labels[i] == 'NEGATIVE':
if word in neg_words.keys():
neg_words[word] += 1
words_pos_neg_ratio = []
for word in bag_of_words.keys():
if bag_of_words[word] > 500:
pos_neg_ratio = pos_words[word] / float(neg_words[word] + 1)
words_pos_neg_ratio.append((word, np.log(pos_neg_ratio)))
words_pos_neg_ratio = sorted(words_pos_neg_ratio, key=lambda x: x[1], reverse=True)
print('\nTop positive words: \n')
for i in range(10):
print(words_pos_neg_ratio[i][0],': ', round(words_pos_neg_ratio[i][1], 10), sep='')
print('\nTop negative words: \n')
for i in range(-1, -11, -1):
print(words_pos_neg_ratio[i][0],': ', round(words_pos_neg_ratio[i][1], 10), sep='')
Explanation: Project 1: Quick Theory Validation
End of explanation |
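As a compact cross-check of the validation above (an added sketch, not part of the original notebook; it reuses the reviews and labels lists and the same +1 smoothing in the denominator), collections.Counter builds the same positive/negative counts:
from collections import Counter
positive_counts = Counter()
negative_counts = Counter()
for review, label in zip(reviews, labels):
    counts = positive_counts if label == 'POSITIVE' else negative_counts
    counts.update(review.split(' '))
total_counts = positive_counts + negative_counts
ratios = {word: np.log(positive_counts[word] / float(negative_counts[word] + 1))
          for word, count in total_counts.items() if count > 500}
print(sorted(ratios.items(), key=lambda x: x[1], reverse=True)[:10])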
9,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of the collected data
Using IPython to analyze and display the data collected during production. An expert-system controller is implemented. The data analyzed are from August 11, 2015.
The experiment data
Step1: We plot both diameters and the puller speed on the same graph
Step2: In the boxplot, we can see that most of the data lie above the mean (first quartile). We will try to lower that percentage. The first approach we will take is to make larger increments when raising the speed in the stretches where the diameter is between $1.80mm$ and $1.75 mm$ (case 5); there we will apply increments of $d_v2$ instead of $d_v1$
Comparison of Diameter X versus Diameter Y to see the filament ratio
Step3: Data filtering
Samples with $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out of the collected samples (the code keeps only rows where both diameters are at least 0.9).
Step4: Plotting X/Y
Step5: We analyze the ratio data
Step6: Quality limits
We calculate the number of times the quality limits are crossed.
$Th^+ = 1.85$ and $Th^- = 1.65$ | Python Code:
# Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the csv file with the sample data
datos = pd.read_csv('1119703.CSV')
%pylab inline
# Store in a list the columns of the file that we will work with
columns = ['Diametro X', 'RPM TRAC']
# Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
Explanation: Analysis of the collected data
Using IPython to analyze and display the data collected during production. An expert-system controller is implemented. The data analyzed are from August 11, 2015.
The experiment data:
* Start time: 14:27
* End time: 15:08
* Extruded filament: 537 cm
* $T: 150ºC$
* $V_{min}$ puller: $1.5 mm/s$
* $V_{max}$ puller: $3.4 mm/s$
* The speed increments in the expert-system rules are the same.
End of explanation
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: We plot both diameters and the puller speed on the same graph
End of explanation
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Explanation: In the boxplot, we can see that most of the data lie above the mean (first quartile). We will try to lower that percentage. The first approach we will take is to make larger increments when raising the speed in the stretches where the diameter is between $1.80mm$ and $1.75 mm$ (case 5); there we will apply increments of $d_v2$ instead of $d_v1$
Comparison of Diameter X versus Diameter Y to see the filament ratio
End of explanation
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Data filtering
Samples with $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out of the collected samples (the cell above keeps only rows where both diameters are at least 0.9).
End of explanation
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Explanation: Plotting X/Y
End of explanation
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Explanation: We analyze the ratio data
End of explanation
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
Explanation: Quality limits
We calculate the number of times the quality limits are crossed.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation |
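To put that count in context, a small added sketch (not in the original notebook) that expresses the violations as a percentage of all collected samples, reusing the datos and data_violations frames from the cells above:
violation_fraction = len(data_violations) / float(len(datos))
print('Samples outside the quality limits: {:.1f}%'.format(100 * violation_fraction))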
9,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some utility functions
Step1: Load in the mnist dataset
Step2: Simple logistic regression a' la sklearn
Let's set a baseline with a simple logistic regression, omitting regularization. Just to see where we are starting from.
Step3: 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes.
Building the model
Step4: Training the model
Step5: Same network, but with dropout and l2 regularization
Step6: The accuracy measures with different dropout tactics
Step7: Simple convolutional network, stride=2
Step8: Buffed up convolutional network
max pooling
dropouts
Some helper functions | Python Code:
# Imports assumed from an earlier cell of the original notebook (needed by the cells below)
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from tensorflow.examples.tutorials.mnist import input_data

def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
# Reformat the dataset for the convolutional networks
def reformat(dataset):
dataset = dataset.reshape((-1, image_size, image_size, num_channels)).astype(np.float32)
return dataset
Explanation: Some utility functions
End of explanation
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# The mnist images have a dimension of 28*28.
image_size = 28
# There are 10 labels.
num_labels = 10
train_dataset = mnist.train.images
train_labels = mnist.train.labels
perm = np.random.permutation(mnist.test.images.shape[0])
split_point = int(mnist.test.images.shape[0] * 0.1)
valid_dataset, test_dataset = mnist.test.images[:split_point], mnist.test.images[split_point:]
valid_labels, test_labels = mnist.test.labels[:split_point], mnist.test.labels[split_point:]
Explanation: Load in the mnist dataset
End of explanation
train_labels_not_hot = np.nonzero(mnist.train.labels)[1]
test_labels_not_hot = np.nonzero(mnist.test.labels[split_point:])[1]
lr = LogisticRegression()
lr.fit(train_dataset, train_labels_not_hot)
lr.score(test_dataset, test_labels_not_hot)
Explanation: Simple logistic regression a' la sklearn
Let's set a baseline with a simple logistic regression, omitting regularization. Just to see where we are starting from.
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
with tf.name_scope('input'):
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
with tf.name_scope('hidden'):
weights_hidden = tf.Variable(tf.truncated_normal([image_size * image_size, 1024], stddev=0.1),
name='weights')
biases_hidden = tf.Variable(tf.constant(0.1, shape=[1024]), name='biases')
relu_output = tf.nn.relu(tf.matmul(tf_train_dataset, weights_hidden) + biases_hidden)
with tf.name_scope('output'):
weights_output = tf.Variable(tf.truncated_normal([1024, num_labels], stddev=0.1), name='weights')
biases_output = tf.Variable(tf.constant(0.1, shape=[num_labels]), name='biases')
logits = tf.matmul(relu_output, weights_output) + biases_output
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(
tf.matmul(tf_valid_dataset, weights_hidden) +
biases_hidden),
weights_output) +
biases_output)
test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(
tf.matmul(tf_test_dataset, weights_hidden) +
biases_hidden),
weights_output) +
biases_output)
Explanation: 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes.
Building the model
End of explanation
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data,
tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction],
feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
# Merge all the summaries and write them out to /tmp/mnist_logs (by default)
merged = tf.merge_all_summaries()
train_writer = tf.train.SummaryWriter('./train',
session.graph)
test_writer = tf.train.SummaryWriter('./test')
Explanation: Training the model
End of explanation
batch_size = 128
beta = 0.001
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
with tf.name_scope('input'):
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
keep_prob = tf.placeholder(tf.float32)
with tf.name_scope('hidden'):
weights_hidden = tf.Variable(tf.truncated_normal([image_size * image_size, 1024], stddev=0.1),
name='weights')
weights_hidden_dropped = tf.nn.dropout(weights_hidden, keep_prob)
biases_hidden = tf.Variable(tf.constant(0.1, shape=[1024]), name='biases')
relu_output = tf.nn.relu(tf.matmul(tf_train_dataset, weights_hidden_dropped) + biases_hidden)
with tf.name_scope('output'):
weights_output = tf.Variable(tf.truncated_normal([1024, num_labels], stddev=0.1), name='weights')
weights_output_dropped = tf.nn.dropout(weights_output, keep_prob)
biases_output = tf.Variable(tf.constant(0.1, shape=[num_labels]), name='biases')
logits = tf.matmul(relu_output, weights_output_dropped) + biases_output
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
loss = tf.reduce_mean( loss + beta * tf.nn.l2_loss(weights_output_dropped))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(
tf.nn.relu(tf.matmul(
tf_valid_dataset, weights_hidden) + biases_hidden),
weights_output) + biases_output)
test_prediction = tf.nn.softmax(
tf.matmul(
tf.nn.relu(tf.matmul(
tf_test_dataset, weights_hidden) + biases_hidden),
weights_output) + biases_output)
num_steps = 3001
accuracy_val_nn_l2 = []  # collect the test accuracy for each keep_prob value (used in the append below)
for kp in np.arange(0.5,1,0.1):
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size) % 10
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {
tf_train_dataset : batch_data,
tf_train_labels : batch_labels,
keep_prob: kp}
_, l, predictions = session.run([optimizer, loss, train_prediction],
feed_dict=feed_dict)
print("Keep prob: %s Test accuracy: %.1f%%" % (kp, accuracy(test_prediction.eval(), test_labels)))
accuracy_val_nn_l2.append(accuracy(test_prediction.eval(), test_labels))
Explanation: Same network, but with dropout and l2 regularization
End of explanation
num_channels = 1
batch_size = 16
patch_size = 5
depth = 32
num_hidden = 64
num_channels = 1
train_dataset_conv = reformat(train_dataset)
valid_dataset_conv = reformat(valid_dataset)
test_dataset_conv = reformat(test_dataset)
print(train_dataset_conv.shape, train_labels.shape)
print(valid_dataset_conv.shape, valid_labels.shape)
print(test_dataset_conv.shape, test_labels.shape)
Explanation: The accuracy measures with different dropout tactics:
Dropout on both layers results:
Initialized
Keep prob: 0.5 Test accuracy: 75.3%
Initialized
Keep prob: 0.6 Test accuracy: 76.7%
Initialized
Keep prob: 0.7 Test accuracy: 76.8%
Initialized
Keep prob: 0.8 Test accuracy: 76.4%
Initialized
Keep prob: 0.9 Test accuracy: 74.1%
Dropout on both layers plus l2 reguralization with a 0.0001 beta:
Initialized
Keep prob: 0.5 Test accuracy: 75.9%
Initialized
Keep prob: 0.6 Test accuracy: 76.0%
Initialized
Keep prob: 0.7 Test accuracy: 75.8%
Initialized
Keep prob: 0.8 Test accuracy: 75.3%
Initialized
Keep prob: 0.9 Test accuracy: 75.5%
Convolutional Part
Prepare data and variables for convolutions
End of explanation
depth = 16
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset_conv)
tf_test_dataset = tf.constant(test_dataset_conv)
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal([patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(tf.truncated_normal([patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
layer3_weights = tf.Variable(tf.truncated_normal([image_size // 4 * image_size // 4 * depth, num_hidden],
stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal([num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
def model(data):
conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer1_biases)
conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer2_biases)
shape = hidden.get_shape().as_list()
reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
# Training computation.
logits = model(tf_train_dataset)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 1001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset_conv[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 50 == 0):
print('Minibatch loss at step %d: %f' % (step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation accuracy: %.1f%%' % accuracy(valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Simple convolutional network, stride=2
End of explanation
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
depth = 32
graph = tf.Graph()
with graph.as_default():
# Placeholders
keep_prob = tf.placeholder(tf.float32)
# Input data.
tf_train_batch = tf.placeholder(tf.float32, shape=(None, image_size, image_size, num_channels))
# The None at the shape argument means that the dimension is not defined,
tf_train_labels = tf.placeholder(tf.float32, shape=(None, num_labels))
# Constants
tf_valid_dataset = tf.constant(valid_dataset_conv)
tf_test_dataset = tf.constant(test_dataset_conv)
# Variables.
h_conv1_weights = weight_variable([patch_size, patch_size, num_channels, depth])
h_conv1_biases = bias_variable([depth])
h_conv2_weights = weight_variable([patch_size, patch_size, depth, depth * 2])
h_conv2_biases = bias_variable([depth * 2])
conv_image_size = image_size // 4
fc1_weights = weight_variable([conv_image_size * conv_image_size * depth * 2, num_hidden])
fc1_biases = bias_variable([num_hidden])
output_softmax_weights = weight_variable([num_hidden, num_labels])
output_softmax_biases = bias_variable([num_labels])
#Define the model:
# First layer, patches of 5x5 into 32 features
h_conv1 = tf.nn.relu(conv2d(tf_train_batch, h_conv1_weights) + h_conv1_biases)
h_pool1 = max_pool_2x2(h_conv1)
# Second layer, patches of 5x5 into 64 features
h_conv2 = tf.nn.relu(conv2d(h_pool1, h_conv2_weights) + h_conv2_biases)
h_pool2 = max_pool_2x2(h_conv2)
# Reshape into the densely connected layer
h_pool2_flat = tf.reshape(h_pool2, [-1, conv_image_size * conv_image_size * depth * 2])
# Define the fully connected layer
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, fc1_weights) + fc1_biases)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Readout layer
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, output_softmax_weights) + output_softmax_biases)
with tf.Session(graph=graph) as sess:
# Training computation.
cross_entropy = tf.reduce_mean(-tf.reduce_sum(tf_train_labels * tf.log(y_conv), reduction_indices=[1]))
# Optimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# These two lines are measure the accuracy of our model.
# y_conv is a softmax output, the highest entry is the most probable according to our model
# (e.g.: [0.7, 0.2, 0.5, 0.5])
# tf_train_labels are the original labels for the training set.
# (eg.: [0, 0, 0, 1])
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(tf_train_labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Initialize the session variables.
sess.run(tf.initialize_all_variables())
for step in range(3001):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# I should randomize this part a bit more to reduce the possibility of reoccuring batches.
batch_data = train_dataset_conv[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
if step % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={tf_train_batch: batch_data,
tf_train_labels: batch_labels,
keep_prob: 1.0})
print("step %d, training accuracy %g" % (step, train_accuracy))
train_step.run(feed_dict={tf_train_batch: batch_data,
tf_train_labels: batch_labels,
keep_prob: 0.5})
print("test accuracy %g" % accuracy.eval(feed_dict={tf_train_batch: test_dataset_conv,
tf_train_labels: test_labels,
keep_prob: 1.0}))
Explanation: Buffed up convolutional network
max pooling
dropouts
Some helper functions
End of explanation |
9,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dependencies
Step1: Loading Data
First, we want to create our word vectors. For simplicity, we're going to be using a pretrained model.
As one of the biggest players in the ML game, Google was able to train a Word2Vec model on a massive Google News dataset that contained over 100 billion different words! From that model, Google was able to create 3 million word vectors, each with a dimensionality of 300.
In an ideal scenario, we'd use those vectors, but since the word vectors matrix is quite large (3.6 GB!), we'll be using a much more manageable matrix that is trained using GloVe, a similar word vector generation model. The matrix will contain 400,000 word vectors, each with a dimensionality of 50.
We're going to be importing two different data structures, one will be a Python list with the 400,000 words, and one will be a 400,000 x 50 dimensional embedding matrix that holds all of the word vector values.
Step2: We can search our word list for a word like "baseball", and then access its corresponding vector through the embedding matrix.
Step3: Now that we have our vectors, our first step is taking an input sentence and then constructing the its vector representation. Let's say that we have the input sentence "I thought the movie was incredible and inspiring". In order to get the word vectors, we can use Tensorflow's embedding lookup function. This function takes in two arguments, one for the embedding matrix (the wordVectors matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set. This is basically just the row index of each of the words. Let's look at a quick example to make this concrete.
Step4: TODO### Insert image
The 10 x 50 output should contain the 50 dimensional word vectors for each of the 10 words in the sequence.
Step5: Before creating the ids matrix for the whole training set, let’s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have.
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. Each of the reviews is stored in a txt file that we need to parse through. The positive reviews are stored in one directory and the negative reviews are stored in another. The following piece of code will determine total and average number of words in each review.
Step6: We can also use the Matplot library to visualize this data in a histogram format.
Step7: From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set.
Step8: Data
Step9: Parameters
Step10: Separating train and test data
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews.
Let's first give a positive label [1, 0] to the first 12500 reviews, and a negative label [0, 1] to the other reviews.
Step11: Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing.
Step12: Verifying if the train and test data have enough positive and negative examples
Step13: Input functions
Step14: Creating the Estimator model
Step16: Create and Run Experiment
Step17: Making Predictions
First let's generate our own sentences to see how the model classifies them.
Step18: Now, let's generate predictions for the sentences | Python Code:
# Tensorflow
import tensorflow as tf
print('Tested with TensorFlow 1.2.0')
print('Your TensorFlow version:', tf.__version__)
# Feeding function for enqueue data
from tensorflow.python.estimator.inputs.queues import feeding_functions as ff
# Rnn common functions
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
# Model builder
from tensorflow.python.estimator import model_fn as model_fn_lib
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Helpers for data processing
import pandas as pd
import numpy as np
import argparse
import random
Explanation: Dependencies
End of explanation
# data from: http://ai.stanford.edu/~amaas/data/sentiment/
TRAIN_INPUT = 'data/train.csv'
TEST_INPUT = 'data/test.csv'
# data manually generated
MY_TEST_INPUT = 'data/mytest.csv'
# wordtovec
# https://nlp.stanford.edu/projects/glove/
# the matrix will contain 400,000 word vectors, each with a dimensionality of 50.
word_list = np.load('word_list.npy')
word_list = word_list.tolist() # originally loaded as numpy array
word_list = [word.decode('UTF-8') for word in word_list] # encode words as UTF-8
print('Loaded the word list, length:', len(word_list))
word_vector = np.load('word_vector.npy')
print ('Loaded the word vector, shape:', word_vector.shape)
Explanation: Loading Data
First, we want to create our word vectors. For simplicity, we're going to be using a pretrained model.
As one of the biggest players in the ML game, Google was able to train a Word2Vec model on a massive Google News dataset that contained over 100 billion different words! From that model, Google was able to create 3 million word vectors, each with a dimensionality of 300.
In an ideal scenario, we'd use those vectors, but since the word vectors matrix is quite large (3.6 GB!), we'll be using a much more manageable matrix that is trained using GloVe, a similar word vector generation model. The matrix will contain 400,000 word vectors, each with a dimensionality of 50.
We're going to be importing two different data structures, one will be a Python list with the 400,000 words, and one will be a 400,000 x 50 dimensional embedding matrix that holds all of the word vector values.
End of explanation
baseball_index = word_list.index('baseball')
print('Example: baseball')
print(word_vector[baseball_index])
Explanation: We can search our word list for a word like "baseball", and then access its corresponding vector through the embedding matrix.
End of explanation
max_seq_length = 10 # maximum length of sentence
num_dims = 50 # dimensions for each word vector
first_sentence = np.zeros((max_seq_length), dtype='int32')
first_sentence[0] = word_list.index("i")
first_sentence[1] = word_list.index("thought")
first_sentence[2] = word_list.index("the")
first_sentence[3] = word_list.index("movie")
first_sentence[4] = word_list.index("was")
first_sentence[5] = word_list.index("incredible")
first_sentence[6] = word_list.index("and")
first_sentence[7] = word_list.index("inspiring")
# first_sentence[8] = 0
# first_sentence[9] = 0
print(first_sentence.shape)
print(first_sentence) # shows the row index for each word
Explanation: Now that we have our vectors, our first step is taking an input sentence and then constructing the its vector representation. Let's say that we have the input sentence "I thought the movie was incredible and inspiring". In order to get the word vectors, we can use Tensorflow's embedding lookup function. This function takes in two arguments, one for the embedding matrix (the wordVectors matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set. This is basically just the row index of each of the words. Let's look at a quick example to make this concrete.
End of explanation
with tf.Session() as sess:
print(tf.nn.embedding_lookup(word_vector, first_sentence).eval().shape)
Explanation: TODO### Insert image
The 10 x 50 output should contain the 50 dimensional word vectors for each of the 10 words in the sequence.
End of explanation
from os import listdir
from os.path import isfile, join
positiveFiles = ['positiveReviews/' + f for f in listdir('positiveReviews/') if isfile(join('positiveReviews/', f))]
negativeFiles = ['negativeReviews/' + f for f in listdir('negativeReviews/') if isfile(join('negativeReviews/', f))]
numWords = []
for pf in positiveFiles:
with open(pf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Positive files finished')
for nf in negativeFiles:
with open(nf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Negative files finished')
numFiles = len(numWords)
print('The total number of files is', numFiles)
print('The total number of words in the files is', sum(numWords))
print('The average number of words in the files is', sum(numWords)/len(numWords))
Explanation: Before creating the ids matrix for the whole training set, let’s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have.
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. Each of the reviews is stored in a txt file that we need to parse through. The positive reviews are stored in one directory and the negative reviews are stored in another. The following piece of code will determine total and average number of words in each review.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(numWords, 50)
plt.xlabel('Sequence Length')
plt.ylabel('Frequency')
plt.axis([0, 1200, 0, 8000])
plt.show()
Explanation: We can also use the Matplot library to visualize this data in a histogram format.
End of explanation
max_seq_len = 250
Explanation: From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set.
End of explanation
ids_matrix = np.load('ids_matrix.npy').tolist()
Explanation: Data
End of explanation
# Parameters for training
STEPS = 15000
BATCH_SIZE = 32
# Parameters for data processing
REVIEW_KEY = 'review'
SEQUENCE_LENGTH_KEY = 'sequence_length'
Explanation: Parameters
End of explanation
POSITIVE_REVIEWS = 12500
# copying sequences
data_sequences = [np.asarray(v, dtype=np.int32) for v in ids_matrix]
# generating labels
data_labels = [[1, 0] if i < POSITIVE_REVIEWS else [0, 1] for i in range(len(ids_matrix))]
# also creating a length column, this will be used by the Dynamic RNN
# see more about it here: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
data_length = [max_seq_len for i in range(len(ids_matrix))]
Explanation: Separating train and test data
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews.
Let's first give a positive label [1, 0] to the first 12500 reviews, and a negative label [0, 1] to the other reviews.
End of explanation
data = list(zip(data_sequences, data_labels, data_length))
random.shuffle(data) # shuffle
data = np.asarray(data)
# separating train and test data
limit = int(len(data) * 0.9)
train_data = data[:limit]
test_data = data[limit:]
Explanation: Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing.
End of explanation
LABEL_INDEX = 1
def _number_of_pos_labels(df):
pos_labels = 0
for value in df:
if value[LABEL_INDEX] == [1, 0]:
pos_labels += 1
return pos_labels
pos_labels_train = _number_of_pos_labels(train_data)
total_labels_train = len(train_data)
pos_labels_test = _number_of_pos_labels(test_data)
total_labels_test = len(test_data)
print('Total number of positive labels:', pos_labels_train + pos_labels_test)
print('Proportion of positive labels on the Train data:', pos_labels_train/total_labels_train)
print('Proportion of positive labels on the Test data:', pos_labels_test/total_labels_test)
Explanation: Verifying if the train and test data have enough positive and negative examples
End of explanation
def get_input_fn(df, batch_size, num_epochs=1, shuffle=True):
def input_fn():
sequences = np.asarray([v for v in df[:,0]], dtype=np.int32)
labels = np.asarray([v for v in df[:,1]], dtype=np.int32)
length = np.asarray(df[:,2], dtype=np.int32)
# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data
dataset = (
tf.contrib.data.Dataset.from_tensor_slices((sequences, labels, length)) # reading data from memory
.repeat(num_epochs) # repeat dataset the number of epochs
.batch(batch_size)
)
# for our "manual" test we don't want to shuffle the data
if shuffle:
dataset = dataset.shuffle(buffer_size=100000)
# create iterator
review, label, length = dataset.make_one_shot_iterator().get_next()
features = {
REVIEW_KEY: review,
SEQUENCE_LENGTH_KEY: length,
}
return features, label
return input_fn
features, label = get_input_fn(test_data, 2, shuffle=False)()
with tf.Session() as sess:
items = sess.run(features)
print(items[REVIEW_KEY])
print(sess.run(label))
train_input_fn = get_input_fn(train_data, BATCH_SIZE, None)
test_input_fn = get_input_fn(test_data, BATCH_SIZE)
Explanation: Input functions
End of explanation
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01,
embed_dim=128):
def model_fn(features, labels, mode):
review = features[REVIEW_KEY]
sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], tf.int32)
# Creating embedding
data = tf.Variable(tf.zeros([BATCH_SIZE, max_seq_len, 50]),dtype=tf.float32)
data = tf.nn.embedding_lookup(word_vector, review)
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# Runs the RNN model dynamically
# more about it at:
# https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs, sequence_length)
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(
last_activations, units, activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
predictions_softmax = tf.nn.softmax(predictions)
loss = None
train_op = None
eval_op = None
preds_op = {
'prediction': predictions_softmax,
'label': labels
}
if mode == tf.estimator.ModeKeys.EVAL:
eval_op = {
"accuracy": tf.metrics.accuracy(
tf.argmax(input=predictions_softmax, axis=1),
tf.argmax(input=labels, axis=1))
}
if mode != tf.estimator.ModeKeys.PREDICT:
loss = tf.losses.softmax_cross_entropy(labels, predictions)
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss,
tf.contrib.framework.get_global_step(),
optimizer=optimizer,
learning_rate=learning_rate)
return model_fn_lib.EstimatorSpec(mode,
predictions=predictions_softmax,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_op)
return model_fn
model_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers
label_dimension=2, # since are just 2 classes
dnn_layer_sizes=[128, 64], # size of units in the dense layers on top of the RNN
optimizer='Adam',
learning_rate=0.001,
embed_dim=512)
Explanation: Creating the Estimator model
End of explanation
# create experiment
def generate_experiment_fn():
    """Create an experiment function given hyperparameters.
    Returns:
    A function (output_dir) -> Experiment where output_dir is a string
    representing the location of summaries, checkpoints, and exports.
    This function is used by learn_runner to create an Experiment which
    executes model code provided in the form of an Estimator and
    input functions.
    All listed arguments in the outer function are used to create an
    Estimator, and input functions (training, evaluation, serving).
    Unlisted args are passed through to Experiment.
    """
def _experiment_fn(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=test_input_fn,
train_steps=STEPS
)
return _experiment_fn
# run experiment
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir='testing2'))
Explanation: Create and Run Experiment
End of explanation
def string_to_array(s, separator=' '):
return s.split(separator)
def generate_data_row(sentence, label, max_length):
sequence = np.zeros((max_length), dtype='int32')
for i, word in enumerate(string_to_array(sentence)):
sequence[i] = word_list.index(word)
return sequence, label, max_length
def generate_data(sentences, labels, max_length):
data = []
for s, l in zip(sentences, labels):
data.append(generate_data_row(s, l, max_length))
return np.asarray(data)
sentences = ['i thought the movie was incredible and inspiring',
'this is a great movie',
'this is a good movie but isnt the best',
'it was fine i guess',
'it was definitely bad',
'its not that bad',
'its not that bad i think its a good movie',
'its not bad i think its a good movie']
labels = [[1, 0],
[1, 0],
[1, 0],
[0, 1],
[0, 1],
[1, 0],
[1, 0],
[1, 0]] # [1, 0]: positive, [0, 1]: negative
my_test_data = generate_data(sentences, labels, 10)
Explanation: Making Predictions
First let's generate our own sentences to see how the model classifies them.
End of explanation
preds = estimator.predict(input_fn=get_input_fn(my_test_data, 1, 1, shuffle=False))
print()
for p, s in zip(preds, sentences):
print('sentence:', s)
print('good review:', p[0], 'bad review:', p[1])
print('-' * 10)
Explanation: Now, let's generate predictions for the sentences
End of explanation |
9,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solvers
Step1: General "Fitting" Workflow
PHOEBE includes wrappers around several different inverse-problem "algorithms" with a common interface. These available "algorithms" are divided into three categories
Step2: Solving an eclipsing binary is a very time-intensive task (both for you as well as your computer). There is no one-size-fits-all recipe to follow, but in general you might find the following workflow useful
Step3: Adding Solver Options
As there are quite a few different solvers implemented in PHOEBE and each has its own available options, we won't get into the details here. See LC estimators, RV estimators, Nelder-Mead Optimizer, and emcee sampler for details on some of the most commonly-used solvers. The solver API docs or solver example scripts may also help.
As you may expect, to use a solver you must first call b.add_solver, set the desired options, and then call b.run_solver (or b.export_solver and b.import_solution).
Step4: In addition to the solver API docs, remember that each parameter has a description and possibly a set of available choices (if its a ChoiceParameter or SelectParameter).
Step5: run_solver
b.run_solver (or b.export_solver and b.import_solution) allows optionally setting a solution tag (if not provided, one will be created automatically), just as b.run_compute allows setting a model tag.
Step6: In many cases, the solution itself is plottable - showing some sort of diagnostic figures. In some cases, such as sampler.emcee or sampler.dynesty, there are several different diagnostic figures available which can be chosen by passing the available options to style.
Step7: The proposed values can be viewed via b.adopt_solution.
By passing trial_run=True the proposed changed parameters will be shown, but not changed in the bundle itself.
Step8: Otherwise, the changes will be made and all changed parameters (including those changed via constraints) will be returned.
Step9: The Merit Function
Both optimizers and samplers require running a forward model and use a merit function to compare the synthetic model to the observational data. This merit function is described in detail in the 2.3 release paper (Conroy+ 2020).
Several bundle methods allow for accessing the values used in the merit function
Step10: Now we'll look at the affect of priors_combine on the resulting priors distributions that would be sent to the merit function. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger()
Explanation: Solvers: The Inverse Problem
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
print(phoebe.list_available_solvers())
Explanation: General "Fitting" Workflow
PHOEBE includes wrappers around several different inverse-problem "algorithms" with a common interface. These available "algorithms" are divided into three categories:
estimators: provides proposed values for a number of parameters from the datasets as input alone, not requiring full forward-models via run_compute.
optimizers: runs off-the-shelf optimizers to attempt to find the local (or global) solution.
samplers: samples the local parameter space to estimate uncertainties and correlations.
To see the currently implemented set of solvers, we can call phoebe.list_available_solvers
End of explanation
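# A quick illustration (an added sketch, not from the original notebook) of the
# three categories described above: each solver name returned by
# phoebe.list_available_solvers() is prefixed by its category.
from collections import defaultdict
solvers_by_category = defaultdict(list)
for name in phoebe.list_available_solvers():
    category, _, algorithm = name.partition('.')
    solvers_by_category[category].append(algorithm)
for category, algorithms in sorted(solvers_by_category.items()):
    print(category, '->', algorithms)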
b = phoebe.default_binary()
b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101))
b.run_compute(irrad_method='none')
times = b.get_value('times', context='model')
fluxes = b.get_value('fluxes', context='model') + np.random.normal(size=times.shape) * 0.01
sigmas = np.ones_like(times) * 0.02
b = phoebe.default_binary()
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=np.full_like(fluxes, fill_value=0.1))
Explanation: Solving an eclipsing binary is a very time-intensive task (both for you as well as your computer). There is no one-size-fits-all recipe to follow, but in general you might find the following workflow useful:
Create a bundle with the appropriate configuration (single star, detached binary, semi-detached, contact, etc).
Attach observational datasets
Flip constraints as necessary to parameterize the system in the way that makes sense for any information you know in advance, types of data, and scientific goals. For example: if you have an SB2 system with RVs, it might make sense to reparameterize to "fit" for asini instead of sma and incl.
Manually set known or approximate values for parameters wherever possible.
Run the appropriate estimators, checking to see if the proposed values make sense before adopting them.
Try to optimize the forward model. See which expensive effects can be disabled without affecting the synthetic model (to some precision tolerance). Make sure to revisit these assumptions as optimizers may move the system to different areas of parameter space where they are no longer valid.
Run optimizers to find (what you hope and assume to be) the global solution. Start with just a few parameters that are most sensitive to the remaining residuals and add more until the residuals are flat (no systematics). Check all read-only constrained parameters to make sure that they make sense, are consistent with known information, and are physical.
Run samplers around the global solution found by the optimizers to explore that local parameter space and the correlations between parameters. Check for convergence before interpreting the resulting posteriors.
For the sake of a simple crude example, we'll just use the synthetic light curve of a default binary with a bit of noise as our "observations". See the inverse problem example scripts for more realistic examples.
End of explanation
b.add_solver('estimator.lc_geometry', solver='my_lcgeom_solver')
print(b.get_solver(solver='my_lcgeom_solver'))
Explanation: Adding Solver Options
As there are quite a few different solvers implemented in PHOEBE and each has its own available options, we won't get into the details here. See LC estimators, RV estimators, Nelder-Mead Optimizer, and emcee sampler for details on some of the most commonly-used solvers. The solver API docs or solver example scripts may also help.
As you may expect, to use a solver you must first call b.add_solver, set the desired options, and then call b.run_solver (or b.export_solver and b.import_solution).
End of explanation
print(b.get_parameter('expose_model').description)
print(b.get_parameter('lc_datasets').description)
print(b.get_parameter('lc_datasets').choices)
Explanation: In addition to the solver API docs, remember that each parameter has a description and possibly a set of available choices (if its a ChoiceParameter or SelectParameter).
End of explanation
b.run_solver(solver='my_lcgeom_solver', solution='my_lcgeom_solution')
Explanation: run_solver
b.run_solver (or b.export_solver and b.import_solution) allows optionally setting a solution tag (if not provided, one will be created automatically), just as b.run_compute allows setting a model tag.
End of explanation
_ = b.plot(solution='my_lcgeom_solution', show=True)
Explanation: In many cases, the solution itself is plottable - showing some sort of diagnostic figures. In some cases, such as sampler.emcee or sampler.dynesty, there are several different diagnostic figures available which can be chosen by passing the available options to style.
End of explanation
print(b.adopt_solution(trial_run=True))
Explanation: The proposed values can be viewed via b.adopt_solution.
By passing trial_run=True the proposed changed parameters will be shown, but not changed in the bundle itself.
End of explanation
print(b.adopt_solution())
Explanation: Otherwise, the changes will be made and all changed parameters (including those changed via constraints) will be returned.
End of explanation
b.add_distribution('teff@primary', phoebe.gaussian(6000,100), distribution='mydist01')
b.add_distribution('teff@secondary', phoebe.gaussian(5500,600), distribution='mydist01')
b.add_distribution('teff@primary', phoebe.uniform(5800,6200), distribution='mydist02')
b.add_solver('sampler.emcee', priors=['mydist01', 'mydist02'], solver='myemceesolver')
print(b.filter(qualifier='prior*'))
Explanation: The Merit Function
Both optimizers and samplers require running a forward model and use a merit function to compare the synthetic model to the observational data. This merit function is described in detail in the 2.3 release paper (Conroy+ 2020).
Several bundle methods allow for accessing the values used in the merit function:
b.calculate_residuals
b.calculate_chi2
b.calculate_lnlikelihood
b.calculate_lnp
The log-probability used as the merit function within optimizers and samplers is defined as calculate_lnp(priors, combine=priors_combine) + calculate_lnlikelihood.
To see the effect of priors_combine, we can pass the solver tag directly to b.get_distribution_collection, b.plot_distribution_collection, or b.calculate_lnp.
End of explanation
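# A minimal sketch (not from the original notebook) of how these pieces combine:
# the merit function is the sum of the log-prior and the log-likelihood.  The
# prior term can be evaluated directly from the distributions attached above,
# while the likelihood term additionally requires a computed forward model
# (e.g. a hypothetical b.run_compute(model='mymodel')) to compare against the data.
lnprior = b.calculate_lnp('priors@myemceesolver')
# lnprob = lnprior + b.calculate_lnlikelihood(model='mymodel')  # 'mymodel' is a hypothetical model tag
print(lnprior)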
print(b.get_parameter('priors_combine').description)
_ = b.plot_distribution_collection('priors@myemceesolver', show=True)
b.calculate_lnp('priors@myemceesolver')
b.set_value('priors_combine', 'first')
_ = b.plot_distribution_collection('priors@myemceesolver', show=True)
b.calculate_lnp('priors@myemceesolver')
Explanation: Now we'll look at the effect of priors_combine on the resulting priors distributions that would be sent to the merit function.
End of explanation |
9,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reminder (Before we start)
Check whether the kernel at the top right is set to "urbs" in order to be able to run this script.
Example 1
Step1: <span style="color
Step2: Now let's solve the model!
Step3: For more on solver status and termination conditions
Step4: <span style="color
Step5: <span style="color
Step6: <span style="color
Step7: <span style="color
Step8: Plotting with matplotlib | Python Code:
# Load the object "environ" from the library "pyomo" which is already installed in our urbs environment.
# Whenever we will use it, we will call it using its alias "pyo"
import pyomo.environ as pyo
# Let's create a ConcreteModel object and fill it with life!
model = pyo.ConcreteModel()
model.name = "Example1"
## Variables
# Our variable "s" (supply) has two dimensions: time and technology. It is always positive.
model.s = pyo.Var(["t1", "t2", "t3", "t4", "t5"], ["Gas", "Biomass"], domain=pyo.NonNegativeReals)
## Objective function
# The objective is also a variable, albeit a special one which we will optimize.
model.OBJ = pyo.Objective(expr=50*model.s["t1", "Gas"] + 25*model.s["t1", "Biomass"] +\
50*model.s["t2", "Gas"] + 25*model.s["t2", "Biomass"] +\
50*model.s["t3", "Gas"] + 25*model.s["t3", "Biomass"] +\
50*model.s["t4", "Gas"] + 25*model.s["t4", "Biomass"] +\
50*model.s["t5", "Gas"] + 25*model.s["t5", "Biomass"])
## Constraints
# The supply from the Gas power plant cannot exceed its capacity of 100 MW
model.ConstraintGasCap1 = pyo.Constraint(expr = model.s["t1", "Gas"] <= 100)
model.ConstraintGasCap2 = pyo.Constraint(expr = model.s["t2", "Gas"] <= 100)
model.ConstraintGasCap3 = pyo.Constraint(expr = model.s["t3", "Gas"] <= 100)
model.ConstraintGasCap4 = pyo.Constraint(expr = model.s["t4", "Gas"] <= 100)
model.ConstraintGasCap5 = pyo.Constraint(expr = model.s["t5", "Gas"] <= 100)
# The supply from the Biomass power plant cannot exceed its capacity of 30 MW
model.ConstraintBiomassCap1 = pyo.Constraint(expr = model.s["t1", "Biomass"] <= 30)
model.ConstraintBiomassCap2 = pyo.Constraint(expr = model.s["t2", "Biomass"] <= 30)
model.ConstraintBiomassCap3 = pyo.Constraint(expr = model.s["t3", "Biomass"] <= 30)
model.ConstraintBiomassCap4 = pyo.Constraint(expr = model.s["t4", "Biomass"] <= 30)
model.ConstraintBiomassCap5 = pyo.Constraint(expr = model.s["t5", "Biomass"] <= 30)
# The supply should at least be equal to the demand
model.ConstraintDem1 = pyo.Constraint(expr = model.s["t1", "Gas"] + model.s["t1", "Biomass"] >= 60)
model.ConstraintDem2 = pyo.Constraint(expr = model.s["t2", "Gas"] + model.s["t2", "Biomass"] >= 100)
model.ConstraintDem3 = pyo.Constraint(expr = model.s["t3", "Gas"] + model.s["t3", "Biomass"] >= 120)
model.ConstraintDem4 = pyo.Constraint(expr = model.s["t4", "Gas"] + model.s["t4", "Biomass"] >= 80)
model.ConstraintDem5 = pyo.Constraint(expr = model.s["t5", "Gas"] + model.s["t5", "Biomass"] >= 30)
# Write the LP mathematical problem that is solved to a file (optional)
# Here, we are reporting the model itself, not its solution
model.write("01_concrete_a.lp")
Explanation: Reminder (Before we start)
Check whether the kernel at the top right is set to "urbs" in order to be able to run this script.
Example 1: Electricity supply of an island
Learning objectives
Translate a mathematical optimization problem into a pyomo ConcreteModel/AbstractModel
Recognize the basic structure of an optimization model
Report the results of pyomo in different formats
Run and edit scripts using pyomo, pandas and matplotlib
Mathematical formulation
We start with a simple example. Let's assume we have a gas power plant ($P_{gas}$ = 100 MW) and a biomass power plant ($P_{bm}$ = 30 MW) supplying an island. The cost of supplying 1 MWh of electricity using the gas power plant is EUR 50, whereas the cost of using biomass is 25 EUR/MWh. These costs include operation and maintenance costs and fuel costs. The efficiency of the power plants is already taken into account.
We would like to minimize the cost of operating the system for a given demand of electricity $d(t)$.
$$\min \quad 50s_{gas}(t) + 25s_{bm}(t)$$
$$s.t. \quad s_{gas}(t) + s_{bm}(t) \geq d(t)$$
The supply from each power plant is non-negative:
$$s_{gas}(t), s_{bm}(t) \geq 0$$
It cannot exceed the capacity of the power plants:
$$s_{gas}(t) \leq 100$$
$$s_{bm}(t) \leq 30$$
Further, we define the demand as follows:
$$d(t) = [60, 100, 120, 80, 30]$$
<span style="color:blue">Task</span>
Try to solve this problem with pen and paper!
Formulation as a pyomo ConcreteModel
We could solve this problem using a pyomo ConcreteModel:
End of explanation
# Try this now
model.write("01_concrete_b.lp", io_options={'symbolic_solver_labels': True})
Explanation: <span style="color:blue">Task</span>
Open the file "01_concrete_a.lp" with a text editor. Can you recognize the variables and constraints?
End of explanation
# We first load the solver
opt = pyo.SolverFactory('glpk') # glpk: GNU Linear Programming Kit
results = opt.solve(model)
# First way of reporting the solution
results
print(results.Solver)
Explanation: Now let's solve the model!
End of explanation
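# A quick hand calculation to compare against the solver output (an added
# sanity check, not part of the original script): the cheaper biomass plant
# is dispatched first up to its 30 MW capacity, and gas covers the rest.
demand = [60, 100, 120, 80, 30]
bm_supply = [min(30, d) for d in demand]
gas_supply = [d - s for d, s in zip(demand, bm_supply)]
expected_cost = 25 * sum(bm_supply) + 50 * sum(gas_supply)
print("expected optimal cost:", expected_cost)  # 15750 EUR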
model.display()
Explanation: For more on solver status and termination conditions:
http://www.pyomo.org/blog/2015/1/8/accessing-solver
End of explanation
# Here we are loading all the objects within "pyomo.environ" and we can use them directly.
# The script will recognize their names. This is practical, but not a good coding style.
# There could be issues if you use one of the "reserved names" that are already defined in the library.
from pyomo.environ import *
model = AbstractModel()
# Sets
model.I = Set() # we could define the dimensions, or let pyomo determine them from the data
model.J = Set()
model.T = Set()
# Parameters
model.a = Param(model.I, model.J)
model.b = Param(model.I, model.T)
model.c = Param(model.J)
# Variables
model.x = Var(model.J, model.T, domain=NonNegativeReals) # the variable is indexed by the set J and the set T
# Objective function
def obj_expression(model):
sigma = 0
for t in model.T:
for j in model.J:
sigma = sigma + model.c[j] * model.x[(j, t)]
return sigma
model.OBJ = Objective(rule=obj_expression)
# Constraints
def ax_constraint_rule(model, i, t):
# return the expression for the constraint for i
return sum(model.a[i,j] * model.x[j, t] for j in model.J) >= model.b[i, t]
model.AxbConstraint = Constraint(model.I, model.T, rule=ax_constraint_rule) # this creates one constraint for each member of the set model.I
model.dual = Suffix(direction=Suffix.IMPORT)
Explanation: <span style="color:blue">Task</span>
Try to comment one or multiple constraints. What happens?
Try to maximize instead of minimizing the costs. (Tip: add the option 'sense=pyo.maximize' into the objective function)
How easy is it to add another power plant? Another time step?
Formulation as a pyomo AbstractModel
One way to add flexibility is to write the problem abstractly. For example, the following equations represent a linear program (LP) to find optimal values for the vector $x$ (in our case, the hourly supply from the power plants) with parameters $c_j$ (costs), $a_{i,j}$ and $b_i$ (constraints):
$$ \begin{array}{lll} \min & \sum_{j=1}^n c_j x_{j,t} & \\
s.t. & \sum_{j=1}^n a_{i,j} x_{j,t} \geq b_{i,t} & \forall i = 1 \ldots m\\
& x_{j,t} \geq 0 & \forall j = 1 \ldots n
\end{array} $$
For that, there is the pyomo class AbstractModel:
End of explanation
# We can create an instance without filling it with data
instance = model.create_instance()
instance.pprint()
# We can load data from a file (written in AMPL format)
data = DataPortal()
data.load(filename='01_abstract.dat')
# You can view the defined sets and parameters here
list(data.keys())
Explanation: <span style="color:blue">Task</span>
With pen and paper, determine the parameters a, b, and c to replicate the concrete model.
By running the code, we create an abstract model. Now we need to create an instance of it:
End of explanation
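# For reference, AMPL-format data files read by DataPortal look roughly like
# the sketch below (illustrative only -- the actual 01_abstract.dat may be
# organized differently):
#
#   set J := Gas Biomass ;
#   set T := t1 t2 t3 t4 t5 ;
#   param c := Gas 50 Biomass 25 ;
#   param a : Gas Biomass := ...      (one row per member of set I)
#   param b : t1 t2 t3 t4 t5 := ...   (one row per member of set I)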
# We can create an instance that is filled with input data
instance = model.create_instance(data)
instance.pprint()
opt = SolverFactory('glpk')
status = opt.solve(instance)
status
instance.display()
# Another way of reporting the results
instance.solutions.store_to(status)
status.write(filename='01_abstract_results.json', format='json')
print ("Duals")
for c in instance.component_objects(Constraint, active=True):
print (" Constraint",c)
for index in c:
try:
print (" ", index, instance.dual[c[index]])
except: # if there is an error, skip that constraint
continue
Explanation: <span style="color:blue">Task</span>
Compare your results for the parameters a, b, and c with the used values in 01_abstract.dat.
End of explanation
# load the library first
import pandas as pd
# we will create a dictionary from the model instance variable x
# the keys of the dictionary are the indices, the values are the x values
supply_data = {(j, t): value(x) for (j, t), x in instance.x.items()}
# create a DataFrame object from that dictionary
df_supply = pd.DataFrame.from_dict(supply_data, orient="index", columns=["SE [MWh]"])
df_supply
# Let's make the index look better - through a multiindex
df_supply.index = pd.MultiIndex.from_tuples(df_supply.index, names=('Technology', 't'))
# Get rid of the letter t in the timesteps
df_supply.index = df_supply.index.set_levels(df_supply.index.levels[1].str.replace('t', ''), level=1)
# Show the DataFrame
df_supply
# This looks already good... but let's try to pivot the table, so that technologies appear as columns
df_supply = df_supply.unstack(level=0)
df_supply
# The columns have two levels now, let's get rid of the first one
df_supply = df_supply.droplevel(0, axis=1)
df_supply
# Let's add the unit to the names of the technologies
df_supply.columns = [x + " SE [MWh]" for x in df_supply.columns]
df_supply
# We repeat this for the demand
# Since I know that the first character of the timesteps is always t, I index from 1 in t[1:] to get rid of it
demand_data = {t[1:]: value(x) for (i, t), x in instance.b.items() if i == "Dem"}
# create a DataFrame object from that dictionary
df_demand = pd.DataFrame.from_dict(demand_data, orient="index", columns=["D [MWh]"])
df_demand
# Let' save the supply and demand into two different csv files
# ! Adapt the decimal character and the column separator to your local settings !
df_supply.to_csv("01_supply.csv", sep=";", decimal=",")
df_demand.to_csv("01_demand.csv", sep=";", decimal=",")
Explanation: <span style="color:blue">Task</span>
Set the demand in the last time step to 140. What happens?
How easy is it to add another power plant? Another time step?
Reporting into pandas DataFrame objects
pandas is a package that allows you to organize your data in multidimensional "tables", so-called DataFrame objects. It is useful if you want to export your results into csv or Microsoft Excel format.
End of explanation
# Most basic plot
# pandas has in-built functions from matplotlib, so we can plot a DataFrame object
%matplotlib inline
df_supply.plot.area()
# import the library
import matplotlib.pyplot as plt
# Create an empty figure
fig = plt.figure()
# Now let's make an interactive object that we can edit with a GUI
%matplotlib
plot_supply = df_supply.plot.area()
# Add these options:
# To change the colors: color=["darkgreen", "gray"]
# To add title: title = "Example 1"
# Let's add the demand to the same plot
df_demand.plot.line(ax=plot_supply, color="k")
Explanation: Plotting with matplotlib
End of explanation |
9,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 2
Imports
Step2: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
Step4: Write a function that computes the factorial of small numbers using a Python loop.
Step5: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 2
Imports
End of explanation
def np_fact(n):
    """Compute n! = n*(n-1)*...*1 using Numpy."""
#Creates array from 1 to n
c = np.arange(1,n+1,1)
#Returns a 1D array of the factorials of each number
a = c.cumprod()
#Settles the 0 and 1 case
if n == 0 or n == 1:
return 1
#returns the last number in the array (The one we are looking for)
else:
return a[-1]
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
End of explanation
def loop_fact(n):
    """Compute n! using a Python for loop."""
#Creates a list from 0 to n
array = [0,n+1]
#i is a counting variable, number is the placeholder
i = 0
number = 1
#0 and 1 case
if n == 0 or n == 1:
return 1
#while i is less than the number count up to the number and multiply it by the previous numbers (number)
else:
while i < n:
i += 1
number = number * i
return number
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Write a function that computes the factorial of small numbers using a Python loop.
End of explanation
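# An extra cross-check (not part of the exercise): both implementations agree
# with the standard library for small inputs.  Note that np_fact relies on
# fixed-width integer arrays, so on typical 64-bit platforms it silently
# overflows for n >= 21, while loop_fact uses Python's arbitrary-precision ints.
import math
assert all(np_fact(n) == loop_fact(n) == math.factorial(n) for n in range(0, 15))
print("cross-check passed")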
%timeit -n1 -r1 np_fact(50)
%timeit -n1 -r1 loop_fact(50)
Explanation: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is:
python
%timeit -n1 -r1 function_to_time()
End of explanation |
9,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing binary decision trees
The goal of this notebook is to implement your own binary decision tree classifier. You will
Step1: Load the lending club dataset
We will be using the same LendingClub dataset as in the previous assignment.
Step2: Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical
features
Step4: Let's explore what the dataset looks like.
Step5: Transform categorical data into binary features
In this assignment, we will implement binary decision trees (decision trees for binary features, a specific case of categorical variables taking on two values, e.g., true/false). Since all of our features are currently categorical features, we want to turn them into binary features.
For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature
{'home_ownership'
Step6: Let's see what the feature columns look like now
Step7: Let's explore what one of these columns looks like
Step8: This column is set to 1 if the loan grade is A and 0 otherwise.
Checkpoint
Step9: Train-test split
We split the data into a train test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
Step10: Decision tree implementation
In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections.
Function to count number of mistakes while predicting majority class
Recall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node.
Now, we will write a function that calculates the number of missclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree.
Note
Step11: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
Step12: Function to pick best feature to split on
The function best_splitting_feature takes 3 arguments
Step13: To test your best_splitting_feature function, run the following code
Step14: Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values
Step15: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions
Step16: Here is a recursive function to count the nodes in your tree
Step17: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step18: Build the tree!
Now that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree.
Warning
Step19: Making predictions with a decision tree
As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True.
Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
Step20: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
Step21: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class
Step22: Quiz Question
Step23: Now, let's use this function to evaluate the classification error on the test set.
Step24: Quiz Question
Step25: Quiz Question
Step26: Exploring the left subtree of the left subtree
Step27: Quiz Question | Python Code:
import pandas as pd
import numpy as np
Explanation: Implementing binary decision trees
The goal of this notebook is to implement your own binary decision tree classifier. You will:
Use SFrames to do some feature engineering.
Transform categorical variables into binary variables.
Write a function to compute the number of misclassified examples in an intermediate node.
Write a function to find the best feature to split on.
Build a binary decision tree from scratch.
Make predictions using the decision tree.
Evaluate the accuracy of the decision tree.
Visualize the decision at the root node.
Important Note: In this assignment, we will focus on building decision trees where the data contain only binary (0 or 1) features. This allows us to avoid dealing with:
* Multiple intermediate nodes in a split
* The thresholding issues of real-valued features.
This assignment may be challenging, so brace yourself :)
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
loans = pd.read_csv('../../data/lending-club-data.csv')
Explanation: Load the lending club dataset
We will be using the same LendingClub dataset as in the previous assignment.
End of explanation
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
del loans['bad_loans']
Explanation: Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans_data = loans[features + [target]]
Explanation: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical
features:
grade of the loan
the length of the loan term
the home ownership status: own, mortgage, rent
number of years of employment.
Since we are building a binary decision tree, we will have to convert these categorical features to a binary representation in a subsequent section using 1-hot encoding.
End of explanation
loans_data
Explanation: Let's explore what the dataset looks like.
End of explanation
from sklearn.feature_extraction import DictVectorizer
dvec = DictVectorizer(sparse=False)
X = dvec.fit_transform(loans_data.transpose().to_dict().values())
loans_data = pd.get_dummies(loans_data)
for column in loans_data.columns:
loans_data[column] = loans_data[column].fillna(0)
Explanation: Transform categorical data into binary features
In this assignment, we will implement binary decision trees (decision trees for binary features, a specific case of categorical variables taking on two values, e.g., true/false). Since all of our features are currently categorical features, we want to turn them into binary features.
For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature
{'home_ownership': 'RENT'}
we want to turn this into three features:
{
'home_ownership = OWN' : 0,
'home_ownership = MORTGAGE' : 0,
'home_ownership = RENT' : 1
}
Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
End of explanation
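# A tiny illustration (an added example, not part of the assignment) of the
# binarization described above, using a one-row frame with an explicit
# category list so that all three home_ownership columns appear:
example = pd.DataFrame({'home_ownership': pd.Categorical(['RENT'],
                                                         categories=['OWN', 'MORTGAGE', 'RENT'])})
print(pd.get_dummies(example))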
features = loans_data.columns.drop('safe_loans')
features
print("Number of features (after binarizing categorical variables) = %s" % len(features))
Explanation: Let's see what the feature columns look like now:
End of explanation
loans_data['grade_A']
Explanation: Let's explore what one of these columns looks like:
End of explanation
print("Total number of grade.A loans : %s" % loans_data['grade_A'].sum())
print("Expexted answer : 6422")
Explanation: This column is set to 1 if the loan grade is A and 0 otherwise.
Checkpoint: Make sure the following answers match up.
End of explanation
#train_data, test_data = loans_data.random_split(.8, seed=1)
#train_data, validation_data = loans_data.random_split(.8, seed=1)
import json
with open('../../data/module-5-assignment-2-train-idx.json') as json_data:
train_idx = json.load(json_data)
json_data.close()
train_data = loans_data.iloc[train_idx]
with open('../../data/module-5-assignment-2-test-idx.json') as json_data:
test_idx = json.load(json_data)
json_data.close()
test_data = loans_data.iloc[test_idx]
Explanation: Train-test split
We split the data into a train/test split with 80% of the data in the training set and 20% of the data in the test set. Here we load the provided train and test row indices (instead of a random split with seed=1) so that everyone gets the same result.
End of explanation
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
safe_loans = labels_in_node[labels_in_node == 1].size
# Count the number of -1's (risky loans)
risky_loans = labels_in_node[labels_in_node == -1].size
# Return the number of mistakes that the majority classifier makes.
if safe_loans > risky_loans:
return risky_loans
else:
return safe_loans
Explanation: Decision tree implementation
In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections.
Function to count number of mistakes while predicting majority class
Recall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node.
Now, we will write a function that calculates the number of misclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree.
Note: Keep in mind that in order to compute the number of mistakes for a majority classifier, we only need the label (y values) of the data points in the node.
Steps to follow :
* Step 1: Calculate the number of safe loans and risky loans.
* Step 2: Since we are assuming majority class prediction, all the data points that are not in the majority class are considered mistakes.
* Step 3: Return the number of mistakes.
Now, let us write the function intermediate_node_num_mistakes which computes the number of misclassified examples of an intermediate node given the set of labels (y values) of the data points contained in the node. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.
End of explanation
# Test case 1
example_labels = np.array([-1, -1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print('Test passed!')
else:
print('Test 1 failed... try again!')
# Test case 2
example_labels = np.array([-1, -1, 1, 1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print('Test passed!')
else:
print('Test 2 failed... try again!')
# Test case 3
example_labels = np.array([-1, -1, -1, -1, -1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print('Test passed!')
else:
print('Test 3 failed... try again!')
Explanation: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
End of explanation
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
    # Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
left_mistakes = intermediate_node_num_mistakes(left_split[target].values)
# Calculate the number of misclassified examples in the right split.
right_mistakes = intermediate_node_num_mistakes(right_split[target].values)
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
        error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_error = error
best_feature = feature
return best_feature # Return the best feature we found
Explanation: Function to pick best feature to split on
The function best_splitting_feature takes 3 arguments:
1. The data (SFrame of data which includes all of the feature columns and label column)
2. The features to consider for splits (a list of strings of column names to consider for splits)
3. The name of the target/label column (string)
The function will loop through the list of possible features, and consider splitting on each of them. It will calculate the classification error of each split and return the feature that had the smallest classification error when split on.
Recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Follow these steps:
* Step 1: Loop over each feature in the feature list
* Step 2: Within the loop, split the data into two groups: one group where all of the data has feature value 0 or False (we will call this the left split), and one group where all of the data has feature value 1 or True (we will call this the right split). Make sure the left split corresponds with 0 and the right split corresponds with 1 to ensure your implementation fits with our implementation of the tree building process.
* Step 3: Calculate the number of misclassified examples in both groups of data and use the above formula to compute the classification error.
* Step 4: If the computed error is smaller than the best error found so far, store this feature and its error.
This may seem like a lot, but we have provided pseudocode in the comments in order to help you implement the function correctly.
Note: Remember that since we are only dealing with binary features, we do not have to consider thresholds for real-valued features. This makes the implementation of this function much easier.
Fill in the places where you find ## YOUR CODE HERE. There are five places in this function for you to fill in.
End of explanation
if best_splitting_feature(train_data, features, 'safe_loans') == 'term_ 36 months':
print('Test passed!')
else:
print('Test failed... try again!')
Explanation: To test your best_splitting_feature function, run the following code:
End of explanation
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1
else:
leaf['prediction'] = -1
# Return the leaf node
return leaf
Explanation: Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values:
{
'is_leaf' : True/False.
'prediction' : Prediction at the leaf node.
'left' : (dictionary corresponding to the left tree).
'right' : (dictionary corresponding to the right tree).
'splitting_feature' : The feature that this node splits on.
}
First, we will write a function that creates a leaf node given a set of target values. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.
End of explanation
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print("--------------------------------------------------------------------")
print("Subtree, depth = %s (%s data points)." % (current_depth, len(target_values)))
# Stopping condition 1
# (Check if there are mistakes at current node.
# Recall you wrote a function intermediate_node_num_mistakes to compute this.)
if intermediate_node_num_mistakes(target_values) == 0:
print("Stopping condition 1 reached.")
# If not mistakes at current node, make current node a leaf node
return create_leaf(target_values)
# Stopping condition 2 (check if there are remaining features to consider splitting on)
if remaining_features.size == 0:
print("Stopping condition 2 reached.")
# If there are no remaining features to consider, make current node a leaf node
return create_leaf(target_values)
# Additional stopping condition (limit tree depth)
if current_depth >= max_depth:
print("Reached maximum depth. Stopping for now.")
# If the max tree depth has been reached, make current node a leaf node
return create_leaf(target_values)
# Find the best splitting feature (recall the function best_splitting_feature implemented above)
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
remaining_features = remaining_features.drop(splitting_feature)
print("Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split)))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print("Creating leaf node.")
return create_leaf(left_split[target])
if len(right_split) == len(data):
print("Creating leaf node.")
return create_leaf(right_split[target])
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions:
1. Stopping condition 1: All data points in a node are from the same class.
2. Stopping condition 2: No more features to split on.
3. Additional stopping condition: In addition to the above two stopping conditions covered in lecture, in this assignment we will also consider a stopping condition based on the max_depth of the tree. By not letting the tree grow too deep, we will save computational effort in the learning process.
Now, we will write down the skeleton of the learning algorithm. Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
End of explanation
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Explanation: Here is a recursive function to count the nodes in your tree:
End of explanation
small_data_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 3)
if count_nodes(small_data_decision_tree) == 13:
print('Test passed!')
else:
print('Test failed... try again!')
print('Number of nodes found :', count_nodes(small_data_decision_tree))
print('Number of nodes that should be there : 13')
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
my_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6)
Explanation: Build the tree!
Now that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree.
Warning: This code block may take 1-2 minutes to learn.
End of explanation
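# A quick sanity check (not required by the assignment): how many nodes does
# the depth-6 tree contain?
print("nodes in my_decision_tree:", count_nodes(my_decision_tree))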
def classify(tree, x, annotate = False):
#print(x)
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print("At leaf, predicting %s" % tree['prediction'])
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print("Split on %s = %s" % (tree['splitting_feature'], split_feature_value))
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
Explanation: Making predictions with a decision tree
As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True.
Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
End of explanation
test_data.iloc[0]
print('Predicted class: %s ' % classify(my_decision_tree, test_data.iloc[0]))
Explanation: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
End of explanation
classify(my_decision_tree, test_data.iloc[0], annotate=True)
Explanation: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class:
End of explanation
def evaluate_classification_error(tree, data, target):
# Apply the classify(tree, x) to each row in your data
prediction = [classify(tree, data.iloc[i])
for i in range(0, len(data))]
#prediction = data.apply(lambda x: classify(tree, x))
label = data[target]
mistakes = 0
for i in range(0, len(prediction)):
if prediction[i] != label.iloc[i]:
mistakes = mistakes + 1
return mistakes/len(prediction)
# Once you've made the predictions, calculate the classification error and return it
Explanation: Quiz Question: What was the feature that my_decision_tree first split on while making the prediction for test_data[0]?
Quiz Question: What was the first feature that lead to a right split of test_data[0]?
Quiz Question: What was the last feature split on before reaching a leaf node for test_data[0]?
Evaluating your decision tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Now, write a function called evaluate_classification_error that takes in as input:
1. tree (as described above)
2. data (an SFrame)
3. target (a string - the name of the target/label column)
This function should calculate a prediction (class label) for each row in data using the decision tree and return the classification error computed using the above formula. Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
End of explanation
evaluate_classification_error(my_decision_tree, test_data, target)
Explanation: Now, let's use this function to evaluate the classification error on the test set.
End of explanation
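# For context (an added comparison, not part of the assignment): the error of
# always predicting the majority class on the test set, computed with the
# helper written earlier.
majority_class_error = intermediate_node_num_mistakes(test_data[target].values) / float(len(test_data))
print("majority-class classification error:", majority_class_error)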
def print_stump(tree, name = 'root'):
    split_name = tree['splitting_feature'] # split_name is something like 'term_ 36 months'
if split_name is None:
print("(leaf, label: %s)" % tree['prediction'])
return None
    split_feature, split_value = split_name.rsplit('_', 1)  # rsplit keeps multi-underscore feature names (e.g. 'emp_length_...') intact
print(' %s' % name)
print(' |---------------|----------------|')
print(' | |')
print(' | |')
print(' | |')
print(' [{0} == 0] [{0} == 1] '.format(split_name))
print(' | |')
print(' | |')
print(' | |')
print(' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree')))
print_stump(my_decision_tree)
Explanation: Quiz Question: Rounded to 2nd decimal point, what is the classification error of my_decision_tree on the test_data?
Printing out a decision stump
As discussed in the lecture, we can print out a single decision stump (printing out the entire tree is left as an exercise to the curious reader).
End of explanation
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
Explanation: Quiz Question: What is the feature that is used for the split at the root node?
Exploring the intermediate left subtree
The tree is a recursive dictionary, so we do have access to all the nodes! We can use
* my_decision_tree['left'] to go left
* my_decision_tree['right'] to go right
End of explanation
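# A small helper (an added sketch, not part of the assignment) that reads off
# the splitting features along the left-most branch by following the recursive
# dictionary structure described above.
node = my_decision_tree
leftmost_splits = []
while not node['is_leaf']:
    leftmost_splits.append(node['splitting_feature'])
    node = node['left']
print(leftmost_splits)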
print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])
Explanation: Exploring the left subtree of the left subtree
End of explanation
print_stump(my_decision_tree['right'], my_decision_tree['right']['splitting_feature'])
Explanation: Quiz Question: What is the path of the first 3 feature splits considered along the left-most branch of my_decision_tree?
Quiz Question: What is the path of the first 3 feature splits considered along the right-most branch of my_decision_tree?
End of explanation |
9,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Radial Velocity Offsets (rv_offset)
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
Radial velocity offsets allow for a per-component and per-dataset offset applied to the synthetic RVs.
First let's run a model without any offsets applied.
Step3: and now let's look at the rv_offset parameters and set an offset for the primary RV.
Step4: Now let's run another model, with the offset applied to the primary component.
Step5: Influence on Radial Velocities | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Radial Velocity Offsets (rv_offset)
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.set_value_all('atm', 'blackbody')
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
b.run_compute(model='without_offset')
Explanation: Relevant Parameters
Radial velocity offsets allow for a per-component and per-dataset offset applied to the synthetic RVs.
First let's run a model without any offsets applied.
End of explanation
print(b.filter(qualifier='rv_offset'))
b.set_value(qualifier='rv_offset', component='primary', value=25)
Explanation: and now let's look at the rv_offset parameters and set an offset for the primary RV.
End of explanation
b.run_compute(model='with_offset')
Explanation: Now let's run another model, with the offset applied to the primary component.
End of explanation
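# A quick numerical check (an added sketch, not from the original notebook):
# the primary-star RVs of the two models should differ by the constant
# 25 km/s offset applied above.
rvs_without = b.get_value(qualifier='rvs', component='primary', model='without_offset', context='model')
rvs_with = b.get_value(qualifier='rvs', component='primary', model='with_offset', context='model')
print((rvs_with - rvs_without)[:5])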
afig, mplfig = b.plot(legend=True, show=True)
Explanation: Influence on Radial Velocities
End of explanation |
9,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression - Case Study - part 1
A well known case for regression with continuous features is the Boston housing dataset. It is so well known that it is distributed as part of the data science libraries, like scikit-learn.
Step1: Load and explore the dataset
For convenience, we load the features and target in two pandas dataframes X and y, and concatenate them in a dataframe called df.
Step2: The first lines of this dataset look like this, the last column MEDV being the target (price).
Step3: The seaborn package provides a very convenient way of plotting the values of features and target
Step4: We could already dive a little more into this dataset and its features but this will belong in the next parts. For now, we will use them untouched and focus on our first simple prediction pipeline.
Our first prediction pipeline
Let's define a function that will go through the whole prediction, post-processing and evaluation, so that we can run it easily.
In this pipeline, we will
Step5: Simple linear regression
The average number of rooms could be an influential feature for evaluating the price, let's compute the linear regression by ordinary least squares using this feature only
Step6: The OLS() method from the statsmodels library gives an extensive insight into the characteristics of an OLS regression
Step7: We can at least state that the simple linear regression with this feature only gives pretty bad results.
Multiple linear regression
Let's try multiple linear regression with all the features of the untouched dataset
Step8: Once again, we can get a more detailed insight | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from sklearn import linear_model
from sklearn.metrics import explained_variance_score, mean_squared_error
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
import statsmodels.api as sm
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.DESCR)
Explanation: Linear Regression - Case Study - part 1
A well known case for regression with continuous features is the Boston housing dataset. It is so well known that it is distributed as part of the data science libraries, like scikit-learn.
End of explanation
X = pd.DataFrame(boston.data)
X.columns = boston.feature_names
y = pd.DataFrame(boston.target)
y.columns = ['MEDV']
df = pd.concat([X,y], axis=1)
Explanation: Load and explore the dataset
For convenience, we load the features and target in two pandas dataframes X and y, and concatenate them in a dataframe called df.
End of explanation
df.head()
Explanation: The first lines of this dataset look like this, the last column MEDV being the target (price).
End of explanation
fig = plt.figure(figsize=(18,18))
for i, col in enumerate(X.columns):
ax = fig.add_subplot(4,4,i+1)
sns.distplot(X[col], ax=ax)
ax = fig.add_subplot(4,4,len(X.columns)+2)
sns.distplot(y.MEDV, ax=ax, color='r');
Explanation: The seaborn package provides a very convenient way of plotting the values of features and target:
End of explanation
def run_regression(X, y):
# Split the data into training/test sets
X_train = X.values[:-100]
X_test = X.values[-100:]
# Split the targets into training/test sets
y_train = y.values[:-100]
y_test = y.values[-100:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# Make prediction
y_pred = regr.predict(X_test)
    # Compute the mean squared error on the test set
mse = mean_squared_error(y_pred, y_test)
# Plot outputs
fig = plt.figure(figsize=(12,6))
ax = fig.add_subplot(121)
ax.scatter(y_test, y_pred, color='black')
ax.plot([0,40],[0,40], color='g')
ax.set_xlim([0,40])
ax.set_ylim([0,40])
ax.set_xlabel('Actual value', fontsize=14)
ax.set_ylabel('Predicted value', fontsize=14)
ax.set_title('Mean squared error: {:.2f}'.format(mse), fontsize=16)
ax2 = fig.add_subplot(122)
sns.distplot(y_test-y_pred, ax=ax2, bins=15)
ax2.set_xlim([-30,30])
ax2.axvline(0, color='k', linestyle='dotted')
ax2.set_title('Residuals distribution', fontsize=16)
Explanation: We could already dive a little deeper into this dataset and its features, but that will come in the next parts. For now, we will use them untouched and focus on our first simple prediction pipeline.
Our first prediction pipeline
Let's define a function that will go through the whole prediction, post-processing and evaluation, so that we can run it easily.
In this pipeline, we will:
split the dataset into a training and a test set
This will be done in a very simple way for the moment: the test set corresponds to the last 100 values (approx. 20%) of the dataset.
fit a linear regression model on the training data
use this model to make a prediction on test data
compute the mean squared error between the predicted and true targets
plot 1/ the predicted values vs true values, 2/ the distribution of residuals
In the next parts of this case study we will address the limitations of this first pipeline, but it illustrates some important steps.
End of explanation
run_regression(pd.DataFrame(X.RM),y)
Explanation: Simple linear regression
The average number of rooms could be an influential feature for evaluating the price, so let's compute the linear regression by ordinary least squares using this feature only:
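As a quick check of that intuition (an aside, not part of the original notebook), pandas can rank the features by their linear correlation with the target:
# Aside: correlation of each column with the target MEDV (df was built above).
print(df.corr()['MEDV'].sort_values(ascending=False))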
End of explanation
olsy = y.values
olsX = X.RM.values
olsX = sm.add_constant(olsX)
model = sm.OLS(olsy, olsX)
results = model.fit()
print(results.summary())
Explanation: The OLS() method from the statsmodels library gives an extensive insight into the characteristics of an OLS regression:
End of explanation
run_regression(X,y)
Explanation: We can at least state that the simple linear regression with this feature only gives pretty bad results.
Multiple linear regression
Let's try multiple linear regression with all the features of the untouched dataset:
End of explanation
olsy = y.values
olsX = X.values
olsX = sm.add_constant(olsX)
model = sm.OLS(olsy, olsX)  # fit on the design matrix that includes the added constant
results = model.fit()
print(results.summary())
Explanation: Once again, we can get a more detailed insight:
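Beyond the printed summary, the same headline statistics are available programmatically on the fitted results object; a minimal sketch using standard statsmodels RegressionResults attributes:
# Sketch: pull a few key statistics from the fitted OLS results object.
print("R-squared:          {:.3f}".format(results.rsquared))
print("Adjusted R-squared: {:.3f}".format(results.rsquared_adj))
print("Coefficients:")
print(results.params)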
End of explanation |
9,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaia TGAS + 2MASS + WISE
The provided Gaia dataset is a dump of the Gaia science archive's match between the astrometry in the Tycho-Gaia Astrometric Solution (TGAS) and photometric sources in 2MASS and WISE. The data contains all of the columns provided in the TGAS catalog (described here) along with photometric columns from the 2MASS and WISE catalogs.
The 2MASS photometry is in the $J$, $H$, and $K_s$ bands, so the corresponding magnitude measurements are stored in the columns j_m, h_m, ks_m. The uncertainties for each magnitude value are in the columns j_msigcom, h_msigcom, ks_msigcom.
The WISE filters are named $W1$-$W4$, so the magnitudes and uncertainties are w*mpro and w*mpro_error with the * replaced by the filter number, 1-4.
Step1: Reading the Gaia TGAS data
The data are all stored in a single FITS binary table and can be read using the astropy.table.Table class
Step2: As an example of using the data, let's make two color-magnitude diagrams for all stars within 256 pc
Step3: Compute absolute magnitude using the distance to each star, and compute colors by differencing magnitudes | Python Code:
from os import path
import numpy as np
import astropy.coordinates as coord
import astropy.units as u
from astropy.io import fits
from astropy.table import Table
import matplotlib.pyplot as plt
plt.style.use('notebook.mplstyle')
%matplotlib inline
import numpy as np
data_path = '../data/'
Explanation: Gaia TGAS + 2MASS + WISE
The provided Gaia dataset is a dump of the Gaia science archive's match between the astrometry in the Tycho-Gaia Astrometric Solution (TGAS) and photometric sources in 2MASS and WISE. The data contains all of the columns provided in the TGAS catalog (described here) along with photometric columns from the 2MASS and WISE catalogs.
The 2MASS photometry is in the $J$, $H$, and $K_s$ bands, so the corresponding magnitude measurements are stored in the columns j_m, h_m, ks_m. The uncertainties for each magnitude value are in the columns j_msigcom, h_msigcom, ks_msigcom.
The WISE filters are named $W1$-$W4$, so the magnitudes and uncertainties are w*mpro and w*mpro_error with the * replaced by the filter number, 1-4.
End of explanation
tgas = Table.read(path.join(data_path, 'gaia', 'tgas_2mass_wise.fits'))
print(tgas.colnames)
Explanation: Reading the Gaia TGAS data
The data are all stored in a single FITS binary table and can be read using the astropy.table.Table class:
End of explanation
with u.set_enabled_equivalencies(u.parallax()):
dist = coord.Distance((tgas['parallax'] * u.mas).to(u.pc),
allow_negative=True)
dist_cut = (dist < 256. * u.pc) & (dist > 0)
Explanation: As an example of using the data, let's make two color-magnitude diagrams for all stars within 256 pc (the distance cut used in the code below):
End of explanation
M_G = tgas['phot_g_mean_mag'][dist_cut] - dist[dist_cut].distmod.value
G_J = tgas['phot_g_mean_mag'][dist_cut] - tgas[dist_cut]['j_m']
G_W1 = tgas['phot_g_mean_mag'][dist_cut] - tgas[dist_cut]['w1mpro']
fig,axes = plt.subplots(1, 2, figsize=(12, 6), sharey=True)
axes[0].plot(G_J, M_G, marker=',', linestyle='none')
axes[1].plot(G_W1, M_G, marker=',', linestyle='none')
axes[0].set_xlim(-0.25, 2.5)
axes[0].set_ylim(9, -1)
axes[1].set_xlim(-0.25, 3.5)
axes[0].set_xlabel('$G-J$ [mag]')
axes[1].set_xlabel('$G-W1$ [mag]')
axes[0].set_ylabel('$M_G$ [mag]')
RG = (G_J > 1.) & (M_G < (6*G_J - 3.5)) & (M_G < 4.5)
fig,ax = plt.subplots(1, 1, figsize=(6, 6), sharey=True)
ax.plot(G_J, M_G, marker=',', linestyle='none')
ax.plot(G_J[RG], M_G[RG], marker=',', linestyle='none', color='r')
ax.set_xlim(-0.25, 2.5)
ax.set_ylim(9, -1)
ax.set_xlabel('$G-J$ [mag]')
ax.set_ylabel('$M_G$ [mag]')
Explanation: Compute absolute magnitude using the distance to each star, and compute colors by differencing magnitudes:
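For reference, the distance modulus used above is $\mu = 5\log_{10}(d/10\,\mathrm{pc})$, so $M_G = G - \mu$. A one-line sanity check (an aside; coord and u are already imported at the top of the notebook):
# 5*log10(100 pc / 10 pc) = 5 mag
print(coord.Distance(100 * u.pc).distmod)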
End of explanation |
9,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing binary decision trees
The goal of this notebook is to implement your own binary decision tree classifier. You will
Step1: Load the lending club dataset
We will be using the same LendingClub dataset as in the previous assignment.
Step2: Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical
features
Step4: Let's explore what the dataset looks like.
Step5: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.
Step6: Note
Step7: Let's see what the feature columns look like now
Step8: Let's explore what one of these columns looks like
Step9: This column is set to 1 if the loan grade is A and 0 otherwise.
Checkpoint
Step10: Train-test split
We split the data into a train test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
Step11: Decision tree implementation
In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections.
Function to count number of mistakes while predicting majority class
Recall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node.
Now, we will write a function that calculates the number of missclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree.
Note
Step12: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
Step13: Function to pick best feature to split on
The function best_splitting_feature takes 3 arguments
Step14: To test your best_splitting_feature function, run the following code
Step15: Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values
Step16: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions
Step17: Here is a recursive function to count the nodes in your tree
Step18: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step19: Build the tree!
Now that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree.
Warning
Step20: Making predictions with a decision tree
As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True.
Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
Step21: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
Step22: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class
Step23: Quiz question
Step24: Now, let's use this function to evaluate the classification error on the test set.
Step25: Quiz Question
Step26: Quiz Question
Step27: Exploring the left subtree of the left subtree
Step28: Quiz question | Python Code:
import graphlab
Explanation: Implementing binary decision trees
The goal of this notebook is to implement your own binary decision tree classifier. You will:
Use SFrames to do some feature engineering.
Transform categorical variables into binary variables.
Write a function to compute the number of misclassified examples in an intermediate node.
Write a function to find the best feature to split on.
Build a binary decision tree from scratch.
Make predictions using the decision tree.
Evaluate the accuracy of the decision tree.
Visualize the decision at the root node.
Important Note: In this assignment, we will focus on building decision trees where the data contain only binary (0 or 1) features. This allows us to avoid dealing with:
* Multiple intermediate nodes in a split
* The thresholding issues of real-valued features.
This assignment may be challenging, so brace yourself :)
Fire up Graphlab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
loans = graphlab.SFrame('lending-club-data.gl/')
Explanation: Load the lending club dataset
We will be using the same LendingClub dataset as in the previous assignment.
End of explanation
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
Explanation: Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
Explanation: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical
features:
grade of the loan
the length of the loan term
the home ownership status: own, mortgage, rent
number of years of employment.
Since we are building a binary decision tree, we will have to convert these categorical features to a binary representation in a subsequent section using 1-hot encoding.
End of explanation
loans
Explanation: Let's explore what the dataset looks like.
End of explanation
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
Explanation: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.
End of explanation
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Transform categorical data into binary features
In this assignment, we will implement binary decision trees (decision trees for binary features, a specific case of categorical variables taking on two values, e.g., true/false). Since all of our features are currently categorical features, we want to turn them into binary features.
For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature
{'home_ownership': 'RENT'}
we want to turn this into three features:
{
'home_ownership = OWN' : 0,
'home_ownership = MORTGAGE' : 0,
'home_ownership = RENT' : 1
}
Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
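As an aside (not part of the assignment), the same binarization can be sketched with pandas for readers who do not have GraphLab Create installed:
# Hypothetical pandas equivalent of the one-hot encoding above.
import pandas as pd
toy = pd.DataFrame({'home_ownership': ['RENT', 'OWN', 'MORTGAGE', 'RENT']})
print pd.get_dummies(toy, columns=['home_ownership'])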
End of explanation
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
print "Number of features (after binarizing categorical variables) = %s" % len(features)
loans_data.head(n=1)
Explanation: Let's see what the feature columns look like now:
End of explanation
loans_data['grade.A']
Explanation: Let's explore what one of these columns looks like:
End of explanation
print "Total number of grade.A loans : %s" % loans_data['grade.A'].sum()
print "Expexted answer : 6422"
Explanation: This column is set to 1 if the loan grade is A and 0 otherwise.
Checkpoint: Make sure the following answers match up.
End of explanation
train_data, test_data = loans_data.random_split(.8, seed=1)
Explanation: Train-test split
We split the data into a train test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
num_of_positive = (labels_in_node == +1).sum()
# Count the number of -1's (risky loans)
num_of_negative = (labels_in_node == -1).sum()
# Return the number of mistakes that the majority classifier makes.
return num_of_negative if num_of_positive > num_of_negative else num_of_positive
Explanation: Decision tree implementation
In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections.
Function to count number of mistakes while predicting majority class
Recall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node.
Now, we will write a function that calculates the number of missclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree.
Note: Keep in mind that in order to compute the number of mistakes for a majority classifier, we only need the label (y values) of the data points in the node.
Steps to follow :
* Step 1: Calculate the number of safe loans and risky loans.
* Step 2: Since we are assuming majority class prediction, all the data points that are not in the majority class are considered mistakes.
* Step 3: Return the number of mistakes.
Now, let us write the function intermediate_node_num_mistakes which computes the number of misclassified examples of an intermediate node given the set of labels (y values) of the data points contained in the node. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.
End of explanation
# Test case 1
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 1 failed... try again!'
# Test case 2
example_labels = graphlab.SArray([-1, -1, 1, 1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 2 failed... try again!'
# Test case 3
example_labels = graphlab.SArray([-1, -1, -1, -1, -1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 3 failed... try again!'
Explanation: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
End of explanation
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should intialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_feature = feature
best_error = error
return best_feature # Return the best feature we found
Explanation: Function to pick best feature to split on
The function best_splitting_feature takes 3 arguments:
1. The data (SFrame of data which includes all of the feature columns and label column)
2. The features to consider for splits (a list of strings of column names to consider for splits)
3. The name of the target/label column (string)
The function will loop through the list of possible features, and consider splitting on each of them. It will calculate the classification error of each split and return the feature that had the smallest classification error when split on.
Recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Follow these steps:
* Step 1: Loop over each feature in the feature list
* Step 2: Within the loop, split the data into two groups: one group where all of the data has feature value 0 or False (we will call this the left split), and one group where all of the data has feature value 1 or True (we will call this the right split). Make sure the left split corresponds with 0 and the right split corresponds with 1 to ensure your implementation fits with our implementation of the tree building process.
* Step 3: Calculate the number of misclassified examples in both groups of data and use the above formula to compute the classification error.
* Step 4: If the computed error is smaller than the best error found so far, store this feature and its error.
This may seem like a lot, but we have provided pseudocode in the comments in order to help you implement the function correctly.
Note: Remember that since we are only dealing with binary features, we do not have to consider thresholds for real-valued features. This makes the implementation of this function much easier.
Fill in the places where you find ## YOUR CODE HERE. There are five places in this function for you to fill in.
End of explanation
if best_splitting_feature(train_data, features, 'safe_loans') == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
Explanation: To test your best_splitting_feature function, run the following code:
End of explanation
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = +1
else:
leaf['prediction'] = -1
# Return the leaf node
return leaf
Explanation: Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values:
{
'is_leaf' : True/False.
'prediction' : Prediction at the leaf node.
'left' : (dictionary corresponding to the left tree).
'right' : (dictionary corresponding to the right tree).
'splitting_feature' : The feature that this node splits on.
}
First, we will write a function that creates a leaf node given a set of target values. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in.
End of explanation
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1
# (Check if there are mistakes at current node.
# Recall you wrote a function intermediate_node_num_mistakes to compute this.)
if intermediate_node_num_mistakes(target_values) == 0: ## YOUR CODE HERE
print "Stopping condition 1 reached."
# If not mistakes at current node, make current node a leaf node
return create_leaf(target_values)
# Stopping condition 2 (check if there are remaining features to consider splitting on)
if remaining_features == []: ## YOUR CODE HERE
print "Stopping condition 2 reached."
# If there are no remaining features to consider, make current node a leaf node
return create_leaf(target_values)
# Additional stopping condition (limit tree depth)
if current_depth >= max_depth: ## YOUR CODE HERE
print "Reached maximum depth. Stopping for now."
# If the max tree depth has been reached, make current node a leaf node
return create_leaf(target_values)
# Find the best splitting feature (recall the function best_splitting_feature implemented above)
## YOUR CODE HERE
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target])
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target])
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions:
1. Stopping condition 1: All data points in a node are from the same class.
2. Stopping condition 2: No more features to split on.
3. Additional stopping condition: In addition to the above two stopping conditions covered in lecture, in this assignment we will also consider a stopping condition based on the max_depth of the tree. By not letting the tree grow too deep, we will save computational effort in the learning process.
Now, we will write down the skeleton of the learning algorithm. Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
End of explanation
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Explanation: Here is a recursive function to count the nodes in your tree:
End of explanation
small_data_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 3)
if count_nodes(small_data_decision_tree) == 13:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there : 13'
small_data_decision_tree
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
# Make sure to cap the depth at 6 by using max_depth = 6
my_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6)
my_decision_tree
Explanation: Build the tree!
Now that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree.
Warning: This code block may take 1-2 minutes to learn.
End of explanation
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
Explanation: Making predictions with a decision tree
As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True.
Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
End of explanation
test_data[0]
print 'Predicted class: %s ' % classify(my_decision_tree, test_data[0])
Explanation: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
End of explanation
classify(my_decision_tree, test_data[0], annotate=True)
classify(small_data_decision_tree, test_data[0], annotate=True)
Explanation: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class:
End of explanation
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
num_of_mistakes = (prediction != data[target]).sum()/float(len(data))
return num_of_mistakes
Explanation: Quiz question: What was the feature that my_decision_tree first split on while making the prediction for test_data[0]?
Quiz question: What was the first feature that lead to a right split of test_data[0]?
Quiz question: What was the last feature split on before reaching a leaf node for test_data[0]?
Evaluating your decision tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Now, write a function called evaluate_classification_error that takes in as input:
1. tree (as described above)
2. data (an SFrame)
This function should return a prediction (class label) for each row in data using the decision tree. Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in.
End of explanation
evaluate_classification_error(my_decision_tree, test_data)
Explanation: Now, let's use this function to evaluate the classification error on the test set.
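For context (a small aside, not required by the assignment), it helps to compare this against the error of always predicting the majority class of the training set:
# Hypothetical baseline: always predict the training set's majority class.
num_safe = (train_data['safe_loans'] == +1).sum()
num_risky = (train_data['safe_loans'] == -1).sum()
majority_class = +1 if num_safe >= num_risky else -1
baseline_error = (test_data['safe_loans'] != majority_class).sum() / float(len(test_data))
print "Majority-class baseline error: %s" % baseline_error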
End of explanation
def print_stump(tree, name = 'root'):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('.')
print ' %s' % name
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0] [{0} == 1] '.format(split_name)
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
print_stump(my_decision_tree)
Explanation: Quiz Question: Rounded to 2nd decimal point, what is the classification error of my_decision_tree on the test_data?
Printing out a decision stump
As discussed in the lecture, we can print out a single decision stump (printing out the entire tree is left as an exercise to the curious reader).
End of explanation
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
Explanation: Quiz Question: What is the feature that is used for the split at the root node?
Exploring the intermediate left subtree
The tree is a recursive dictionary, so we do have access to all the nodes! We can use
* my_decision_tree['left'] to go left
* my_decision_tree['right'] to go right
End of explanation
print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])
Explanation: Exploring the left subtree of the left subtree
End of explanation
print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature'])
Explanation: Quiz question: What is the path of the first 3 feature splits considered along the left-most branch of my_decision_tree?
Quiz question: What is the path of the first 3 feature splits considered along the right-most branch of my_decision_tree?
End of explanation |
9,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example
Step1: Set the workspace loglevel to not print anything
Step2: As the paper requires some lengthy calculation we have split it into parts and put the function in a separate notebook to be re-used in each part. The following code runs and loads the shared functions into this kernel
Step3: The main function runs the simulation for a given network size 'n' and number of points for the relative diffusivity curve. Setting 'npts' to 1 will return the single phase diffusivity. the network size is doubled in the z direction for percolation but the diffusion calculation is effectively only calculated on the middle square section of length 'n'. This is achieved by copying the saturation distribution from the larger network to a smaller one.
We can inspect the source in this notebook by running a code cell with the following | Python Code:
import openpnm as op
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
import openpnm.models.geometry as gm
import openpnm.topotools as tt
%matplotlib inline
np.random.seed(10)
Explanation: Example: Regenerating Data from
R. Wu et al. / Elec Acta 54 25 (2010) 7394–7403
Import the modules
End of explanation
ws = op.Workspace()
ws.settings["loglevel"] = 50
Explanation: Set the workspace loglevel to not print anything
End of explanation
%run shared_funcs.ipynb
Explanation: As the paper requires some lengthy calculation we have split it into parts and put the function in a separate notebook to be re-used in each part. The following code runs and loads the shared functions into this kernel
End of explanation
x_values, y_values = simulation(n=8)
plt.figure()
plt.plot(x_values, y_values, 'ro')
plt.title('normalized diffusivity versus saturation')
plt.xlabel('saturation')
plt.ylabel('normalized diffusivity')
plt.show()
Explanation: The main function runs the simulation for a given network size 'n' and number of points for the relative diffusivity curve. Setting 'npts' to 1 will return the single phase diffusivity. the network size is doubled in the z direction for percolation but the diffusion calculation is effectively only calculated on the middle square section of length 'n'. This is achieved by copying the saturation distribution from the larger network to a smaller one.
We can inspect the source in this notebook by running a code cell with the following: simulation??
Run the simulation once for a network of size 8 x 8 x 8
End of explanation |
9,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Publically-available .csv for reproducibility
I generate two files currently named viomet-snapshot-project-df.csv and viomet-2012-snapshot-project-df.csv, which are the September to Novermber dataframes for 2016 and 2012, respectively. These contain all rows that have been identified as metaphor. These were built using the following commands in Python
Step1: Fitting excited state models to each network and all networks
Given the project dataframe, the desired date range, and the corresponding IatvCorpus name (xxx need to add downloadable data to read from like project_df xxx), the excited state frequency change model can be calculated for every cable news source, and for the frequency of the sources taken as a whole.
By inspecting fit_all_networks, we can dig deeper into how the model fitting works. We'll return to this. For now, notice that fit_networks is a dictionary with three keys, one for each of the cable networks we are studying
Step2: Visualize model fits overlaid on timeseries data
Once we find all the parameters of the models (partition dates and ground/excited state means or "levels") we can plot the model and the data together to compare.
Step3: Trump, Clinton as Subject and Object, and vice-versa
Step4: Violent phrase activating source domain
In this calculation, we need the partition dates from all models that we calculated above, stored in partition_infos. We calculate the daily average of the number of times a given violent word was used to activate the source domain. The average daily usage increases disproportionately with attack as the violent word, at least on Fox News. On the other networks, there is a drop in usage of the next most common violent words used, hit, and beat. These appear as tables in the paper. We'll just print out the tables here in the notebook.
Step5: September to November 2012 and $Q$
Two to-dos are coming together below. One is to generate more intuitive and powerful observables. These
are outlined and calculated below. The other is to analyze the 2012 data. I'll do both at the same
time below, saving plots for the end.
Observables
Step6: From Google Ngram Viewer, we get that the frequency of attack, hit, and beat are .0067, .0062, and .0034 for their American English corpus in 2008. We can use this to compare frequencies of metaphor with attack, hit, and beat. We could also use the total instances identified through search in our corpus.
All this is well and good, now on to calculating these excitability quotients for 2012. | Python Code:
metaphors_url = 'http://metacorps.io/static/viomet-snapshot-project-df.csv'
project_df = get_project_data_frame(metaphors_url)
print(project_df.columns)
Explanation: Publically-available .csv for reproducibility
I generate two files currently named viomet-snapshot-project-df.csv and viomet-2012-snapshot-project-df.csv, which are the September to Novermber dataframes for 2016 and 2012, respectively. These contain all rows that have been identified as metaphor. These were built using the following commands in Python:
python
from projects.common import get_project_data_frame
df = get_project_data_frame('Viomet Sep-Nov 2016')
df.to_csv('/Users/mt/Desktop/viomet-snapshot-project-df.csv',
header=True, index=False, na_rep=None)
I then uploaded it to the Metacorps server using scp.
For completeness I will soon upload the full dataset in .csv form, which will include the potential instances that were either not metaphor or not about politics. This and the other .csv will be made available on a data publishing portal, and mirrored on the Metacorps server.
End of explanation
from viomet_9_10_17 import fit_all_networks
import pandas as pd
date_range = pd.date_range('2016-9-1', '2016-11-30', freq='D')
# uncomment below to run model fits; takes tens of seconds at least
fit_networks = fit_all_networks(project_df, date_range=date_range, iatv_corpus_name='Viomet Sep-Nov 2016')
print(fit_networks)
# set by_network=False to get the fit for all networks taken together
fit_sum = fit_all_networks(project_df, by_network=False, date_range=date_range, iatv_corpus_name='Viomet Sep-Nov 2016')
print(fit_sum)
Explanation: Fitting excited state models to each network and all networks
Given the project dataframe, the desired date range, and the corresponding IatvCorpus name (xxx need to add downloadable data to read from like project_df xxx), the excited state frequency change model can be calculated for every cable news source, and for the frequency of the sources taken as a whole.
By inspecting fit_all_networks, we can dig deeper into how the model fitting works. We'll return to this. For now, notice that fit_networks is a dictionary with three keys, one for each of the cable networks we are studying: 'MSNBCW', 'CNNW', and 'FOXNEWSW'. The W stands for west, since the western version of these were the versions recorded in San Francisco. This information can be confirmed by examining the TVNA metadata blobs for each show.
The resulting data, printed to the console below, is presented in tables at the beginning of the Results section.
End of explanation
from viomet_9_10_17 import by_network_frequency_figure
partition_infos = {network: fit_networks[network][0] for network in ['MSNBCW', 'CNNW', 'FOXNEWSW']}
by_network_frequency_figure(
project_df, date_range=date_range,
iatv_corpus_name='Viomet Sep-Nov 2016',
partition_infos=partition_infos,
save_path='Figures/model_fits.pdf'
)
from IPython.display import IFrame
IFrame("Figures/model_fits.pdf", width=600, height=450)
Explanation: Visualize model fits overlaid on timeseries data
Once we find all the parameters of the models (partition dates and ground/excited state means or "levels") we can plot the model and the data together to compare.
End of explanation
soa_dict = subject_object_analysis(
project_df, plot=True, save_dir=SAVE_DIR, font_scale=1.5
)
# check that the figures were saved to disk
os.listdir(SAVE_DIR)
Explanation: Trump, Clinton as Subject and Object, and vice-versa
End of explanation
from viomet_9_10_17 import by_facet_word
excited, ground = by_facet_word(
project_df, partition_infos, facet_words=['attack', 'beat', 'hit']
)
from IPython.display import display
print('Excited:')
display(excited)
print('\nGround:')
display(ground)
print('\nExcited - Ground:')
display(excited - ground)
Explanation: Violent phrase activating source domain
In this calculation, we need the partition dates from all models that we calculated above, stored in partition_infos. We calculate the daily average of the number of times a given violent word was used to activate the source domain. The average daily usage increases disproportionately with attack as the violent word, at least on Fox News. On the other networks, there is a drop in usage of the next most common violent words used, hit, and beat. These appear as tables in the paper. We'll just print out the tables here in the notebook.
End of explanation
IFrame('https://books.google.com/ngrams/graph?content=attack%2Chit%2Cbeat&year_start=2000&year_end=2016&corpus=17&smoothing=3&share=&direct_url=t1%3B%2Cattack%3B%2Cc0%3B.t1%3B%2Chit%3B%2Cc0%3B.t1%3B%2Cbeat%3B%2Cc0',
width=650, height=400)
Explanation: September to November 2012 and $Q$
Two to-dos are coming together below. One is to generate more intuitive and powerful observables. These
are outlined and calculated below. The other is to analyze the 2012 data. I'll do both at the same
time below, saving plots for the end.
Observables:
We should avoid terse variables when possible for NHB. We want to calculate in one table:
Excited Start Date
Excited End Date
Ground Frequency
Excited Frequency
Change in Frequency
In another table:
Sum total of ground
Sum total of excited
Excitability quotient = Sum of ground / sum of excited
$Q_\alpha$, where $\alpha$ indicates the source domain cross-section of interest. Specifically, we will calculate excitability quotients for cross sections of the specific violent word in the metaphorical construction, so $\alpha \in {\text{attack}, \text{hit}, \text{beat}}$, the three most-common words used for metaphorical violence.
We will also look at sums of cross-sections of who is the subject of metaphorical violence, the one who does the metaphorical violence, and the object of the metaphorical violence, or the victim of the metaphorical violence. As for individuals who could be the subject or object of metaphorical violence, we consider the two Republican and Democratic presidential candidates Mitt Romney and Barack Obama in 2012 and Donald Trump and Hillary Clinton in 2016. We will consider each of them as the subject or object, paired with all other objects/subjects except their rival, and then we'll also consider each candidate as the subject/object with their rival the object/subject. Then for 2016 we would have $\alpha \in {(\text{Trump}, \text{All}), (\text{Clinton}, \text{All}), (\text{Trump}, \text{Clinton}), (\text{Clinton}, \text{Trump}), (\text{All}, \text{Trump}), (\text{All}, \text{Clinton})}$. We will calculate total ground state usage and the excitability quotient for each subject/object pair, for each cable news station.
End of explanation
from project.common import get_project_data_frame
metaphors_url = 'http://metacorps.io/static/data/viomet-2012-snapshot-project-df.csv'
project_df = get_project_data_frame(metaphors_url)
print(project_df.columns)
from viomet_9_10_17 import fit_all_networks
import pandas as pd
IATV_CORPUS_NAME = 'Viomet Sep-Nov 2012'
date_range = pd.date_range('2012-9-1', '2012-11-30', freq='D')
# uncomment below to run model fits; takes tens of seconds at least
fit_networks = fit_all_networks(project_df, date_range=date_range,
iatv_corpus_name=IATV_CORPUS_NAME)
from viomet_9_10_17 import by_network_frequency_figure
partition_infos = {network: fit_networks[network][0]
for network in ['MSNBCW', 'CNNW', 'FOXNEWSW']}
by_network_frequency_figure(
project_df, date_range=date_range,
iatv_corpus_name=IATV_CORPUS_NAME,
partition_infos=partition_infos,
save_path='Figures/model_fits_2012.pdf'
)
from IPython.display import IFrame
IFrame("Figures/model_fits_2012.pdf", width=600, height=450)
soa_dict = subject_object_analysis(
project_df, subj_obj=[
('Romney', 'Obama'),
('Obama', 'Romney'),
('Romney', None),
('Obama', None),
(None, 'Romney'),
(None, 'Obama')
],
date_range=date_range,
plot=True, save_dir=SAVE_DIR, font_scale=1.5
)
from viomet_9_10_17 import by_facet_word
excited, ground = by_facet_word(
project_df, partition_infos, facet_words=['attack', 'beat', 'hit']
)
from IPython.display import display
print('Excited:')
display(excited)
print('\nGround:')
display(ground)
print('\nExcited - Ground:')
display(excited - ground)
Explanation: From Google Ngram Viewer, we get that the frequency of attack, hit, and beat are .0067, .0062, and .0034 for their American English corpus in 2008. We can use this to compare frequencies of metaphor with attack, hit, and beat. We could also use the total instances identified through search in our corpus.
All this is well and good, now on to calculating these excitability quotients for 2012.
End of explanation |
9,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bio-IT Hackathon
Step1: FAIRification
We submitted the csv file to the FAIRifier
What did we do?
The CLNACC field, which is RCV#, was used to make a new column for the persistent ID like https | Python Code:
import json
import re
import os
import urllib.request as request
import gzip
import argparse
import shutil
from collections import OrderedDict
import os
import re
filePath = 'clinvar.vcf';
outputfile = open('clinvar.csv','w');
################################################
# Helper Methods #
################################################
def extractInfoString( info ):
result = []
clinallele_index = " ".join( clinallele_re.search( info ).group(1).split(",") )
diseases = " ".join( disease_re.search(info).group(1).split(",") )
clinsigs = " ".join( clinsig_re.search(info).group(1).split(',') )
clinrevstats = " ".join( clinrevstat_re.search(info).group(1).split(",") )
clinaccs = " ".join( clinacc_re.search(info).group(1).split(",") )
gene_group = gene_re.search(info)
if gene_group :
gene = "".join( gene_group.group(1) )
else:
gene = ""
result.append( clinallele_index.replace('\n','') )
result.append( diseases.replace('\n','') )
result.append( clinsigs.replace('\n','') )
result.append( clinrevstats.replace('\n','') )
result.append( clinaccs.replace('\n','') )
result.append( gene );
return result
def listToCSVRow( dataList ):
row = ""
for item in dataList:
item = item.replace(',','')
row += ',' + item
return row[1:]
################################################
# Fields in Info We Need #
################################################
clinallele_re = re.compile("CLNALLE=(-?\d+)")
disease_re = re.compile("CLNDBN=([^;]*)")
clinsig_re = re.compile("CLNSIG=([^;]*)")
clinrevstat_re = re.compile("CLNREVSTAT=([^;]*)")
clinacc_re = re.compile("CLNACC=([^;]*)")
gene_re = re.compile("GENEINFO=(\w+)")
fixed_tittle = "CHROM,POS,ID,REF,ALT,QUAL,FILTER"
info_tittle = "CLNALLE,CLNDBN,CLNSIG,CLNREVSTAT,CLNACC,GENEINFO"
full_tittle = fixed_tittle + ',' + info_tittle;
outputfile.write(full_tittle + os.linesep)
################################################
# Start Parsing #
################################################
with open( filePath ) as f:
for line in f:
if line.startswith("#",0, 2):
continue;
fieldList = line.split('\t')
fixedList = fieldList[0:7];
infoString = fieldList[7];
infoList = extractInfoString( infoString )
row = listToCSVRow( fixedList + infoList )
        outputfile.write( row + os.linesep)

outputfile.close()  # close the output file so all rows are flushed to disk
Explanation: Bio-IT Hackathon: FAIR ClinVar
The ClinVar database (https://www.ncbi.nlm.nih.gov/clinvar/) is a public repository of submissions from researchers on the genetic variants known in the human genome, and their associated diseases. The whole database can be downloaded as one gzip file in several formats, including vcf and xml. While deeply informative, this database is currently best used only on the NCBI website, and the relationships between meta-data are unclear. The database is also continually updated (some portions daily), and the new database files are updated monthly. Therefore, we also wanted clear documentation on what we did and why. This way the method could be repeated with the new version of the database, and strengthen the argument for changing how the database is generated/released.
Goals:
Assess the FAIR qualities of the NCBI ClinVAR database according to the 15 FAIR principles
Wrangle the database, and process using the FAIRifier (https://bioit.fair-dtls.surf-hosted.nl/fairifier/)
Correct deficiencies in the FAIRness of the database
Create a relational scheme for the subjects (variables) in the file
Pre-processing
We found that the main vcf file contains both the whole database (over 200,000 entries and 58 columns) and its metadata
Since the metadata is incorporated into the file, we needed to trim the file to a proof of concept csv for FAIRizing,
while including the meta-data names as header names in the csv file.
Our initial FAIR assessment:
No Globally unique identifiers
Metadata and data in same file, but this is a feature of the data
No metadata access when data is no longer available
Metadata doesn't use a broadly accessible language (assuming RDF was what was required)
Metadata using FAIR vocabularies - I don't think so.
Metadata doesn't have a complete versioning history but has some form of detailed provenance.
We question whether the "metadata is richly described with a plurality of accurate and relevant attributes."
CSV proof-of-concept file made using python
End of explanation
from io import StringIO  # Python 3: StringIO lives in the io module
from rdflib import Graph, URIRef
contents = '''\
subject1\tpredicate1\tsubject2
subject2\tpredicate2\tobject2'''
tabfile = StringIO(contents)
graph = Graph()  # use the Graph class imported above
for line in tabfile:
triple = line.split() # triple is now a list of 3 strings
    triple = tuple(URIRef(t) for t in triple) # wrap each term in URIRef; graph.add() expects a 3-tuple
graph.add(triple) # and add to the graph
print(graph.serialize(format='nt'))
Explanation: FAIRification
We submitted the csv file to the FAIRifier
What did we do?
The CLNACC field, which is RCV#, was used to make a new column for the persistent ID like https://www.ncbi.nlm.nih.gov/clinvar/RCV000148988/
Relational scheme
30,000 ft view
Using common terms
Using the metadata labels
Create RDF file
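One way the "Create RDF file" step could be realized (a sketch, not from the original notebook; the file name is made up): rdflib can serialize the graph straight to disk.
# Hypothetical: write the toy graph built above to a Turtle file.
graph.serialize(destination='clinvar_triples.ttl', format='turtle')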
End of explanation |
9,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 4. Eigenvalues and eigenvectors
If the stress tensor at a point $P$, in the $X,Y,Z$ reference frame, is defined by
Step1: Solution
Step2: Solving via the characteristic polynomial
Step3: Solving with Python libraries: using linalg.eigh we can find the principal values (la) and principal directions (n) simultaneously
Step4: In this way, let us write the tensor associated with the principal directions
Step5: The vectors $i'$, $j'$ and $k'$ are given by
Step6: Let us verify that the invariants hold for the tensor associated with the principal directions
Step7: Finally, keep in mind that the principal directions are nothing other than the matrix of direction cosines that transforms the original tensor into the tensor in principal directions through the transformation equation | Python Code:
from IPython.display import Image,Latex
#Image()
Image(filename='FIGURES/Sorigen.png',width=400)
Explanation: Example 4. Eigenvalues and eigenvectors
If the stress tensor at a point $P$, in the $X,Y,Z$ reference frame, is defined by:
$$\begin{align}
\
&\sigma_{xx} = 200\dfrac{kgf}{cm^2}; \;\;\; \sigma_{yy} =0\dfrac{kgf}{cm^2}; \;\;\; \sigma_{zz} = 0\dfrac{kgf}{cm^2} \\
&\tau_{xy} = \tau_{yx} =100\dfrac{kgf}{cm^2}, \;\;\; \tau_{xz} = \tau_{zx} =300\dfrac{kgf}{cm^2}; \;\;\;\tau_{yz} = \tau_{zy} = 0 \dfrac{kgf}{cm^2}\\
\end{align}$$
Determine the principal values and directions:
End of explanation
import numpy as np
from scipy import linalg
S = np.array([
[200,100,300.],
[100,0,0],
[300,0,0]])
IS = S[0,0]+S[1,1]+S[2,2]
IIS = S[0,0]*S[1,1]+S[1,1]*S[2,2]+S[0,0]*S[2,2]-(S[0,1]**2)-(S[0,2]**2)-(S[1,2]**2)
IIIS = S[0,0]*S[1,1]*S[2,2]-S[0,0]*(S[1,2]**2)-S[1,1]*(S[0,2]**2)-S[2,2]*(S[0,1]**2)+2*S[1,2]*S[0,2]*S[0,1]
print
print 'Invariantes:', IS,IIS,IIIS
print
Explanation: Solution:
Let us first find the principal values $(\lambda)$ from the solution of the characteristic polynomial:
${\lambda ^3} - {I_\sigma}{\lambda ^2} + {II_\sigma}\lambda - {III_\sigma} = 0$
where ${I_\sigma}$, ${II_\sigma}$ and ${III_\sigma}$ are the first, second and third invariants, respectively, given by:
${I_\sigma} = \sigma_{xx} + \sigma_{yy} + \sigma_{zz}$
${II_\sigma} = \sigma_{xx}\sigma_{yy} + \sigma_{xx}\sigma_{zz} + \sigma_{zz}\sigma_{yy} - \tau_{xy}^2 - \tau_{xz}^2 - \tau_{yz}^2$
${III_\sigma} = \sigma_{xx}\sigma_{yy}\sigma_{zz} + 2\tau_{xy}\tau_{xz}\tau_{yz} - \sigma_{xx}\tau_{yz}^2 - \sigma_{yy}\tau_{xz}^2 - \sigma_{zz}\tau_{xy}^2$
End of explanation
coeff=[1.0,-IS,IIS,-IIIS]
ps=np.roots(coeff)
print
print "Esfuerzos principales:", np.sort(np.round(ps,1))
print
Explanation: Solving via the characteristic polynomial:
End of explanation
la, n= linalg.eigh(S)
la = la.real
print
print "Esfuerzos principales:", np.round(la,1)
print
#print S
print
print 'n=', np.round(n,2)
print
Explanation: Solving with Python libraries: using linalg.eigh we can find the principal values (la) and principal directions (n) simultaneously
End of explanation
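As an extra check (an addition to the original), each eigenpair returned by linalg.eigh should satisfy the eigenvalue equation $S\,n = \lambda\,n$:
# Verify the eigen-equation for the first principal direction
print(np.allclose(np.dot(S, n[:, 0]), la[0] * n[:, 0]))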
print
Sp = np.array([
[la[0],0,0],
[0,la[1],0],
[0,0,la[2]]])
print 'Sp =',np.round(Sp,1)
print
Image(filename='FIGURES/Sprinc.png',width=400)
Explanation: With these, let us write the tensor associated with the principal directions:
End of explanation
print "i'=", np.round(n[:,0],2)
print "j'=", np.round(n[:,1],2)
print "k'=", np.round(n[:,2],2)
print
Explanation: The vectors $i'$, $j'$ and $k'$ are given by:
End of explanation
IS = Sp[0,0]+Sp[1,1]+Sp[2,2]
IIS =Sp[0,0]*Sp[1,1]+Sp[1,1]*Sp[2,2]+Sp[0,0]*Sp[2,2]-(Sp[0,1]**2)-(Sp[0,2]**2)-(Sp[1,2]**2)
IIIS =Sp[0,0]*Sp[1,1]*Sp[2,2]-Sp[0,0]*(Sp[1,2]**2)-Sp[1,1]*(Sp[0,2]**2)-Sp[2,2]*(Sp[0,1]**2)+2*Sp[1,2]*Sp[0,2]*Sp[0,1]
print
print 'Invariantes:', IS,IIS,IIIS
print
Explanation: Let us verify that the invariants hold for the tensor associated with the principal directions:
End of explanation
C = n.T
Sp2 = np.dot(np.dot(C,S),C.T)
print
print 'Sp =', np.round(Sp2,1)
from IPython.core.display import HTML
def css_styling():
styles = open('./custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: Finally, keep in mind that the principal directions are nothing other than the matrix of direction cosines that transforms the original tensor into the tensor in principal directions through the transformation equation:
\begin{align}
&[\sigma']=[C][\sigma][C]^T\\
\end{align}
Taking into account that n is given as column vectors, the matrix of direction cosines is given by:
\begin{align}
&[C] = [n]^T
\end{align}
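As a quick sanity check (an addition, not part of the original notebook), $[C]$ should be orthogonal, i.e. $[C][C]^T \approx [I]$:
# The direction-cosine matrix times its transpose should be the identity
print(np.round(np.dot(C, C.T), 2))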
End of explanation |
9,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note
Step11: Contrary to expectations, increasing the number of hidden nodes has no drastic impact on either the test loss or the validation loss.
Step12: How well does the model predict the data?
The model predicts the data quite well with 2 hidden nodes with a MSE Validation Loss ~0.395 and
Test loss of ~0.24 for 2 hidden nodes.
I infer this to mean that the model is generalizable to completely unseen data in this case.
Where does it fail?
Step13: Where does the model fail?
The above graph clearly illustrates that the mean squared error is highest around holidays. These include days such as Christmas and New Year's Eve (Dec 31). It is on such holidays that the model fails.
Why does it fail where it does?
Step14: To check whether there is a relation between the model's failures and actual ridership, let us look at the correlation between the mean squared error and the original values. Visually, this appears unlikely, and the Pearson correlation coefficient is small enough to safely ignore.
I don't completely understand why the model fails where it fails, but I suspect having a way to factor in seasonality from previous years would improve this.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sys
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
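For reference, a small helper like the one below (an illustrative addition, not part of the project template) maps standardized values back to the original units; the plotting code later in this notebook does the same thing inline:
def unscale(values, feature='cnt'):
    # Undo the standardization using the stored mean and standard deviation
    mean, std = scaled_features[feature]
    return values * std + mean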
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
def sigmoid(x):
return 1 / (1 + np.exp(-x))
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
'''Set number of nodes in input, hidden and output layers.'''
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
# print("input_hidden.shape",self.weights_input_to_hidden.shape)
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
self.learning_rate = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = sigmoid
        self.del_w_hidden_output = np.zeros(self.weights_hidden_to_output.shape)
        self.del_w_input_hidden = np.zeros(self.weights_input_to_hidden.shape)
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
# print("weights_input_hidden.shape", self.weights_input_to_hidden.shape, "inputs.shape",inputs.shape)
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
# print("hidden_inputs.shape:", hidden_inputs.shape)
# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# print("hidden_outputs.shape:", hidden_outputs.shape)
# signals from hidden layer
# TODO: Output layer
# print("hidden_outputs.shape:",hidden_outputs.shape, "weights_hidden_output.shape", self.weights_hidden_to_output.shape)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
# signals into final output layer
final_outputs = final_inputs
# signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
# print("targets.shape:",targets.shape, "final_outputs.shape", final_outputs.shape)
output_errors = targets - final_outputs
# Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
# print("output_errors.shape:", output_errors.shape, "weights_hidden_to_output.shape", self.weights_hidden_to_output.shape)
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output).T
# print("hidden_errors.shape", hidden_errors.shape)
# errors propagated to the hidden layer
hidden_grad = hidden_errors * hidden_outputs * (1 - hidden_outputs)
# print("hidden_grad.shape", hidden_grad.shape)
# hidden layer gradients
# TODO: Update the weights
self.weights_hidden_to_output += self.lr * output_errors * hidden_outputs.T
# update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * hidden_grad * inputs.T
# update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
# signals into final output layer
final_outputs = final_inputs
# signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
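Related to the hint above about activation derivatives: the sigmoid derivative can be written in terms of its output, s * (1 - s), while the output activation f(x) = x has derivative 1. A small numerical check (an added illustration; the helper name sigmoid_prime is my own, not part of the template):
def sigmoid_prime(x):
    # Derivative of the sigmoid expressed via its own output
    s = 1 / (1 + np.exp(-x))
    return s * (1 - s)

x0, h = 0.5, 1e-6
numeric = (1/(1 + np.exp(-(x0 + h))) - 1/(1 + np.exp(-(x0 - h)))) / (2 * h)
print(np.isclose(sigmoid_prime(x0), numeric))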
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.01
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
def neural_network_training(N_i=N_i, hidden_nodes=hidden_nodes, output_nodes=output_nodes, learning_rate=learning_rate):
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
        for record, target in zip(train_features.loc[batch].values,
                                  train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
return network, losses
network, losses = neural_network_training()
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
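One practical way to explore these hyperparameters (a sketch added for illustration; the specific values tried below are arbitrary, and the notebook's own sweep over hidden nodes appears further down) is a small, admittedly slow, grid search that reuses the training helper defined above:
sweep = {}
for lr in (0.01, 0.1):
    for n_hidden in (2, 8):
        net, hist = neural_network_training(hidden_nodes=n_hidden, learning_rate=lr)
        sweep[(lr, n_hidden)] = min(hist['validation'])
print(sweep)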
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=90)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
print("Test Loss:", MSE(network.run(test_features)[0], test_targets['cnt'].values))
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
How well does the model predict the data?
The model predicts the data quite well with 2 hidden nodes with a MSE Validation Loss ~0.395 and Test loss of ~0.24 for 2 hidden nodes. I infer this to mean that the model is generalizable to completely unseen data in this case.
End of explanation
losses_nodes = {}
for i in range(2, 7):
hidden_nodes = i
print("For %1.0f hidden_nodes" % i)
network, training_validation_losses = neural_network_training(hidden_nodes=i)
test_loss = MSE(network.run(test_features)[0], test_targets['cnt'].values)
print("\nTest Loss: %f\n" % test_loss)
losses_nodes[i] = [test_loss, training_validation_losses]
test_losses = [element[0] for element in losses_nodes.values()]
validation_losses = [min(element[1]['validation']) for element in losses_nodes.values()]
train_losses = [min(element[1]['train']) for element in losses_nodes.values()]
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(test_losses, label='Test')
ax.plot(validation_losses, label='Validation')
ax.plot(train_losses, label = 'Train')
ax.set_xlim(right=len(test_losses))
ax.legend()
ax.set_xticks(np.arange(len(losses_nodes.keys())))
_ = ax.set_xticklabels(losses_nodes.keys(), rotation=90)
Explanation: Contrary to expectations, increasing the number of hidden nodes has no drastic impact on either the test loss or the validation loss.
End of explanation
def squared_error(y, Y):
return (y - Y)**2
SE_Test = squared_error(network.run(test_features)[0], test_targets['cnt'].values)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
ax.plot(SE_Test, label='Squared Errors (on Test Data)')
# ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(SE_Test))
ax.legend()
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=90)
Explanation: How well does the model predict the data?
The model predicts the data quite well with 2 hidden nodes with a MSE Validation Loss ~0.395 and
Test loss of ~0.24 for 2 hidden nodes.
I infer this to mean that the model is generalizable to completely unseen data in this case.
Where does it fail?
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(SE_Test*std, label='Squared Errors (on Test Data)')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
# ax.plot(predictions[0], label='Predictions')
# ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(SE_Test))
ax.legend()
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=90)
print(np.corrcoef(test_targets['cnt'].values*std+mean,SE_Test))
Explanation: Where does the model fail?
The above graph clearly illustrates that the mean squared error is highest around holidays. These include days such as Christmas and New Year's Eve (Dec 31). It is on such holidays that the model fails.
Why does it fail where it does?
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def runTest(self):
pass
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: To check whether there is a relation between the model's failures and actual ridership, let us look at the correlation between the mean squared error and the original values. Visually, this appears unlikely, and the Pearson correlation coefficient is small enough to safely ignore.
I don't completely understand why the model fails where it fails, but I suspect having a way to factor in seasonality from previous years would improve this.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
9,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython Logbook Manager
This IPython notebook can be used to manage the Logbook via a collection of bash scripts that handle the listing, creating, and backing up of the logbook entries. Each subsection title corresponds to the function performed by the bash script contained within. The subsection is divided into two parts
Step1: Create Logbook Entry
When creating a new logbook entry, don't forget to update the {{indexLink}} accordingly!
To create a logbook entry, please set the following variables appropriately
Step2: List Logbook Entries
To list the available logbook entries, please set the following variables appropriately
Step3: Backup Logbook
To backup the logbook to the directory of your choice, please set the following variables appropriately | Python Code:
import ConfigParser
CP = ConfigParser.ConfigParser()
CP.read("../.config")
head = CP.get('IPyLogbook-Config','head')
url = CP.get('IPyLogbook-Config','url')
port = CP.get('IPyLogbook-Config','ssh-port')
headLink="[Logbook HEAD]("+url+":"+port+"/tree)"
extensionsLink="[Logbook Extensions]("+url+":"+port+"/notebooks/IPyLogbook/mgmt/IPyLogbookExtensions.ipynb)"
indexLink="[Logbook Index]("+url+":"+port+"/notebooks/IPyLogbook/IPyLogbookIndex.ipynb)"
usersguideLink="[Logbook User Guide]("+url+":"+port+"/notebooks/IPyLogbook/doc/IPyLogbookUsersGuide.ipynb)"
Explanation: IPython Logbook Manager
This IPython notebook can be used to manage the Logbook via a collection of bash scripts that handle the listing, creating, and backing up of the logbook entries. Each subsection title corresponds to the function performed by the bash script contained within. The subsection is divided into two parts: the first contains variables that must be set by the user for the bash script; the second contains the script itself. The user should execute an action only by individually executing those cells that pertain to the action he/she wishes to enact.
Two layers of protection are implemented to prevent accidental execution of a script. The first is that - by default - the bash script cells are marked 'read only' using the IPython-notebook-extension 'read-only.js': the user must click the little 'lock' icon at the upper-right. The second is that the user must set a 'Script_Execute' flag to 'Yes'. Despite these protections, it is strongly recommended that the user not execute a Cell $\rightarrow$ Run All command while working within this notebook!
End of explanation
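The cell above assumes a ../.config file with an IPyLogbook-Config section; a minimal sketch of such a file (all values below are placeholders, not taken from any real deployment) could be generated as follows:
# Hypothetical example of the expected .config contents; adjust the values to your setup
example = ConfigParser.ConfigParser()
example.add_section('IPyLogbook-Config')
example.set('IPyLogbook-Config', 'head', '/home/user/logbook')
example.set('IPyLogbook-Config', 'url', 'http://localhost')
example.set('IPyLogbook-Config', 'ssh-port', '8888')
with open('example.config', 'w') as f:
    example.write(f)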
CreateScript_Dir=head+"/experiment/20140101" # Abs. path to directory where Logbook entry will be created
CreateScript_Name="20140101" # Name of the Logbook entry (will be used in the name of the notebook file)
CreateScript_Execute="No" # "Yes" = run script; "No" = do not run script
CreateScript_Overwrite="No" # "Yes" = overwrite preexisting log entry; "No" = do NOT overwrite preexisting log entry
#-------------------- The user should not need to set anything below this line --------------------#
%%bash -s "$CreateScript_Dir" "$CreateScript_Name" "$CreateScript_Execute" "$CreateScript_Overwrite" "$head"
if [ "$#" -ne 5 ]; then
echo -e "\nError: This script requires four arguments that should be set by the user!"
echo -e "arg1 : Absolute path to where new Logbook entry will be created
echo -e "arg2 : Name of the new Logbook entry
echo -e "arg3 : Yes/No to run the script"
echo -e "arg4 : Yes/No to overwrite an existing Logbook entry with the name specified"
echo -e "arg5 : Full path to the HEAD directory\n"
exit
fi
# Ensure that the user has intentionally flagged this script to run
if [ "$3" == "No" ]; then
echo -e "\nThis script is not flagged for execution. Set 'CreateScript_Execute' flag to 'Yes' to execute"
exit
fi
# Set full path to the directory containing the new entry
EntryDir=$1
# Set the new entry's file name
EntryName="IPyLogbookEntry-"$2".ipynb"
# Set the full path to the new entry file
Entry=$EntryDir/$EntryName
# If the directory does NOT exist then create it
if [ ! -d $EntryDir ]; then
mkdir -p $EntryDir
# If the directory DOES exist then...
else
# ... overwrite the preexisting Logbook entry if the user has granted permission to do so
if [ "$4" == "No" ]; then
echo -e "\nWARNING : The Logbook entry '$Entry' already exists!"
echo -e " You may set the above 'CreateScript_Overwrite' parameter to 'Yes' to overwrite this entry,"
echo -e " but you should exercise extreme caution when using this option!\n"
exit
fi
fi
# Set the Logbook entry template to be copied to the new Logbook entry and copy it
EntryTemplate="IPyLogbookEntryTemplate.ipynb"
cp $EntryTemplate $Entry
# Place a symbolic link to the IPyLogbook config file so that the new Logbook entry will have access to it
if [ "$4" == "Yes" ]; then
rm $EntryDir/.config -f
fi
ln -s $5/IPyLogbook/.config $EntryDir/.config
echo -e "\nA new Logbook entry was successfully created at:\n"
echo -e " $Entry\n"
Explanation: Create Logbook Entry
When creating a new logbook entry, don't forget to update the {{indexLink}} accordingly!
To create a logbook entry, please set the following variables appropriately:
End of explanation
ListScript_Execute="No" # "Yes" = run script; "No" = do not run script
#-------------------- The user should not need to set anything below this line --------------------#
%%bash -s "$ListScript_Execute" "$head"
if [ "$#" -ne 2 ]; then
echo -e "\nError: This script requires four arguments that should be set by the user!"
echo -e "arg1 : Yes/No to run the script"
echo -e "arg2 : Full path to the HEAD directory\n"
exit
fi
# Ensure that the user has intentionally flagged this script to run
if [ "$1" == "No" ]; then
echo -e "\nThis script is not flagged for execution. Set 'ListScript_Execute' flag to 'Yes' to execute".
exit
fi
find $2 -name "IPyLogbookEntry-*.ipynb"
Explanation: List Logbook Entries
To list the available logbook entries, please set the following variables appropriately:
End of explanation
BackupScript_Directory="/home/hartwig/logbook/backup" # Full path to backup directory
BackupScript_Execute="No" # "Yes" = run script; "No" = do not run script
BackupScript_Overwrite="No" # "Yes" = overwrite preexisting backup; "No" = do NOT overwrite preexisting backup
#-------------------- The user should not need to set anything below this line --------------------#
%%bash -s "$BackupScript_Directory" "$BackupScript_Execute" "$BackupScript_Overwrite" "$head"
# Ensure correct number or arguments are passed; provide helpful output
if [ "$#" -ne 4 ]; then
echo -e "\nError: This script requires four arguments that should be set by the user!"
echo -e "arg1 : Absolute path to where the Logbook will be backed up"
echo -e "arg2 : Yes/No to run the script"
echo -e "arg3 : Yes/No to overwrite an existing Logbook entry with the name specified"
echo -e "arg4 : Full path to the HEAD directory\n"
exit
fi
# Put cmd line args into reasonably named variables
Directory=$1
Execute=$2
Overwrite=$3
Head=$4
# Ensure that the user has intentionally flagged this script to run
if [ "$Execute" == "No" ]; then
echo -e "\nThis script is not flagged for execution. Set 'ListScript_Execute' flag to 'Yes' to execute"
exit
fi
# Check to see if a directory already exists where the backup will be made
if [ -d $Directory ]; then
# Prevent overwriting the directory; provide advice to overwrite
if [ "$Overwrite" == "No" ]; then
echo -e "\nA backup of this logbook already exists at $1! Set 'BackupScript_Overwrite' to 'Yes' to overwrite."
echo -e "Please exercise CAUTION when using this option!\n"
exit
# Remove and recreate the directory
elif [ "$Overwrite" == "Yes" ]; then
chmod 755 $Directory
rm $Directory -rf
mkdir $Directory
fi
else
mkdir -p $Directory
fi
# Change to the IPyLogbook HEAD directory and copy all of the IPyLogbook entries to
# the specified directory being sure to preserve the directory structure!
cd $Head
EntryList=$(find . -name 'IPyLogbookEntry-*.ipynb' | grep -v '.ipynb_checkpoints')
for Entry in $EntryList; do
Entry=${Entry#.\/}
cp --parents $Entry $Directory
done
cat > $Directory/README.txt << EOL
*************
** WARNING **
*************
This directory contains a backup of an IPyLogbook! Please treat with respect!
EOL
Date=$(date)
echo "name: README.txt" >> $Directory/README.txt
echo "date: "$Date >> $Directory/README.txt
echo -e "\nA backup of this IPyLogbook was created at $Directory that includes the following files:"
for Entry in $EntryList; do
Entry=${Entry#.\/}
echo " "$Entry
done
Explanation: Backup Logbook
To backup the logbook to the directory of your choice, please set the following variables appropriately:
End of explanation |
9,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Install + Imports
Step2: Add path to data and projection weights.
NOTE
Step4: Load images and build model
Step5: Get Embeddings | Python Code:
#@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install --upgrade tf_slim
from google.colab import drive
import glob
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL.Image
import tensorflow.compat.v1 as tf
import tf_slim as slim
# Get slim inception code
# from tf_slim.nets import inception # throws error no attribute 'inception_v4_arg_scope'
if not os.path.exists('models/research/slim'):
!git clone https://github.com/tensorflow/models/
old_cwd = os.getcwd()
os.chdir('models/research/slim')
from nets import inception
os.chdir(old_cwd)
# Download inceptionv4 checkpoint
!wget http://download.tensorflow.org/models/inception_v4_2016_09_09.tar.gz
!tar -xvzf inception_v4_2016_09_09.tar.gz
Explanation: Install + Imports
End of explanation
#@title Connect to Drive (Run this cell once)
drive.mount('/content/gdrive')
# Verify folder exists after adding the shared folder to your drive.
!ls gdrive/My\ Drive/cell_embedder_colab
#@title Note: Change path to copy images+weights from Drive. (Run once)
!cp -R gdrive/My\ Drive/cell_embedder_colab* .
Explanation: Add path to data and projection weights.
NOTE: Data and weights are shared in this folder. Add these to your Google Drive by selecting "Add shortcut to Drive" before running these cells.
End of explanation
DATA_DIR = 'cell_embedder_colab/' # NOTE - You need to set this to the location of the data.
IMAGES_DIR = os.path.join(DATA_DIR, 'imgs/fullres_8bit_png_bbbc025')
RANDOM_PROJECTION_CKPT = os.path.join(
DATA_DIR, 'random_projection/random_projection.ckpt')
INCEPTION_V4_CKPT = 'inception_v4.ckpt' # This is downloaded in the installs.
#@title Helper functions
def load_image(file_path):
with PIL.Image.open(file_path) as im:
im = np.asarray(im)
return im
def build_inceptionv4_rand64_tower(inputs, is_training=False):
  """Builds an inceptionv4 rand64 tower starting from image tensor.

  The tower consists of an Inception v4 base, and 1 fully connected layer
  reducing output dim to 64, and a normalization layer. Loss is not included.

  Args:
    inputs: An input dictionary mapping key to a tensor of input images i.e.
      {IMAGE_KEY: 4D tensor of (num, h, w, c)}.
    is_training: (bool) Specifies if it is training phase.

  Returns:
    (tensor) A tensor of embeddings.
    (dict) A dictionary mapping endpoint layer names to activation tensors.
  """
with slim.arg_scope(inception.inception_v4_arg_scope()):
_, activations = inception.inception_v4(inputs[IMAGE_KEY],
num_classes=1001,
is_training=is_training)
net = activations['PreLogitsFlatten']
with slim.arg_scope([slim.fully_connected], activation_fn=None):
net = slim.fully_connected(net, 64, scope='fc0')
activations['fc0'] = net
net = tf.nn.l2_normalize(net, dim=-1, name='embed_norm')
net = tf.reshape(net, [-1, 64])
activations['embed_norm'] = net
return net, activations
#@title Load Images (sorted by stain names)
image_fnames = sorted(glob.glob('{}/*.png'.format(IMAGES_DIR)))
print(image_fnames)
np_images = []
plt.figure(figsize=(20, 15))
for i, img_fname in enumerate(image_fnames):
np_images.append(load_image(img_fname))
plt.subplot(1, len(image_fnames), i+1)
plt.imshow(np_images[-1])
np_images = np.array(np_images)
np_images = np.expand_dims(np_images, axis=3)
print(np_images.shape)
NUM_STAINS, IMG_HEIGHT, IMG_WIDTH = np_images.shape[0:3]
# The order in which you want the embeddings for each stain. Here embedding for
# img Stack00002.png (DAPI) will come first in the embedding.
STAIN_ORDER = [2,3,4,0,1]
print(NUM_STAINS, IMG_HEIGHT, IMG_WIDTH)
#@title Build model and initialize weights. (Run once)
IMAGE_KEY = 'images'
graph = tf.Graph()
with graph.as_default():
images_ph = tf.placeholder(tf.float32, shape=(None, IMG_HEIGHT, IMG_WIDTH, 1))
# Resize to 299, 299. This is the input image size for inception.
images_small = tf.image.resize_images(
images_ph, [299, 299],
method=tf.image.ResizeMethod.AREA)
# Adjust pixel brightness to [0, 1]
images_small /= 255.0
# Subtract 0.5 and multiply by 2.0 to keep it within [-1, 1]
images_small -= 0.5
images_small *= 2.0
# Assert image is in [-1, 1]. Add an epsilon on either bound for edge cases.
epsilon = 0.01
assert_min = tf.assert_greater_equal(tf.reduce_min(images_small), -(1 + epsilon))
assert_max = tf.assert_less_equal(tf.reduce_max(images_small), (1 + epsilon))
with tf.control_dependencies([assert_min, assert_max]):
images_small = tf.identity(images_small)
single_stain_images = tf.tile(images_small, [1, 1, 1, 3])
inputs = {IMAGE_KEY: single_stain_images}
embed, _ = build_inceptionv4_rand64_tower(inputs, is_training=False)
assignment_inception_map = {}
assignment_projection_map = {}
for v in slim.get_model_variables():
if v.op.name.startswith('InceptionV4'):
assignment_inception_map[v.op.name] = v.op.name
else:
assignment_projection_map[v.op.name] = v.op.name
tf.train.init_from_checkpoint(INCEPTION_V4_CKPT, assignment_inception_map)
tf.train.init_from_checkpoint(RANDOM_PROJECTION_CKPT, assignment_projection_map)
# We get 1 embedding for each stain. Concatenate the stain embeddings
# to get 1 embedding for the entire image. This will be of dimension
# size_of_embedding (64) x num_stains.
single_stain_embeds = tf.split(embed, NUM_STAINS)
stain_concat_embed = tf.concat(single_stain_embeds, 1)
sess = tf.Session(graph=graph)
saver = tf.train.Saver()
init_op = tf.global_variables_initializer()
sess.run(init_op)
def get_ordered_embeddings(input_imgs, images_ph=images_ph,sess=sess):
stain_embeds, concat_embed = sess.run([single_stain_embeds,
stain_concat_embed],
feed_dict={images_ph: input_imgs})
ordered_tf_embeds = np.concatenate([stain_embeds[i] for i in STAIN_ORDER],
axis=1)
return ordered_tf_embeds
Explanation: Load images and build model
End of explanation
embeds = get_ordered_embeddings(np_images)
print(embeds[0][:10])
plt.figure(figsize=(30,10))
plt.plot(embeds.T, 'b-o')
Explanation: Get Embeddings
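One simple follow-up (an illustration added here, not part of the original notebook) is to split the concatenated embedding back into its per-stain 64-dimensional pieces and compare them with cosine similarity:
# embeds has shape (1, 64 * NUM_STAINS); each 64-wide chunk is one stain's embedding
per_stain = embeds.reshape(NUM_STAINS, 64)
unit = per_stain / np.linalg.norm(per_stain, axis=1, keepdims=True)
print(np.round(unit.dot(unit.T), 2))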
End of explanation |
9,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The double dice problem
This notebook demonstrates a way of doing simple Bayesian updates using the table method, with a Pandas DataFrame as the table.
Copyright 2018 Allen Downey
MIT License
Step1: The BayesTable class
Here's the class that represents a Bayesian table.
Step2: The double dice problem
Suppose I have a box that contains one each of 4-sided, 6-sided, 8-sided, and 12-sided dice. I choose a die at random, and roll it twice
without letting you see the die or the outcome. I report that I got
the same outcome on both rolls.
1) What is the posterior probability that I rolled each of the dice?
2) If I roll the same die again, what is the probability that I get the same outcome a third time?
Solution
Here's a BayesTable that represents the four hypothetical dice.
Step3: Since we didn't specify prior probabilities, the default value is equal priors for all hypotheses. They don't have to be normalized, because we have to normalize the posteriors anyway.
Now we can specify the likelihoods
Step4: Now we can use update to compute the posterior probabilities
Step5: The 4-sided die is most likely because you are more likely to get doubles on a 4-sided die than on a 6-, 8-, or 12-sided die.
Part two
The second part of the problem asks for the (posterior predictive) probability of getting the same outcome a third time, if we roll the same die again.
If the die has n sides, the probability of getting the same value again is 1/n, which should look familiar.
To get the total probability of getting the same outcome, we have to add up the conditional probabilities
Step6: This calculation is similar to the first step of the update, so we can also compute it by
1) Creating a new table with the posteriors from table.
2) Adding the likelihood of getting the same outcome a third time.
3) Computing the normalizing constant. | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
from fractions import Fraction
Explanation: The double dice problem
This notebook demonstrates a way of doing simple Bayesian updates using the table method, with a Pandas DataFrame as the table.
Copyright 2018 Allen Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
class BayesTable(pd.DataFrame):
def __init__(self, hypo, prior=1, **options):
columns = ['hypo', 'prior', 'likelihood', 'unnorm', 'posterior']
super().__init__(columns=columns, **options)
self.hypo = hypo
self.prior = prior
def mult(self):
self.unnorm = self.prior * self.likelihood
def norm(self):
nc = np.sum(self.unnorm)
self.posterior = self.unnorm / nc
return nc
def update(self):
self.mult()
return self.norm()
def reset(self):
return BayesTable(self.hypo, self.posterior)
Explanation: The BayesTable class
Here's the class that represents a Bayesian table.
End of explanation
hypo = [Fraction(sides) for sides in [4, 6, 8, 12]]
table = BayesTable(hypo)
Explanation: The double dice problem
Suppose I have a box that contains one each of 4-sided, 6-sided, 8-sided, and 12-sided dice. I choose a die at random, and roll it twice
without letting you see the die or the outcome. I report that I got
the same outcome on both rolls.
1) What is the posterior probability that I rolled each of the dice?
2) If I roll the same die again, what is the probability that I get the same outcome a third time?
Solution
Here's a BayesTable that represents the four hypothetical dice.
End of explanation
table.likelihood = 1/table.hypo
table
Explanation: Since we didn't specify prior probabilities, the default value is equal priors for all hypotheses. They don't have to be normalized, because we have to normalize the posteriors anyway.
Now we can specify the likelihoods: if a die has n sides, the chance of getting the same outcome twice is 1/n.
So the likelihoods are:
End of explanation
table.update()
table
table.posterior.astype(float)
Explanation: Now we can use update to compute the posterior probabilities:
End of explanation
total = 0
for _, row in table.iterrows():
total += row.posterior / row.hypo
total
Explanation: The 4-sided die is most likely because you are more likely to get doubles on a 4-sided die than on a 6-, 8-, or 12-sided die.
Part two
The second part of the problem asks for the (posterior predictive) probability of getting the same outcome a third time, if we roll the same die again.
If the die has n sides, the probability of getting the same value again is 1/n, which should look familiar.
To get the total probability of getting the same outcome, we have to add up the conditional probabilities:
P(n | data) * P(same outcome | n)
The first term is the posterior probability; the second term is 1/n.
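Working the arithmetic by hand (an added check, using the posteriors computed above): the posteriors are 6/15, 4/15, 3/15 and 2/15, so the predictive probability is 6/60 + 4/90 + 3/120 + 2/180 = 13/72, or about 0.18.
# Cross-check of the loop above using the table columns directly
check = sum(p / n for p, n in zip(table.posterior, table.hypo))
print(check, float(check))   # expect 13/72 ~= 0.181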
End of explanation
table2 = table.reset()
table2.likelihood = 1/table.hypo
table2
table2.update()
table2
Explanation: This calculation is similar to the first step of the update, so we can also compute it by
1) Creating a new table with the posteriors from table.
2) Adding the likelihood of getting the same outcome a third time.
3) Computing the normalizing constant.
End of explanation |
9,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Weights Prediction
Step1: Data import and cleaning
Step2: The data are messed up; name fields contain commas in a comma-separated file so two extra columns are created.
Step3: Clean pitch type column (convert all to upper case)
Step4: Parse dates to datetime types
Step5: I'm going to discard a few pitch types
Step6: So that I can look at patterns at different scales, I will create columns for month, week and day (game).
Step7: Data exploration
We can get an idea of some of the best pitches by summing weights across pitcher and pitch type
Step8: Let's look at Corey Kluber, just to isolate one player
Step9: About 10 runs saved from his cutter over 5 months
Step10: If you sum the allowed weights by month for each pitch, it gives the impression of a trend, in some instances.
Step11: However, if you look at the per-game observed run values, by summing the weights for each game, the trends mostly disappear.
Step12: If you take this further and look at the distribution of linear weights allowed per game, you can see the underlying variability in the data. I will proceed with the analysis using the pitch-level data, as the monthly/weekly sums would gloss over the variability associated with those summaries.
Step13: Predictive modeling
The question posed suggests a time series prediction problem
Step14: I'm going to use PyMC3, an open-source Bayesian library for Python that I created many years ago, and continue to develop and maintain today. There are a variety of other Python packages I could have used instead
Step15: So, this is a flexible covariance function that is parameterized by scale and lengthscale parameters, which we will estimate from the data. I will also specify a noise parameter $\sigma$ to characterize the variation of weights allowed within a game.
We will use optimization to obtain the maximum a posteriori (MAP) estimate of the model.
Step16: Here's an estimate of the standard deviation within days, which looks reasonable compared to the empirical, which is around 0.1.
Step17: The great thing about Gaussian processes is that it is trivial to predict to other points outside the dataset, so we can define a set of points that extends into September, and draw from the conditional distribution
Step18: Here we draw 1000 posterior samples from the predictive GP, to use for inference.
Step19: The plot below shows the estimated function, along with its uncertainty, which is characterized by many posterior draws from the estimated function. I've also plotted the observed mean of the weights allowed each day as a dashed blue line, as well as the per-pitch weights allowed themselves, for which I've specified a shading alpha so that multiple occurrences of the same weight value appear darker.
Step20: If we look at the mean of the estimates for days in September, we get
Step21: That is, an estimate wSL/C of around -1.5 runs per 100 pitches, with a credible interval of (-4.3, 1.4).
Modeling components of variation
A more comprehensive approach involves modeling the components of variation in the time series. A nice property of Gaussian processes is that covariance functions are additive, meaning that variation across different scales (in this case, temporal scales) can be modeled directly.
We can apply this here if, for example, we think there are short-term (the order of a couple games) and medium- or long-term (several weeks or months) components to the variability of particular pitches. Short term variability might involve the effects of a road trip, a minor injury, or other unmeasured factors that could come and go, and which are not particularly predictive. On the other hand, we may be more interested in the variation over a monthly time scale that may reveal the steady development of a pitch, and which may be predictive. Since this is very noisy data, this may be our best hope.
This approach involves using more informative priors, encoding information about the scales over which we expect the observed weights to vary. Here, we will set the majority of the expected variation for the short-term trend to be over a 1-5 game range (via a gamma(1, 0.75) prior), while the prior for the long-term lengthscale will cover the 20-60 day range (via a gamma(20, 0.5) prior).
It is simple to wrap all of the above in a function, so that it can be applied to other players and pitches
Step22: Here is Trevor Bauer's fastball, as another example. The prediction is smoothed relative to the simpler covariance model.
Step23: Here are the resulting predictions (mean and 95% interval) for September, shown as wSI/C
Step24: Conclusions
I am not confident that linear weights are predictive, though they are certainly useful for evaluating how a pitcher/pitch combination fared over some sufficiently long time period. Even though they are adjusted for the count, they are still confounded with many other variables that contributed to the observed outcome
Step25: The predictiveness can be characterized by both $p$, which quantifies the proportion of players that differ from the league mean, and the proportion of "skill variance" relative to the total variance | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
from pymc3.gp.util import plot_gp_dist
import theano.tensor as tt
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('dark')
Explanation: Linear Weights Prediction
End of explanation
seasonal_pitch_raw = pd.read_csv('../private_data/seasonal_pitch_data.csv', encoding='utf-8')
seasonal_pitch_raw.head()
Explanation: Data import and cleaning
End of explanation
colnames = seasonal_pitch_raw.columns.copy()
seasonal_pitch_raw.iloc[:, 5] = seasonal_pitch_raw.iloc[:, 5] + seasonal_pitch_raw.iloc[:, 6]
seasonal_pitch_raw.iloc[:, 1] = seasonal_pitch_raw.iloc[:, 1] + seasonal_pitch_raw.iloc[:, 2]
seasonal_pitch = (seasonal_pitch_raw.drop(colnames[[2, 6]], axis=1)
.reset_index())
seasonal_pitch.columns = colnames
Explanation: The data are messed up; name fields contain commas in a comma-separated file so two extra columns are created.
End of explanation
seasonal_pitch['pi_pitch_type'] = seasonal_pitch.pi_pitch_type.str.upper()
Explanation: Clean pitch type column (convert all to upper case)
End of explanation
seasonal_pitch['date'] = pd.to_datetime(seasonal_pitch.date)
seasonal_pitch.head()
Explanation: Parse dates to datetime types
End of explanation
bad_pitches = ~seasonal_pitch.pi_pitch_type.isin(['KN', 'IB', 'XX'])
data_subset = seasonal_pitch[bad_pitches].copy()
Explanation: I'm going to discard a few pitch types: 'KN', 'IB', 'XX'
End of explanation
data_subset['month'] = data_subset.date.dt.month
data_subset['week'] = data_subset.date.dt.week
data_subset['dayofyear'] = data_subset.date.dt.dayofyear
Explanation: So that I can look at patterns at different scales, I will create columns for month, week and day (game).
End of explanation
data_subset.groupby(['pitcher', 'pi_pitch_type']).lw.sum().sort_values()
Explanation: Data exploration
We can get an idea of some of the best pitches by summing weights across pitcher and pitch type:
End of explanation
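Because raw sums favor heavily used pitches, a rate version (runs per 100 pitches, the wX/C convention used later in this notebook) is also worth a look; a small illustrative sketch, with column names of my own choosing:
per100 = data_subset.groupby(['pitcher', 'pi_pitch_type']).lw.agg(['sum', 'count'])
per100['lw_per100'] = 100 * per100['sum'] / per100['count']
per100.sort_values('lw_per100').head()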
kluber_pitches = (data_subset.loc[data_subset.pitcherid==446372,
['pi_pitch_type', 'month', 'dayofyear', 'lw']]
.sort_values(by='lw'))
kluber_pitches.head()
Explanation: Let's look at Corey Kluber, just to isolate one player:
End of explanation
kluber_pitches[kluber_pitches.pi_pitch_type=='FC'].lw.sum()
Explanation: About 10 runs saved from his cutter over 5 months:
End of explanation
kluber_month_sum = kluber_pitches.groupby(['pi_pitch_type', 'month']).lw.sum().reset_index()
g = sns.factorplot(data=kluber_month_sum, col="pi_pitch_type", x="month", y="lw",
col_wrap=3);
Explanation: If you sum the allowed weights by month for each pitch, it gives the impression of a trend, in some instances.
End of explanation
kluber_game_sum = (kluber_pitches.groupby(['pi_pitch_type', 'dayofyear']).lw
.sum().reset_index())
g = sns.factorplot(data=kluber_game_sum, col="pi_pitch_type", x="dayofyear", y="lw",
col_wrap=3)
g.set_xticklabels(rotation=90);
Explanation: However, if you look at the per-game observed run values, by summing the weights for each game, the trends mostly disappear.
End of explanation
g = sns.factorplot(data=kluber_pitches, col="pi_pitch_type", x="dayofyear", y="lw",
col_wrap=3)
g.set_xticklabels(rotation=90);
Explanation: If you take this further and look at the distribution of linear weights allowed per game, you can see the underlying variability in the data. I will proceed with the analysis using the pitch-level data, as the monthly/weekly sums would gloss over the variability associated with those summaries.
End of explanation
PITCH = 'SL'
day_min = kluber_pitches.dayofyear - kluber_pitches.dayofyear.min()
day_kluber_fc, lw_kluber_fc = (kluber_pitches.assign(day=day_min)
.loc[kluber_pitches.pi_pitch_type==PITCH, ['day', 'lw']].T.values)
X = day_kluber_fc.reshape(-1,1)
y = lw_kluber_fc
Explanation: Predictive modeling
The question posed suggests a time series prediction problem: predicting the next month's linear weights allowed from the observed weights allowed in the previous 5 months. A conventional approach here might be an ARIMA model, which includes a first-order differencing term and a moving average component. I prefer instead to use a non-parametric Bayesian structural time series approach via Gaussian processes (GP).
A Gaussian process can be viewed as a probabilistic "distribution over functions", which seeks to model the covariance structure of the time series, estimating the degree to which particular observations in the time series are related to those nearby. This seems appropriate here: treating the observed linear weights allowed during each game as a set of Gaussian (this can be relaxed to a different distribution) outcomes, which are correlated with the outcomes from games before and after it. This is another way of saying we have a multivariate Gaussian model. A Gaussian process is just an infinite-dimensional Gaussian, where we may marginalize over any non-observed elements.
I prefer to build a "data-generating model" based on the observed weights allowed, rather than on the weekly or monthly summaries of the data. I don't expect this to be predictive, but with this approach we may at least be able to characterize the covariance structure and estimate how variable things might look in September.
As an example, let's look at Corey Kluber's slider, but we could easily swap in any player/pitch combination we like:
End of explanation
ls = 0.1
tau = 0.5
cov = tau * pm.gp.cov.Matern32(1, ls)
X_vals = np.linspace(0, 2, 200)[:,None]
K = cov(X_vals).eval()
plt.figure(figsize=(14,4))
plt.plot(X_vals, pm.MvNormal.dist(mu=np.zeros(K.shape[0]), cov=K).random(size=3).T);
plt.xlabel("X");
Explanation: I'm going to use PyMC3, an open-source Bayesian library for Python that I created many years ago, and continue to develop and maintain today. There are a variety of other Python packages I could have used instead: scikit-learn, Stan, GPFlow, and others. PyMC3 makes it very easy to implement GP models, letting me specify a GP in just a few lines of code.
Gaussian processes are parameterized by a mean function (instead of a mean vector in a multivariate normal) and a covariance function (in place of a covariance matrix). The form of the GP is dictated by the covariance function, which can be specified to account for different components of a time series (e.g. periodic). I will use a simple covariance function called the Matérn covariance. Here are a few samples from functions drawn from a Matérn(3/2), just to give an idea:
End of explanation
with pm.Model() as kluber_model:
# Specify covariance function
ℓ = pm.Exponential("ℓ", 0.1)
η = pm.HalfCauchy("η", 1)
cov = η**2 * pm.gp.cov.Matern32(1, ℓ)
# Define marginal GP
gp = pm.gp.Marginal(cov_func=cov)
# Noise parameter
σ = pm.Uniform("σ", 0, 0.3)
# Pass data to marginal likelihood
ml = gp.marginal_likelihood("ml", X=X, y=y, noise=σ)
mp = pm.find_MAP()
Explanation: So, this is a flexible covariance function that is parameterized by scale and lengthscale parameters, which we will estimate from the data. I will also specify a noise parameter $\sigma$ to characterize the variation of weights allowed within a game.
We will use optimization to obtain the maximum a posteriori (MAP) estimate of the model.
End of explanation
mp['σ']
Explanation: Here's an estimate of the standard deviation within days, which looks reasonable compared to the empirical value of around 0.1.
End of explanation
# new values from April through September
X_new = np.linspace(0, 180, 500)[:,None]
# add the GP conditional to the model, given the new X values
with kluber_model:
f_pred = gp.conditional("f_pred", X_new)
Explanation: The great thing about Gaussian processes is that it is trivial to predict to other points outside the dataset, so we can define a set of points that extends into September, and draw from the conditional distribution:
End of explanation
with kluber_model:
pred_samples = pm.sample_ppc([mp], vars=[f_pred], samples=1000)
Explanation: Here we draw 1000 posterior samples from the predictive GP, to use for inference.
End of explanation
# plot the results
fig, axes = plt.subplots(figsize=(12,5), sharex=True)
scale = 100
# plot the samples from the gp posterior with samples and shading
plot_gp_dist(axes, pred_samples["f_pred"]*scale, X_new, palette="bone_r");
# plot the data alongside the estimates
axes.plot(X, y*scale, 'ok', ms=3, alpha=0.1, label="Observed pitch");
axes.set_ylim(-0.1*scale, 0.1*scale)
axes.set_title("Corey Kluber {}".format(PITCH))
axes.set_ylabel("Linear weight")
mean_lw = (kluber_pitches[kluber_pitches.pi_pitch_type==PITCH].groupby('dayofyear')
.lw.mean()*scale)
mean_lw.index = mean_lw.index - mean_lw.index.min()
mean_lw.plot(ax=axes, style=':', label='Empirical mean')
# axis labels and title
plt.xlabel("Day")
plt.legend()
Explanation: The plot below shows the estimated function, along with its uncertainty, which is characterized by many posterior draws from the estimated function. I've also plotted the observed mean of the weights allowed each day as a dashed blue line, as well as the per-pitch weights allowed themselves, for which I've specified a shading alpha so that multiple occurrences of the same weight value appear darker.
End of explanation
pred_samples['f_pred'][:, 150:].mean()
np.percentile(pred_samples['f_pred'][:, 150:], [2.5, 97.5])
Explanation: If we look at the mean of the estimates for days in September, we get:
End of explanation
player_lookup = dict(data_subset[['pitcherid', 'pitcher']].drop_duplicates().values)
def predict_weights(player_id, pitch):
player_pitches = (data_subset.loc[(data_subset.pitcherid==player_id) & (data_subset.pi_pitch_type==pitch),
['dayofyear', 'lw']]
.sort_values(by='lw'))
day_min = player_pitches.dayofyear - player_pitches.dayofyear.min()
day, lw = (player_pitches.assign(day=day_min)[['day', 'lw']].T.values)
X = day.reshape(-1,1)
y = lw
with pm.Model():
# Short-term variation
η_short = pm.HalfCauchy("η_short", beta=0.5, testval=0.1)
ℓ_short = pm.Gamma("ℓ_short", alpha=1, beta=0.75)
cov_short = η_short**2 * pm.gp.cov.Matern32(1, ℓ_short)
gp_short = pm.gp.Marginal(cov_func=cov_short)
# long term trend (1-2 month scale)
η_trend = pm.HalfCauchy("η_trend", beta=2, testval=2)
ℓ_trend = pm.Gamma("ℓ_trend", alpha=20, beta=0.5)
cov_trend = η_trend**2 * pm.gp.cov.ExpQuad(1, ℓ_trend)
gp_trend = pm.gp.Marginal(cov_func=cov_trend)
# Define marginal GP
gp = gp_trend + gp_short
# Noise parameter
σ = pm.Exponential("σ", 10)
cov_noise = pm.gp.cov.WhiteNoise(σ)
# Pass data to marginal likelihood
ml = gp.marginal_likelihood("ml", X=X, y=y, noise=cov_noise)
mp = pm.find_MAP()
X_new = np.linspace(0, 180, 500)[:,None]
f_pred = gp.conditional("f_pred", X_new)
pred_samples = pm.sample_ppc([mp], vars=[f_pred], samples=1000)
# plot the results
fig, axes = plt.subplots(figsize=(12,5), sharex=True)
scale = 100
# plot the samples from the gp posterior with samples and shading
plot_gp_dist(axes, pred_samples["f_pred"]*scale, X_new, palette="bone_r");
# plot the data alongside the estimates
axes.plot(X, y*scale, 'ok', ms=3, alpha=0.1, label="Observed pitch");
axes.set_ylim(-0.1*scale, 0.1*scale)
axes.set_title("{} {}".format(player_lookup[player_id], pitch))
axes.set_ylabel("Linear weight")
mean_lw = player_pitches.groupby('dayofyear').lw.mean()*scale
mean_lw.index = mean_lw.index - mean_lw.index.min()
mean_lw.plot(ax=axes, style=':', label='Empirical mean')
# axis labels and title
plt.xlabel("Day")
plt.legend()
return pred_samples
Explanation: That is, an estimate wSL/C of around -1.5 runs per 100 pitches, with a credible interval of (-4.3, 1.4).
Modeling components of variation
A more comprehensive approach involves modeling the components of variation in the time series. A nice property of Gaussian processes is that covariance functions are additive, meaning that variation across different scales (in this case, temporal scales) can be modeled directly.
We can apply this here if, for example, we think there are short-term (the order of a couple games) and medium- or long-term (several weeks or months) components to the variability of particular pitches. Short term variability might involve the effects of a road trip, a minor injury, or other unmeasured factors that could come and go, and which are not particularly predictive. On the other hand, we may be more interested in the variation over a monthly time scale that may reveal the steady development of a pitch, and which may be predictive. Since this is very noisy data, this may be our best hope.
This approach involves using more informative priors, encoding information about the scales over which we expect the observed weights to vary. Here, we will set the majority of the expected variation for the short-term trend to be over a 1-5 game range (via a gamma(1, 0.75) prior), while the prior for the long-term lengthscale will cover the 20-60 day range (via a gamma(20, 0.5) prior).
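Before fitting, it is worth confirming that these priors really do put their mass on the intended scales. A minimal check of my own (using scipy's gamma, and assuming PyMC3's Gamma(alpha, beta) rate parameterization, so scipy's scale is 1/beta):

```python
from scipy import stats

short_ls = stats.gamma(a=1, scale=1 / 0.75)   # short-term lengthscale prior
trend_ls = stats.gamma(a=20, scale=1 / 0.5)   # long-term lengthscale prior

print("short-term prior: mean {:.1f} days, 90% interval {}".format(short_ls.mean(), short_ls.interval(0.9)))
print("long-term prior:  mean {:.1f} days, 90% interval {}".format(trend_ls.mean(), trend_ls.interval(0.9)))
```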
It is simple to wrap all of the above in a function, so that it can be applied to other players and pitches:
End of explanation
pred_samples = predict_weights(545333, 'FA')
Explanation: Here is Trevor Bauer's fastball, as another example. The prediction is smoothed relative to the simpler covariance model.
End of explanation
pred_samples['f_pred'][:, 150:].mean() * 100
np.percentile(pred_samples['f_pred'][:, 150:], [2.5, 97.5]) * 100
Explanation: Here are the resulting predictions (mean and 95% interval) for September, shown as wFA/C:
End of explanation
data_summary = (data_subset[data_subset.pi_pitch_type=='CU'].groupby(['pitcher', 'month']).lw
.agg([sum, np.size])
.reset_index()
.rename(columns={'sum': 'weight', 'size': 'n'}))
all_pitchers = data_summary.pitcher.unique()
pitcher_lookup = dict(zip(all_pitchers, np.arange(len(all_pitchers))))
data_summary['pitcher_idx'] = data_summary.pitcher.replace(pitcher_lookup)
# all_pitches = data_summary.pi_pitch_type.unique()
# pitch_lookup = dict(zip(all_pitches, np.arange(len(all_pitches))))
# data_summary['pitch_idx'] = data_summary.pi_pitch_type.replace(pitch_lookup)
data_summary['var_weight'] = data_summary['n'] / data_summary['n'].mean()
y = data_summary.weight.values
w = data_summary.var_weight.values
i = data_summary.pitcher_idx.values
with pm.Model() as hier_weights_curves:
p = pm.Beta('p', 1, 1)
v = pm.Bernoulli('v', p, shape=len(all_pitchers))
σ_a = pm.HalfCauchy('σ_a', 1)
η = pm.Normal('η', 0, 1, shape=len(all_pitchers))
α = pm.Deterministic('α', η*σ_a*v)
μ = pm.Normal('μ', 0, sd=100)
σ = pm.HalfCauchy('σ', 1)
r = pm.Deterministic('r', σ_a / (σ_a + σ))
weight_pred = pm.Normal('weight_pred', μ + α[i], w*σ, observed=y)
with hier_weights_curves:
trace = pm.sample(1000, tune=2000)
pm.energyplot(trace)
Explanation: Conclusions
I am not confident that linear weights are predictive, though they are certainly useful for evaluating how a pitcher/pitch combination fared over some sufficiently long time period. Even though they are adjusted for the count, they are still confounded with many other variables that contributed to the observed outcome: the effects of a particular batter, the pitch combination that preceded the current pitch, the possible influence of the presence of baserunners (was he pitching from the stretch?), and more. I would roughly equate this exercise with trying to predict future stock market returns (another stochastic process) based on past performance. There is serial autocorrelation that may sometimes be predictive over a very short time period, but in general it is not predictive. As with the stock market, we may be able to characterize the temporal variability (volatility) of linear weights allowed, but little more.
As a general approach, however, I like Gaussian processes for robust time series estimation and prediction. Since it is driven by the covariance function, the uncertainty in predictions extrapolated beyond the range of the data is automatically accounted for. The degree to which today's data are predictive of tomorrow's outcome is governed by the covariance function; once these are no longer closely related, the process just reverts to the prior (i.e. what is known in the absence of data).
Addendum
Modified from the approach of McShane et al. (2011), we can quantify the predictiveness of linear weights using a hierarchical model. I will fit the pitch weights via a population model:
$$lw_{ij} \sim N(\mu + \alpha_i, w_{ij} \sigma^2)$$
where $\mu$ is the population mean and $\alpha_i$ is a random effect corresponding to player $i$; together they predict the linear weight for that player in month $j$.
The partial pooling is governed by the global variance $\sigma^2$, which is weighted for each player-month by the number of times the pitch was thrown relative to the average:
$$w_{ij} = \frac{n_{ij}}{\bar{n}}$$
Finally, the hierarchical random effect $\alpha_i$ is modeled as a zero-inflated mixture that hypothesizes that some subset of players are no different from the population mean for a particular pitch, while others are allowed to vary. Thus, a probability $p$ governs the proportion of players that vary according to $\alpha_i \sim N(0, \sigma_a)$ versus those that are zero (with probability $1-p$).
This model is run for any particular pitch type; I will here use the curveball.
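Once the model has been sampled, the posterior mean of the indicator $v$ gives, for each pitcher, the probability that his curveball differs from the league mean. A small follow-up of my own (assuming the trace produced above):

```python
p_differs = trace['v'].mean(axis=0)
top10 = np.argsort(p_differs)[::-1][:10]
for idx in top10:
    print("{}: {:.2f}".format(all_pitchers[idx], p_differs[idx]))
```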
End of explanation
pm.traceplot(trace, varnames=['p', 'r']);
pm.summary(trace, varnames=['p', 'r']).round(3)
plt.figure(figsize=(5, 16))
pm.forestplot(trace, varnames=['α'], quartiles=False, ylabels=['']);
Explanation: The predictiveness can be characterized by both $p$, which quantifies the proportion of players that differ from the league mean, and the proportion of "skill variance" relative to the total variance:
$$r = \frac{\sigma_a}{\sigma_a + \sigma}$$
From the posterior estimates below, we can see that both proportions are low (around 30%), making linear weights not particularly predictive, at least at the monthly scale.
End of explanation |
9,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img align="left" src="imgs/logo.jpg" width="50px" style="margin-right
Step1: I. Loading Labeling Matricies
First we'll load our label matrices from notebook 2
Step2: Now we set up and run the hyperparameter search, training our model with different hyperparamters and picking the best model configuration to keep. We'll set the random seed to maintain reproducibility.
Note that we are fitting our model's parameters to the training set generated by our labeling functions, while we are picking hyperparamters with respect to score over the development set labels which we created by hand.
II
Step3: 2. Model Accuracies
These are the weights learned for each LF
Step4: 3. Plotting Marginal Probabilities
One immediate santity check you can peform using the generative model is to visually examine the distribution of predicted training marginals. Ideally, there should get a bimodal distribution with large seperation between each peaks, as shown below by the far right image. The corresponds to good signal for true and positive class labels. For your first Snorkel application, you'll probably see marginals closer to the far left or middle images. With all mass centered around p=0.5, you probably need to write more LFs got get more overall coverage. In the middle image, you have good negative coverage, but not enough positive LFs
<img align="left" src="imgs/marginals-common.jpg" width="265px" style="margin-right
Step5: 4. Generative Model Metrics
Step6: 5. Saving our training labels
Finally, we'll save the training_marginals, which are our "noise-aware training labels", so that we can use them in the next tutorial to train our end extraction model
Step7: III. Advanced Generative Model Features
A. Structure Learning
We may also want to include the dependencies between our LFs when training the generative model. Snorkel makes it easy to do this! DependencySelector runs a fast structure learning algorithm over the matrix of LF outputs to identify a set of likely dependencies. | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import re
import numpy as np
# Connect to the database backend and initalize a Snorkel session
from lib.init import *
from snorkel.models import candidate_subclass
from snorkel.annotations import load_gold_labels
from snorkel.lf_helpers import (
get_left_tokens, get_right_tokens, get_between_tokens,
get_text_between, get_tagged_text,
)
# initialize our candidate type definition
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
# gold (human-labeled) development set labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
Explanation: <img align="left" src="imgs/logo.jpg" width="50px" style="margin-right:10px">
Snorkel Workshop: Extracting Spouse Relations <br> from the News
Part 3: Training the Generative Model
Now, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other.
End of explanation
from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator(lfs=[])
L_train = labeler.load_matrix(session, split=0)
L_dev = labeler.load_matrix(session, split=1)
Explanation: I. Loading Labeling Matrices
First we'll load our label matrices from notebook 2
End of explanation
from snorkel.learning import GenerativeModel
from snorkel.learning import RandomSearch
# use random search to optimize the generative model
param_ranges = {
'step_size' : [1e-3, 1e-4, 1e-5, 1e-6],
'decay' : [0.9, 0.95],
'epochs' : [50, 100],
'reg_param' : [1e-3],
}
model_class_params = {'lf_propensity' : False}
searcher = RandomSearch(GenerativeModel, param_ranges, L_train, n=5, model_class_params=model_class_params)
%time gen_model, run_stats = searcher.fit(L_dev, L_gold_dev)
run_stats
Explanation: Now we set up and run the hyperparameter search, training our model with different hyperparameters and picking the best model configuration to keep. We'll set the random seed to maintain reproducibility.
Note that we are fitting our model's parameters to the training set generated by our labeling functions, while we are picking hyperparameters with respect to score over the development set labels which we created by hand.
II: Unifying supervision
Generative Model
In data programming, we use a more sophisticated model to unify our labeling functions. We know that these labeling functions will not be perfect, and some may be quite low-quality, so we will model their accuracies with a generative model, which Snorkel will help us easily apply.
This will ultimately produce a single set of noise-aware training labels, which we will then use to train an end extraction model in the next notebook. For more technical details of this overall approach, see our NIPS 2016 paper.
NOTE: Make sure you've written some of your own LFs in the previous notebook to get a decent score!!!
1. Training the Model
When training the generative model, we'll tune our hyperparameters using a simple random search.
Parameter Definitions
epochs A single pass through all the data in your training set
step_size The factor by which we update model weights after computing the gradient
decay The rate at which our update factor diminishes over time.
End of explanation
x = L_dev.lf_stats(session, L_gold_dev)
train_marginals = gen_model.marginals(L_train)
Explanation: 2. Model Accuracies
These are the weights learned for each LF
End of explanation
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20, range=(0.0, 1.0))
plt.show()
Explanation: 3. Plotting Marginal Probabilities
One immediate sanity check you can perform using the generative model is to visually examine the distribution of predicted training marginals. Ideally, there should be a bimodal distribution with large separation between the peaks, as shown below by the far right image. This corresponds to good signal for the negative and positive class labels. For your first Snorkel application, you'll probably see marginals closer to the far left or middle images. With all mass centered around p=0.5, you probably need to write more LFs to get more overall coverage. In the middle image, you have good negative coverage, but not enough positive LFs
<img align="left" src="imgs/marginals-common.jpg" width="265px" style="margin-right:0px">
<img align="left" src="imgs/marginals-real.jpg" width="265px" style="margin-right:0px">
<img align="left" src="imgs/marginals-ideal.jpg" width="265px" style="margin-right:0px">
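A quick numeric companion to the histogram can make the same point; a minimal sketch of my own, using the train_marginals computed above:

```python
confident = np.mean(np.abs(train_marginals - 0.5) > 0.25)
print("fraction of candidates with marginals outside [0.25, 0.75]: {:.2f}".format(confident))
```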
End of explanation
dev_marginals = gen_model.marginals(L_dev)
_, _, _, _ = gen_model.error_analysis(session, L_dev, L_gold_dev)
Explanation: 4. Generative Model Metrics
End of explanation
from snorkel.annotations import save_marginals
%time save_marginals(session, L_train, train_marginals)
Explanation: 5. Saving our training labels
Finally, we'll save the training_marginals, which are our "noise-aware training labels", so that we can use them in the next tutorial to train our end extraction model:
End of explanation
from snorkel.learning.structure import DependencySelector
MAX_DEPS = 5
ds = DependencySelector()
deps = ds.select(L_train, threshold=0.1)
deps = set(list(deps)[0:min(len(deps), MAX_DEPS)])
print("Using {} dependencies".format(len(deps)))
Explanation: III. Advanced Generative Model Features
A. Structure Learning
We may also want to include the dependencies between our LFs when training the generative model. Snorkel makes it easy to do this! DependencySelector runs a fast structure learning algorithm over the matrix of LF outputs to identify a set of likely dependencies.
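To close the loop, the selected dependencies can then be supplied when training the generative model. A hedged sketch of my own (this assumes the GenerativeModel.train interface used in the Snorkel tutorials, which accepts a deps argument; the hyperparameter values are illustrative placeholders):

```python
gen_model_deps = GenerativeModel(lf_propensity=False)
gen_model_deps.train(L_train, deps=deps, epochs=100, decay=0.95,
                     step_size=1e-4, reg_param=1e-3)
train_marginals_deps = gen_model_deps.marginals(L_train)
```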
End of explanation |
9,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Escaping particles
Sometimes we are not interested in particles that get too far from the central body. Here we will define a radius beyond which we remove particles from the simulation. Let's set up an artificial situation with 3 planets, and the inner one moves radially outward with $v > v_{escape}$.
Step1: Now let's run a simulation for 20 years (in default units where $G=1$, and thus AU, yr/2$\pi$, and $M_\odot$, see Units.ipynb for how to change units), and set up a 50 AU sphere beyond which we remove particles from the simulation. We can do this by setting the exit_max_distance flag of the simulation object. If a particle's distance (from the origin of whatever inertial reference frame chosen) exceeds sim.exit_max_distance, an exception is thrown.
If we simply call sim.integrate(), the program will crash due to the unhandled exception when the particle escapes, so we'll create a try-except block to catch the exception. We'll also store the x,y positions of Venus, which we expect to survive.
Step2: So this worked as expected. Now let's plot what we got
Step3: This doesn't look right. The problem here is that when we removed particles[1] from the simulation, all the particles got shifted down in the particles array. So following the removal, xvenus all of a sudden started getting populated by the values for Earth (the new sim.particles[2]). A more robust way to access particles is using hashes (see UniquelyIdentifyingParticles.ipynb) | Python Code:
import rebound
import numpy as np
def setupSimulation():
sim = rebound.Simulation()
sim.add(m=1., hash="Sun")
sim.add(x=0.4,vx=5., hash="Mercury")
sim.add(a=0.7, hash="Venus")
sim.add(a=1., hash="Earth")
sim.move_to_com()
return sim
sim = setupSimulation()
sim.status()
Explanation: Escaping particles
Sometimes we are not interested in particles that get too far from the central body. Here we will define a radius beyond which we remove particles from the simulation. Let's set up an artificial situation with 3 planets, and the inner one moves radially outward with $v > v_{escape}$.
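We can check that claim directly; a small verification of my own, reusing the setup function defined above:

```python
import numpy as np
sim_check = setupSimulation()
star, p = sim_check.particles[0], sim_check.particles[1]
r = np.sqrt((p.x - star.x)**2 + (p.y - star.y)**2 + (p.z - star.z)**2)
v = np.sqrt((p.vx - star.vx)**2 + (p.vy - star.vy)**2 + (p.vz - star.vz)**2)
v_esc = np.sqrt(2 * sim_check.G * star.m / r)
print("v = {:.2f}, v_escape = {:.2f}".format(v, v_esc))
```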
End of explanation
sim = setupSimulation() # Resets everything
sim.exit_max_distance = 50.
Noutputs = 1000
times = np.linspace(0,20.*2.*np.pi,Noutputs)
xvenus, yvenus = np.zeros(Noutputs), np.zeros(Noutputs)
for i,time in enumerate(times):
try:
sim.integrate(time)
except rebound.Escape as error:
print(error)
for j in range(sim.N):
p = sim.particles[j]
d2 = p.x*p.x + p.y*p.y + p.z*p.z
if d2>sim.exit_max_distance**2:
index=j # cache index rather than remove here since our loop would go beyond end of particles array
sim.remove(index=index)
xvenus[i] = sim.particles[2].x
yvenus[i] = sim.particles[2].y
print("Went down to {0} particles".format(sim.N))
Explanation: Now let's run a simulation for 20 years (in default units where $G=1$, and thus AU, yr/2$\pi$, and $M_\odot$, see Units.ipynb for how to change units), and set up a 50 AU sphere beyond which we remove particles from the simulation. We can do this by setting the exit_max_distance flag of the simulation object. If a particle's distance (from the origin of whatever inertial reference frame chosen) exceeds sim.exit_max_distance, an exception is thrown.
If we simply call sim.integrate(), the program will crash due to the unhandled exception when the particle escapes, so we'll create a try-except block to catch the exception. We'll also store the x,y positions of Venus, which we expect to survive.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig,ax = plt.subplots(figsize=(15,5))
ax.plot(xvenus, yvenus)
ax.set_aspect('equal')
ax.set_xlim([-2,10]);
Explanation: So this worked as expected. Now let's plot what we got:
End of explanation
sim = setupSimulation() # Resets everything
sim.exit_max_distance = 50.
Noutputs = 1000
times = np.linspace(0,20.*2.*np.pi,Noutputs)
xvenus, yvenus = np.zeros(Noutputs), np.zeros(Noutputs)
for i,time in enumerate(times):
try:
sim.integrate(time)
except rebound.Escape as error:
print(error)
for j in range(sim.N):
p = sim.particles[j]
d2 = p.x*p.x + p.y*p.y + p.z*p.z
if d2>sim.exit_max_distance**2:
index=j # cache index rather than remove here since our loop would go beyond end of particles array
sim.remove(index=index)
xvenus[i] = sim.get_particle_by_hash("Venus").x
yvenus[i] = sim.get_particle_by_hash("Venus").y
fig,ax = plt.subplots(figsize=(15,5))
ax.plot(xvenus, yvenus)
ax.set_aspect('equal')
ax.set_xlim([-2,10]);
Explanation: This doesn't look right. The problem here is that when we removed particles[1] from the simulation, all the particles got shifted down in the particles array. So following the removal, xvenus all of a sudden started getting populated by the values for Earth (the new sim.particles[2]). A more robust way to access particles is using hashes (see UniquelyIdentifyingParticles.ipynb)
End of explanation |
9,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = dict(zip(sorted_vocab, range(0, len(text))))
int_to_vocab = {v: k for k, v in vocab_to_int.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
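For intuition, a tiny usage example of my own on a toy word list (not part of the project code):

```python
toy_vocab_to_int, toy_int_to_vocab = create_lookup_tables(['the', 'cat', 'sat', 'the'])
print(toy_vocab_to_int['the'])                       # the most frequent word gets id 0
print(toy_int_to_vocab[toy_vocab_to_int['cat']])     # round-trips back to 'cat'
```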
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
dict = {'.': "||Period||", ',':'||Comma||', '"':'||Quotation_Mark||', ';':'||Semicolon||',
'!':'||Exclamation_Mark||', '?': '||Question_Mark||', '(':'||Left_Parentheses||',
')':'||Right_Parentheses||', '--':'||Dash||', '\n':'||Return||'}
# TODO: Implement Function
return dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
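A quick look of my own at the mapping returned by the implementation above:

```python
tokens = token_lookup()
print(tokens['!'])    # ||Exclamation_Mark||
print(tokens['--'])   # ||Dash||
```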
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
Input = tf.placeholder(tf.int32, [None, None], name='input')
Targets = tf.placeholder(tf.int32, [None, None])
LearningRate = tf.placeholder(tf.float32)
return Input, Targets, LearningRate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.7)
# note: stacking more LSTM layers increased training loss in my experiments, so a single layer is used here
cell = tf.contrib.rnn.MultiRNNCell([drop] * 1)
#initial state with all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
#initial_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
# TODO: Implement Function
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# reuse rnn_size as the embedding dimension
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
# note: activation_fn defaults to ReLU, so pass activation_fn=None for a linear output layer
logits = tf.contrib.layers.fully_connected(outputs, vocab_size,
weights_initializer=tf.truncated_normal_initializer(mean=0.0,stddev=0.1),
biases_initializer=tf.zeros_initializer(), activation_fn=None)
# TODO: Implement Function
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# each batch consumes batch_size * seq_length words
slice_size = batch_size * seq_length
# TODO: Implement Function
# number of full batches that fit in the text
n_batches = int(len(int_text) / slice_size)
# build input and target arrays, with targets offset by one word
x_data = np.array(int_text[: n_batches * slice_size])
y_data = np.array(int_text[1: n_batches * slice_size + 1])
x_batches = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
return np.asarray(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
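# Illustrative addition (not part of the original project): a quick shape check on a
# toy 15-word input -- expect 2 batches, each holding input and target arrays of shape (2, 3)
toy_batches = get_batches(list(range(1, 16)), 2, 3)
print(toy_batches.shape)
print(toy_batches[0][0])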
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
# TODO: Implement Function
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return np.random.choice(list(int_to_vocab.values()), 1, p=probabilities)[0]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
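A toy demonstration of my own of sampling from a probability vector:

```python
toy_probs = np.array([0.1, 0.7, 0.2])
toy_int_to_vocab = {0: 'moe', 1: 'homer', 2: 'barney'}
print(pick_word(toy_probs, toy_int_to_vocab))   # prints 'homer' most of the time
```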
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
9,596 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have the following DataFrame: | Problem:
import pandas as pd
import numpy as np
df = pd.DataFrame({'Col1': [1, 4, 7, 10, 13, 16],
'Col2': [2, 5, 8, 11, 14, 17],
'Col3': [3, 6, 9, 12, 15, 18],
'Type': [1, 1, 2, 2, 3, 3]})
List = np.random.permutation(len(df))
def g(df, List):
return df.iloc[List]
result = g(df.copy(), List) |
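# Illustrative addition: verify that the shuffle keeps the same rows and only changes their order
print(result)
print(sorted(result.index) == list(range(len(df))))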
9,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
20150111_2DPlotsonPythonP3
Two-dimensional plots on Python [Part III]
Support material for the blog post "Two-dimensional plots on Python [Part III]", on Programming Science.
Author
Step1: Two plots in one window, different columns.
Step2: Five plots in one window, different rows.
Step3: subplot()
Step4: Plots of trigonometric functions divided in the same window. | Python Code:
from pylab import *
t = arange(0.0,2.0,0.01)
y1 = sin(2*pi*t)
y2 = cos(2*pi*t)
fig, ax = subplots(2, sharex=True)
ax[0].plot(t, y1, color='green', linestyle='-.', linewidth=3)
ax[1].plot(t, y2, color='red', linestyle=':', linewidth=3)
show()
Explanation: 20150111_2DPlotsonPythonP3
Two-dimensional plots on Python [Part III]
Support material for the blog post "Two-dimensional plots on Python [Part III]", on Programming Science.
Author: Alexandre 'Jaguar' Fioravante de Siqueira
Contact: http://programmingscience.org/?page_id=26
Support material:
http://www.github.com/programmingscience/code
In order to cite this material, please use the reference below
(this is a Chicago-like style):
de Siqueira, Alexandre Fioravante. "Two-dimensional plots on Python [Part III]". Programming Science. 2015, Jan 11. Available at
http://www.programmingscience.org/?p=42.
Access date: (please put your access date here).
Copyright (C) Alexandre Fioravante de Siqueira
This program is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your option)
any later version.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
more details.
You should have received a copy of the GNU General Public License along
with this program. If not, see http://www.gnu.org/licenses/.
Several plots in one window.
Two plots in one window, different rows.
End of explanation
from pylab import *
t = arange(0.0,2.0,0.01)
y1 = sin(2*pi*t)
y2 = cos(2*pi*t)
fig, ax = subplots(1, 2, sharex=True)
ax[0].plot(t, y1, color='green', linestyle='-.', linewidth=3)
ax[1].plot(t, y2, color='red', linestyle=':', linewidth=3)
show()
Explanation: Two plots in one window, different columns.
End of explanation
# fig, ax = subplots(5, sharex=True)
Explanation: Five plots in one window, different rows.
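Filling in that call, a minimal sketch of my own (reusing the t array and the pylab imports from above):

```python
fig, ax = subplots(5, sharex=True)
for i in range(5):
    ax[i].plot(t, sin((i + 1) * pi * t))
show()
```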
End of explanation
# fig, ax = subplots(3, 2, sharex=True)
Explanation: subplots(): plots in 3 rows and 2 columns.
End of explanation
from pylab import *
t = arange(0.0,2.0,0.01)
fig, ax = subplots(3, 2, sharex=True)
ax[0, 0].plot(t, sin(pi/8*t), color='green', linestyle='-', linewidth=3)
ax[0, 1].plot(t, cos(pi/8*t), color='red', linestyle='--', linewidth=3)
ax[1, 0].plot(t, tan(pi/8*t), color='cyan', linestyle='-.', linewidth=3)
ax[1, 1].plot(t, 1/cos(pi/8*t), color='magenta', linestyle=':', linewidth=3)
ax[2, 0].plot(t, 1/sin(pi/8*t), color='yellow', linestyle='--', linewidth=3)
ax[2, 1].plot(t, 1/tan(pi/8*t), color='black', linestyle=':', linewidth=3)
show()
Explanation: Plots of trigonometric functions divided in the same window.
End of explanation |
9,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Consistency testing
For most problems, multiple flux states can achieve the same optimum and thus we try to obtain a consistent network. By this, we mean that there will be mulitple blocked reactions in the network, which gives rise to this inconsistency. To solve this problem, we use algorithms which can detect all the blocked reactions and also give us consistent networks.
Let us take a toy network, like so
Step1: Using FVA
The first approach we can follow is to use FVA (Flux Variability Analysis) which among many other applications, is used to detect blocked reactions. The cobra.flux_analysis.find_blocked_reactions() function will return a list of all the blocked reactions obtained using FVA.
Step2: As we see above, we are able to obtain the blocked reaction, which in this case is $v_2$.
Using FASTCC
The second approach to obtaining consistent network in cobrapy is to use FASTCC. Using this method, you can expect to efficiently obtain an accurate consistent network. For more details regarding the algorithm, please see Vlassis N, Pacheco MP, Sauter T (2014). | Python Code:
import cobra
test_model = cobra.Model("test_model")
v1 = cobra.Reaction("v1")
v2 = cobra.Reaction("v2")
v3 = cobra.Reaction("v3")
v4 = cobra.Reaction("v4")
v5 = cobra.Reaction("v5")
v6 = cobra.Reaction("v6")
test_model.add_reactions([v1, v2, v3, v4, v5, v6])
v1.reaction = "-> 2 A"
v2.reaction = "A <-> B"
v3.reaction = "A -> D"
v4.reaction = "A -> C"
v5.reaction = "C -> D"
v6.reaction = "D ->"
v1.bounds = (0.0, 3.0)
v2.bounds = (-3.0, 3.0)
v3.bounds = (0.0, 3.0)
v4.bounds = (0.0, 3.0)
v5.bounds = (0.0, 3.0)
v6.bounds = (0.0, 3.0)
test_model.objective = v6
Explanation: Consistency testing
For most problems, multiple flux states can achieve the same optimum, and thus we try to obtain a consistent network. By this, we mean that there will be multiple blocked reactions in the network, which give rise to this inconsistency. To solve this problem, we use algorithms which can detect all the blocked reactions and also give us consistent networks.
Let us take a toy network, like so:
\begin{align}
v_1 &: {} \rightarrow 2A \
v_2 &: A \leftrightarrow B \
v_3 &: A \rightarrow D \
v_4 &: A \rightarrow C \
v_5 &: C \rightarrow D \
v_6 &: D \rightarrow
\end{align}
Here, $v_{x}$, where $x \in \{1, 2, \ldots, 6\}$, represents the flux carried by the reactions shown above.
End of explanation
cobra.flux_analysis.find_blocked_reactions(test_model)
Explanation: Using FVA
The first approach we can follow is to use FVA (Flux Variability Analysis) which among many other applications, is used to detect blocked reactions. The cobra.flux_analysis.find_blocked_reactions() function will return a list of all the blocked reactions obtained using FVA.
End of explanation
consistent_model = cobra.flux_analysis.fastcc(test_model)
consistent_model.reactions
Explanation: As we see above, we are able to obtain the blocked reaction, which in this case is $v_2$.
Using FASTCC
The second approach to obtaining a consistent network in cobrapy is to use FASTCC. Using this method, you can expect to efficiently obtain an accurate consistent network. For more details regarding the algorithm, please see Vlassis N, Pacheco MP, Sauter T (2014).
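As a cross-check of my own, the reaction FVA flagged as blocked should be exactly the one FASTCC dropped:

```python
removed = {r.id for r in test_model.reactions} - {r.id for r in consistent_model.reactions}
print(removed)   # expected: {'v2'}
```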
End of explanation |
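As a minimal sanity check, assuming the test_model and consistent_model objects created above, we can compare the reaction identifiers before and after FASTCC to confirm that only the blocked reaction was dropped.
```python
# Reaction ids present in the original model but missing from the
# FASTCC-consistent model; for this toy network we expect {'v2'}.
original_ids = {reaction.id for reaction in test_model.reactions}
consistent_ids = {reaction.id for reaction in consistent_model.reactions}
print(original_ids - consistent_ids)
```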
9,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with Caffe on Docker environment
21 October 2015
Alejandro Cartas
1. Introduction
What is a Deep Learning programming framework?
It is a combination of specialized hardware and software used to create and train Deep Learning networks. The framework stack shown below corresponds to the GPU stack, and some components are optional. In this guide we will use a CPU stack.
Adapted figure from Nvidia courseware.
Why use Caffe?
Every Deep Learning framework has its advantages and its drawbacks. I'll use Caffe mainly because it seems to be more out-of-the-box than Theano, but many researchers prefer Theano because of its more detailed level of programming control. See the Reddit discussion Best framework for Deep Neural Nets? at https
Step1: Since the digits of the MNIST were stored in a special format, we need to load them
Step2: Now we can visualize one by one as follows (<span style="color
Step3: Now we can do some predictions using our trained LeNet model | Python Code:
import caffe
import matplotlib.pyplot as plt
import matplotlib.ticker as plticker
import matplotlib as mpl
import numpy as np
import os
import struct
%matplotlib inline
Explanation: Getting started with Caffe on Docker environment
21 October 2015
Alejandro Cartas
1. Introduction
What is a Deep Learning programming framework?
It is a combination of specialized hardware and software used to create and train Deep Learning networks. The framework stack shown below corresponds to the GPU stack, and some components are optional. In this guide we will use a CPU stack.
Adapted figure from Nvidia courseware.
Why use Caffe?
Every Deep Learning framework has its advantages and its drawbacks. I'll use Caffe mainly because it seems to be more out-of-the-box than Theano, but many researchers prefer Theano because of its more detailed level of programming control. See the Reddit discussion Best framework for Deep Neural Nets? at https://www.reddit.com/comments/2c9x0s and a performance comparison at https://github.com/soumith/convnet-benchmarks.
Deep learning frameworks comparison table from Nvidia courseware.
What is a Docker container?
According to its website, "Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in".
Why use Caffe on a Docker container?
It is perhaps the easiest and fastest way to get started with Caffe without breaking your system or your nerves.
2. Before starting
If you have no experience using Docker (like me), please know that if you make any change to the Docker image that you want to preserve, you must save it by following step 2 of section 4, using the Container ID described in step 2 of section 3.
3. Setup
1. Install VirtualBox and Docker. Docker runs containers that hold an entire OS and, in this setup, it runs on top of a virtual machine, in this case VirtualBox. Some Docker installation packages already include VirtualBox, but you can download VirtualBox for free at https://www.virtualbox.org/wiki/Downloads. Additionally, a detailed and easy guide for installing Docker on your system can be found at https://docs.docker.com/installation/.
2. Download and start the Caffe image. Four different Caffe images for Docker are linked on the official site at https://github.com/BVLC/caffe/wiki/Installation. We will use a stable build of Caffe (NoGPU). After starting the Docker Terminal, we type in the terminal:
```bash
# Pulling the stable build Caffe (NoGPU) image for Docker
docker pull tleyden5iwx/caffe-cpu-master
```
Now, let's start our newly downloaded image:
```bash
docker run -i -t tleyden5iwx/caffe-cpu-master:latest /bin/bash
```
We will be logged into our Caffe image with a prompt similar to this one: root@4dabab76fab3:~/caffe#. The hostname displayed in the prompt corresponds to the Container ID; in this case, 4dabab76fab3 is our Container ID. It is important to keep the Container ID in order to commit any changes to the image.
3. Installing the required packages. These steps will install the packages required to have a Jupyter notebook working. (Note that the current user is root.)
```bash
# Updating our system
apt-get update && apt-get upgrade
# Installing numerical libraries
sudo apt-get install liblapack-dev liblapack-doc-man liblapack-doc liblapack-pic liblapack3 liblapack-test liblapack3gf liblapacke liblapacke-dev
# Installing Python's pip and Jupyter
apt-get install python-pip python-numpy python-scipy python-matplotlib python-pandas python-sympy python-nose libatlas3gf-base python-sklearn python-yaml nano
pip install jupyter
```
4. Getting started with LeNet on MNIST dataset
<span style="color:red;font-weight:bold">IMPORTANT NOTE: A walkthrough guide of the LeNet network can be found at</span> https://github.com/BVLC/caffe/blob/master/examples/01-learning-lenet.ipynb. <span style="color:red;font-weight:bold">So if you intend to follow that guide, you can simply save your Docker image by doing the second and third steps of this section and skip the rest. Also note that the Caffe version distributed in this stable Docker container might need to be updated in order to follow that tutorial; you can follow section 5 to update it.</span>
1. Training the LeNet network.
```bash
# Going to the Caffe directory
cd /opt/caffe
# Downloading the MNIST dataset
./data/mnist/get_mnist.sh
# Converts the data into lmdb/leveldb format (calls a C++ binary that does the dirty job)
./examples/mnist/create_mnist.sh
```
An error reporting libdc1394 error: Failed to initialize libdc1394 may appear. You don't have to worry about this error, since it seems to be related to the Docker image itself. A "solution" can be found here.
Since we are using only the CPU, we must do one thing before training the model: replace the line solver_mode: GPU with solver_mode: CPU in the lenet_solver.prototxt file. We can do this as follows:
```bash
sed -i 's/solver_mode: GPU/solver_mode: CPU/' examples/mnist/lenet_solver.prototxt
```
Now let's train the model:
```bash
./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt
```
This will take a few minutes and will output logging messages. When it is done training the model, it will create the files lenet_iter_10000.caffemodel and lenet_iter_10000.solverstate in the examples/mnist/ directory.
2. Saving the container image. Now we should log out of the running image and save it. We can log out by typing exit and hitting Enter, or by pressing Ctrl+D. We can save the image using the Container ID we got in step 2 of section 3:
```bash
docker commit -m "Setup completed" 4dabab76fab3 tleyden5iwx/caffe-cpu-master:v1
```
3. Running Jupyter notebook. We can start a Jupyter notebook with the following command:
```bash
docker run -i -p 8888:8888 -t tleyden5iwx/caffe-cpu-master:v1 /bin/bash -c 'cd /opt/caffe && jupyter notebook --port=8888 --ip="*" --no-browser'
```
<span style="color:red;font-weight:bold">Note that this opens a Jupyter notebook session to everybody and could be a security concern.</span>
Now we can open the notebook in any browser using the IP address assigned to Docker. We can find it by typing in the terminal:
```bash
docker-machine ip default
```
While writing this guide my Docker IP was 192.168.99.100, so Jupyter can be accessed at http://192.168.99.100:8888.
4. Trying some MNIST test examples. After creating a Jupyter notebook on our Docker machine, we start with the usual imports; a short note on forcing CPU mode is added right after this explanation:
End of explanation
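One optional, minimal addition before loading any data: since this container has no GPU support, it does not hurt to put pycaffe explicitly into CPU mode right after the imports.
```python
# The container is CPU-only, so make sure pycaffe does not try to use a GPU.
caffe.set_mode_cpu()
```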
# Function adapted from https://gist.github.com/akesling/5358964.
def load_mnist_test_data(path="."):
    fname_img = os.path.join(path, 't10k-images-idx3-ubyte')
    fname_lbl = os.path.join(path, 't10k-labels-idx1-ubyte')
    # Load everything in some numpy arrays
    with open(fname_lbl, 'rb') as flbl:
        magic, num = struct.unpack(">II", flbl.read(8))
        lbl = np.fromfile(flbl, dtype=np.int8)
    with open(fname_img, 'rb') as fimg:
        magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
        img = np.fromfile(fimg, dtype=np.uint8).reshape(len(lbl), rows, cols)
    # Each item is a (label, 28x28x1 image) pair
    get_img = lambda idx: (lbl[idx], np.reshape(img[idx], (28, 28, 1)))
    # Create an iterator which returns each image in turn
    for i in xrange(len(lbl)):
        yield get_img(i)

test_set = load_mnist_test_data("/opt/caffe/data/mnist/")
Explanation: Since the digits of the MNIST dataset are stored in a special binary format, we need to load them ourselves; a short peek at the file header is sketched right after this cell:
End of explanation
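As a minimal illustration of that format, assuming the MNIST files were downloaded to /opt/caffe/data/mnist/ by the earlier script, the label file begins with a big-endian header holding a magic number and the item count, which is exactly what the struct.unpack call above reads.
```python
# Peek at the header of the MNIST test label file. The IDX format stores a
# big-endian magic number (2049 for label files) followed by the item count.
with open('/opt/caffe/data/mnist/t10k-labels-idx1-ubyte', 'rb') as flbl:
    magic, num = struct.unpack(">II", flbl.read(8))
print(magic, num)  # expected: 2049 10000 for the 10k-image test set
```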
def plot_mnist_digit(image, title=None):
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    # Show the 28x28 single-channel image with an inverted grayscale colormap
    imgplot = ax.imshow(image[:, :, 0], cmap=mpl.cm.Greys)
    imgplot.set_interpolation('nearest')
    ax.xaxis.set_ticks_position('bottom')
    ax.yaxis.set_ticks_position('left')
    major_ticks = np.arange(0, 29, 7)
    minor_ticks = np.arange(0, 28, 1)
    ax.set_xticks(major_ticks)
    ax.set_xticks(minor_ticks, minor=True)
    ax.set_yticks(major_ticks)
    ax.set_yticks(minor_ticks, minor=True)
    # ax.grid(which='both', color='gray', linestyle='-', linewidth=0.5)
    if title is not None:
        plt.title(title, fontsize=15)
    plt.show()
digit = next(test_set)
label = digit[0]; image = digit[1]
plot_mnist_digit(image, "LABEL: " + str(label))
Explanation: Now we can visualize the digits one by one as follows (<span style="color:red;font-style:italic;">Please note that the grayscale is plotted inverted</span>):
End of explanation
# Creating our trained classifier
classifier = caffe.Classifier('/opt/caffe/examples/mnist/lenet.prototxt',
'/opt/caffe/examples/mnist/lenet_iter_10000.caffemodel')
for i in xrange(5):
    digit = next(test_set)
    label = digit[0]; image = digit[1]
    prediction = classifier.predict([image], oversample=False)
    predicted_label = np.argmax(prediction)
    plot_mnist_digit(image, "LABEL: " + str(label) + " PREDICTED LABEL: " + str(predicted_label))
Explanation: Now we can do some predictions using our trained LeNet model. A sketch for scoring the whole test set follows right after this cell.
End of explanation |
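As a rough sketch, reusing the classifier and the loader defined above, we could also estimate overall accuracy on the full 10,000-image test set in the same way; predicting one image at a time on the CPU is slow, so expect this to take a while.
```python
# Rough accuracy estimate over the whole test set, one image at a time.
test_set = load_mnist_test_data("/opt/caffe/data/mnist/")
correct = 0
total = 0
for label, image in test_set:
    prediction = classifier.predict([image], oversample=False)
    if np.argmax(prediction) == label:
        correct += 1
    total += 1
print("Accuracy: %.4f" % (correct / float(total)))
```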