Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k)
---|---|---|
11,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting sentiment from product reviews
Fire up GraphLab Create
Step1: Read some product review data
Loading reviews for a set of baby products.
Step2: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
Step3: Build the word count vector for each review
Step4: Examining the reviews for most-sold product
Step5: Build a sentiment classifier
Step6: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
Step7: Let's train the sentiment classifier
Step8: Evaluate the sentiment model
Step9: Applying the learned model to understand sentiment for Giraffe
Step10: Sort the reviews based on the predicted sentiment and explore
Step11: Most positive reviews for the giraffe
Step12: Show most negative reviews for giraffe | Python Code:
import graphlab
Explanation: Predicting sentiment from product reviews
Fire up GraphLab Create
End of explanation
products = graphlab.SFrame('amazon_baby.gl/')
Explanation: Read some product review data
Loading reviews for a set of baby products.
End of explanation
products.head()
Explanation: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
End of explanation
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
Explanation: Build the word count vector for each review
End of explanation
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
Explanation: Examining the reviews for most-sold product: 'Vulli Sophie the Giraffe Teether'
End of explanation
products['rating'].show(view='Categorical')
Explanation: Build a sentiment classifier
End of explanation
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
Explanation: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
End of explanation
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
Explanation: Let's train the sentiment classifier
End of explanation
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
Explanation: Evaluate the sentiment model
End of explanation
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
Explanation: Applying the learned model to understand sentiment for Giraffe
End of explanation
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
Explanation: Most positive reviews for the giraffe
End of explanation
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
def sent_word_count(word_counts, word):
    # Count how many times `word` occurs in one review's word_count dictionary
    if word in word_counts:
        return word_counts[word]
    else:
        return 0

# Create one column per selected word, holding its count in each review
for word in selected_words:
    products[word] = products['word_count'].apply(lambda wc, w=word: sent_word_count(wc, w))

word_dict = {}
for word in selected_words:
    word_dict[word] = products[word].sum()
train_data, test_data = products.random_split(.8, seed=0)
selected_words_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=selected_words,
validation_set=test_data)
selected_words_model['coefficients']
swm_coefficients = selected_words_model['coefficients']
swm_coefficients.sort('value')
selected_words_model.evaluate(test_data, metric='roc_curve')
baby_products = products[products['name'] == 'Baby Trend Diaper Champ']
baby_products['predicted_sentiment'] = selected_words_model.predict(baby_products, output_type='probability')
baby_products = baby_products.sort('predicted_sentiment', ascending=False)
baby_products.head()
baby_products['review'][0]
baby_products['predicted_sentiment'] = sentiment_model.predict(baby_products, output_type='probability')
baby_products = baby_products.sort('predicted_sentiment', ascending=False)
baby_products['review'][0]
baby_products.head()
Explanation: Show most negative reviews for giraffe
End of explanation |
11,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Averaging detector data with Dask
We often want to average large detector data across trains, keeping the pulses within each train separate, so we have an average image for pulse 0, another for pulse 1, etc.
This data may be too big to load into memory at once, but using Dask we can work with it like a numpy array. Dask takes care of splitting the job up into smaller pieces and assembling the result.
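As a minimal, self-contained sketch of that idea (the array shape and chunking below are made up for illustration and are not the detector data used in this notebook):
```python
import dask.array as da

# A "virtual" array split into chunks; no chunk is materialised until needed
x = da.random.random((100_000, 1_000), chunks=(10_000, 1_000))
col_means = x.mean(axis=0)    # only builds a task graph -- nothing is computed yet
result = col_means.compute()  # executes the graph, in parallel where possible
print(result.shape)           # (1000,)
```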
Step1: First, we use Dask-Jobqueue to talk to the Maxwell cluster.
Step2: If the cluster is busy, you might need to wait a while for the jobs to start.
The cluster widget above will update when they're running.
Next, we'll set Dask up to use those workers
Step3: Now Dask is ready, let's open the run we're going to operate on
Step4: We're working with data from the DSSC detector.
In this run, it's recording 75 frames for each train - this is part of the info above.
Now, we'll define how we're going to average over trains for each module
Step5: Dask shows us what shape the result array will be, but so far, no real computation has happened.
Now that we've defined what we want, let's tell Dask to compute it.
This will take a minute or two. If you're running it, scroll up to the Dask cluster widget and click the status link to see what it's doing.
Step6: all_average_arr is a regular numpy array with our results. Here are the values from the corner of module 0, frame 0
Step7: Please shut down the cluster (or scale it down to 0 workers) if you won't be using it for a while.
This releases the resources for other people. | Python Code:
from karabo_data import open_run
import dask.array as da
from dask.distributed import Client, progress
from dask_jobqueue import SLURMCluster
import numpy as np
Explanation: Averaging detector data with Dask
We often want to average large detector data across trains, keeping the pulses within each train separate, so we have an average image for pulse 0, another for pulse 1, etc.
This data may be too big to load into memory at once, but using Dask we can work with it like a numpy array. Dask takes care of splitting the job up into smaller pieces and assembling the result.
End of explanation
partition = 'exfel' # For EuXFEL staff
#partition = 'upex' # For users
cluster = SLURMCluster(
queue=partition,
# Resources per SLURM job (per node, the way SLURM is configured on Maxwell)
# processes=16 runs 16 Dask workers in a job, so each worker has 1 core & 16 GB RAM.
processes=16, cores=16, memory='256GB',
)
# Get a notebook widget showing the cluster state
cluster
# Submit 2 SLURM jobs, for 32 Dask workers
cluster.scale(32)
Explanation: First, we use Dask-Jobqueue to talk to the Maxwell cluster.
End of explanation
client = Client(cluster)
print("Created dask client:", client)
Explanation: If the cluster is busy, you might need to wait a while for the jobs to start.
The cluster widget above will update when they're running.
Next, we'll set Dask up to use those workers:
End of explanation
run = open_run(proposal=2212, run=103)
run.info()
Explanation: Now Dask is ready, let's open the run we're going to operate on:
End of explanation
def average_module(modno, run, pulses_per_train=75):
source = f'SCS_DET_DSSC1M-1/DET/{modno}CH0:xtdf'
counts = run.get_data_counts(source, 'image.data')
arr = run.get_dask_array(source, 'image.data')
# Make a new dimension for trains
arr_trains = arr.reshape(-1, pulses_per_train, 128, 512)
if modno == 0:
print("array shape:", arr.shape) # frames, dummy, 128, 512
print("Reshaped to:", arr_trains.shape)
return arr_trains.mean(axis=0, dtype=np.float32)
mod_averages = [
average_module(i, run, pulses_per_train=75)
for i in range(16)
]
mod_averages
# Stack the averages into a single array
all_average = da.stack(mod_averages)
all_average
Explanation: We're working with data from the DSSC detector.
In this run, it's recording 75 frames for each train - this is part of the info above.
Now, we'll define how we're going to average over trains for each module:
End of explanation
%%time
all_average_arr = all_average.compute() # Get a concrete numpy array for the result
Explanation: Dask shows us what shape the result array will be, but so far, no real computation has happened.
Now that we've defined what we want, let's tell Dask to compute it.
This will take a minute or two. If you're running it, scroll up to the Dask cluster widget and click the status link to see what it's doing.
End of explanation
print(all_average_arr[0, 0, :5, :5])
Explanation: all_average_arr is a regular numpy array with our results. Here are the values from the corner of module 0, frame 0:
End of explanation
client.close()
cluster.close()
Explanation: Please shut down the cluster (or scale it down to 0 workers) if you won't be using it for a while.
This releases the resources for other people.
End of explanation |
11,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Blind Source Separation with the Shogun Machine Learning Toolbox
By Kevin Hughes
This notebook illustrates <a href="http
Step1: Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook.
Step2: Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here
Step3: Now let's load a second audio clip
Step4: and a third audio clip
Step5: Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together!
First another nuance - what if the audio clips aren't the same length? The solution I came up with for this was to simply resize them all to the length of the longest signal, the extra length will just be filled with zeros so it won't affect the sound.
The signals are mixed by creating a mixing matrix $A$ and taking the dot product of $A$ with the signals $S$.
Afterwards I plot the mixed signals and create the wavPlayers, have a listen!
Step6: Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy!
Step7: Now let's unmix those signals!
In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Approximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail on how JADE works behind the scenes but here is the reference for the original paper
Step8: That's all there is to it! Check out how nicely those signals have been separated and have a listen! | Python Code:
import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import wavfile
from scipy.signal import resample
import shogun as sg
def load_wav(filename,samplerate=44100):
# load file
rate, data = wavfile.read(filename)
# convert stereo to mono
if len(data.shape) > 1:
data = data[:,0]/2 + data[:,1]/2
# re-interpolate samplerate
ratio = float(samplerate) / float(rate)
data = resample(data, int(len(data) * ratio))
return samplerate, data.astype(np.int16)
Explanation: Blind Source Separation with the Shogun Machine Learning Toolbox
By Kevin Hughes
This notebook illustrates <a href="http://en.wikipedia.org/wiki/Blind_signal_separation">Blind Source Separation</a> (BSS) on audio signals using <a href="http://en.wikipedia.org/wiki/Independent_component_analysis">Independent Component Analysis</a> (ICA) in Shogun. We generate a mixed signal and try to separate it out using Shogun's implementation of ICA & BSS called <a href="http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CJade.html">JADE</a>.
My favorite example of this problem is known as the cocktail party problem where a number of people are talking simultaneously and we want to separate each persons speech so we can listen to it separately. Now the caveat with this type of approach is that we need as many mixtures as we have source signals or in terms of the cocktail party problem we need as many microphones as people talking in the room.
Let's get started, this example is going to be in python and the first thing we are going to need to do is load some audio files. To make things a bit easier further on in this example I'm going to wrap the basic scipy wav file reader and add some additional functionality. First I added a case to handle converting stereo wav files back into mono wav files and secondly this loader takes a desired sample rate and resamples the input to match. This is important because when we mix the two audio signals they need to have the same sample rate.
End of explanation
from IPython.display import Audio
from IPython.display import display
def wavPlayer(data, rate):
display(Audio(data, rate=rate))
Explanation: Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook.
End of explanation
# change to the shogun-data directory
import os
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
%matplotlib inline
import pylab as pl
# load
fs1,s1 = load_wav('tbawht02.wav') # Terran Battlecruiser - "Good day, commander."
# plot
pl.figure(figsize=(6.75,2))
pl.plot(s1)
pl.title('Signal 1')
pl.show()
# player
wavPlayer(s1, fs1)
Explanation: Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here: http://wavs.unclebubby.com/computer/starcraft/ among other places on the web or from your Starcraft install directory (come on I know its still there).
Another good source of data (although lets be honest less cool) is ICA central and various other more academic data sets: http://perso.telecom-paristech.fr/~cardoso/icacentral/base_multi.html. Note that for lots of these data sets the data will be mixed already so you'll be able to skip the next few steps.
Okay lets load up an audio file. I chose the Terran Battlecruiser saying "Good Day Commander". In addition to the creating a wavPlayer I also plotted the data using Matplotlib (and tried my best to have the graph length match the HTML player length). Have a listen!
End of explanation
# load
fs2,s2 = load_wav('TMaRdy00.wav') # Terran Marine - "You want a piece of me, boy?"
# plot
pl.figure(figsize=(6.75,2))
pl.plot(s2)
pl.title('Signal 2')
pl.show()
# player
wavPlayer(s2, fs2)
Explanation: Now let's load a second audio clip:
End of explanation
# load
fs3,s3 = load_wav('PZeRdy00.wav') # Protoss Zealot - "My life for Aiur!"
# plot
pl.figure(figsize=(6.75,2))
pl.plot(s3)
pl.title('Signal 3')
pl.show()
# player
wavPlayer(s3, fs3)
Explanation: and a third audio clip:
End of explanation
# Adjust for different clip lengths
fs = fs1
length = max([len(s1), len(s2), len(s3)])
s1 = np.resize(s1, (length,1))
s2 = np.resize(s2, (length,1))
s3 = np.resize(s3, (length,1))
S = (np.c_[s1, s2, s3]).T
# Mixing Matrix
#A = np.random.uniform(size=(3,3))
#A = A / A.sum(axis=0)
A = np.array([[1, 0.5, 0.5],
[0.5, 1, 0.5],
[0.5, 0.5, 1]])
print('Mixing Matrix:')
print(A.round(2))
# Mix Signals
X = np.dot(A,S)
# Mixed Signal i
for i in range(X.shape[0]):
pl.figure(figsize=(6.75,2))
pl.plot((X[i]).astype(np.int16))
pl.title('Mixed Signal %d' % (i+1))
pl.show()
wavPlayer((X[i]).astype(np.int16), fs)
Explanation: Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together!
First another nuance - what if the audio clips aren't the same length? The solution I came up with for this was to simply resize them all to the length of the longest signal, the extra length will just be filled with zeros so it won't affect the sound.
The signals are mixed by creating a mixing matrix $A$ and taking the dot product of $A$ with the signals $S$.
Afterwards I plot the mixed signals and create the wavPlayers, have a listen!
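In matrix notation, the mixing step here and the unmixing step performed later are simply (this restates the description above, it is not new material):
$$X = A\,S, \qquad \hat{S} = W X \approx S, \quad W \approx A^{-1},$$
where the rows of $S$ are the three source clips, $A$ is the $3 \times 3$ mixing matrix, $X$ holds the mixed signals, and $W$ is the unmixing matrix estimated by the ICA algorithm.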
End of explanation
from shogun import features
# Convert to features for shogun
mixed_signals = features((X).astype(np.float64))
Explanation: Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy!
End of explanation
# Separating with JADE
jade = sg.transformer('Jade')
jade.fit(mixed_signals)
signals = jade.transform(mixed_signals)
S_ = signals.get('feature_matrix')
A_ = jade.get('mixing_matrix')
A_ = A_ / A_.sum(axis=0)
print('Estimated Mixing Matrix:')
print(A_)
Explanation: Now let's unmix those signals!
In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Approximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail on how JADE works behind the scenes but here is the reference for the original paper:
Cardoso, J. F., & Souloumiac, A. (1993). Blind beamforming for non-Gaussian signals. In IEE Proceedings F (Radar and Signal Processing) (Vol. 140, No. 6, pp. 362-370). IET Digital Library.
Shogun also has several other ICA algorithms including the Second Order Blind Identification (SOBI) algorithm, FFSep, JediSep, UWedgeSep and FastICA. All of the algorithms inherit from the ICAConverter base class and share some common methods for setting an initial guess for the mixing matrix, retrieving the final mixing matrix and getting/setting the number of iterations to run and the desired convergence tolerance. Some of the algorithms have additional getters for intermediate calculations, for example Jade has a method for returning the 4th order cumulant tensor while the "Sep" algorithms have a getter for the time lagged covariance matrices. Check out the source code on GitHub (https://github.com/shogun-toolbox/shogun) or the Shogun docs (http://www.shogun-toolbox.org/doc/en/latest/annotated.html) for more details!
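As a rough sketch of what swapping in one of those alternatives could look like — assuming here (not verified) that SOBI is exposed through the same transformer factory name as Jade — the calling pattern would mirror the Jade code used above:
```python
# Hypothetical sketch: 'SOBI' is assumed to be a valid transformer name in this build
sobi = sg.transformer('SOBI')
sobi.fit(mixed_signals)
sobi_signals = sobi.transform(mixed_signals)
S_sobi = sobi_signals.get('feature_matrix')
A_sobi = sobi.get('mixing_matrix')
```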
End of explanation
# Show separation results
# Separated Signal i
gain = 4000
for i in range(S_.shape[0]):
pl.figure(figsize=(6.75,2))
pl.plot((gain*S_[i]).astype(np.int16))
pl.title('Separated Signal %d' % (i+1))
pl.show()
wavPlayer((gain*S_[i]).astype(np.int16), fs)
Explanation: Thats all there is to it! Check out how nicely those signals have been separated and have a listen!
End of explanation |
11,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Reframing Design Pattern
The Reframing design pattern refers to changing the representation of the output of a machine learning problem. For example, we could take something that is intuitively a regression problem and instead pose it as a classification problem (and vice versa).
Let's look at the natality dataset. Notice that for a given set of inputs, the weight_pounds (the label) can take many different values.
Step3: Comparing categorical label and regression
Since baby weight is a positive real value, this is intuitively a regression problem. However, we can train the model as a multi-class classification by bucketizing the output label. At inference time, the model then predicts a collection of probabilities corresponding to these potential outputs.
Let's do both and see how they compare.
Step4: We'll use the same features for both models. But we need to create a categorical weight label for the classification model.
Step5: Create tf.data datasets for both classification and regression.
Step6: First, train the classification model and examine the validation accuracy.
Step7: Next, we'll train the regression model and examine the validation RMSE.
Step8: The regression model gives a single numeric prediction of baby weight.
Step9: The classification model predicts a probability for each bucket of values.
Step10: Increasing the number of categorical labels
We'll generalize the code above to accommodate N label buckets, instead of just 4.
Step11: Create the feature columns and build the model.
Step12: Make a prediction on the example above.
Step13: Restricting the prediction range
One way to restrict the prediction range is to make the last-but-one activation function sigmoid instead, and add a lambda layer to scale the (0,1) values to the desired range. The drawback is that it will be difficult for the neural network to reach the extreme values. | Python Code:
import numpy as np
import seaborn as sns
from google.cloud import bigquery
import matplotlib as plt
%matplotlib inline
bq = bigquery.Client()
query = """
SELECT
  weight_pounds,
  is_male,
  gestation_weeks,
  mother_age,
  plurality,
  mother_race
FROM
  `bigquery-public-data.samples.natality`
WHERE
  weight_pounds IS NOT NULL
  AND is_male = true
  AND gestation_weeks = 38
  AND mother_age = 28
  AND mother_race = 1
  AND plurality = 1
  AND RAND() < 0.01
"""
df.head()
fig = sns.distplot(df[["weight_pounds"]])
fig.set_title("Distribution of baby weight")
fig.set_xlabel("weight_pounds")
fig.figure.savefig("weight_distrib.png")
#average weight_pounds for this cross section
np.mean(df.weight_pounds)
np.std(df.weight_pounds)
weeks = 36
age = 28
query = """
SELECT
  weight_pounds,
  is_male,
  gestation_weeks,
  mother_age,
  plurality,
  mother_race
FROM
  `bigquery-public-data.samples.natality`
WHERE
  weight_pounds IS NOT NULL
  AND is_male = true
  AND gestation_weeks = {}
  AND mother_age = {}
  AND mother_race = 1
  AND plurality = 1
  AND RAND() < 0.01
""".format(weeks, age)
df = bq.query(query).to_dataframe()
print('weeks={} age={} mean={} stddev={}'.format(weeks, age, np.mean(df.weight_pounds), np.std(df.weight_pounds)))
Explanation: Reframing Design Pattern
The Reframing design pattern refers to changing the representation of the output of a machine learning problem. For example, we could take something that is intuitively a regression problem and instead pose it as a classification problem (and vice versa).
Let's look at the natality dataset. Notice that for a given set of inputs, the weight_pounds (the label) can take many different values.
End of explanation
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.utils import to_categorical
from tensorflow import keras
from tensorflow import feature_column as fc
from tensorflow.keras import layers, models, Model
%matplotlib inline
df = pd.read_csv("./data/babyweight_train.csv")
Explanation: Comparing categorical label and regression
Since baby weight is a positive real value, this is intuitively a regression problem. However, we can train the model as a multi-class classification by bucketizing the output label. At inference time, the model then predicts a collection of probabilities corresponding to these potential outputs.
Let's do both and see how they compare.
End of explanation
# prepare inputs
df.is_male = df.is_male.astype(str)
df.mother_race.fillna(0, inplace = True)
df.mother_race = df.mother_race.astype(str)
# create categorical label
def categorical_weight(weight_pounds):
if weight_pounds < 3.31:
return 0
elif weight_pounds >= 3.31 and weight_pounds < 5.5:
return 1
elif weight_pounds >= 5.5 and weight_pounds < 8.8:
return 2
else:
return 3
df["weight_category"] = df.weight_pounds.apply(lambda x: categorical_weight(x))
df.head()
def encode_labels(classes):
one_hots = to_categorical(classes)
return one_hots
FEATURES = ['is_male', 'mother_age', 'plurality', 'gestation_weeks', 'mother_race']
LABEL_CLS = ['weight_category']
LABEL_REG = ['weight_pounds']
N_TRAIN = int(df.shape[0] * 0.80)
X_train = df[FEATURES][:N_TRAIN]
X_valid = df[FEATURES][N_TRAIN:]
y_train_cls = encode_labels(df[LABEL_CLS][:N_TRAIN])
y_train_reg = df[LABEL_REG][:N_TRAIN]
y_valid_cls = encode_labels(df[LABEL_CLS][N_TRAIN:])
y_valid_reg = df[LABEL_REG][N_TRAIN:]
Explanation: We'll use the same features for both models. But we need to create a categorical weight label for the classification model.
End of explanation
# train/validation dataset for classification model
cls_train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train_cls))
cls_valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid_cls))
# train/validation dataset for regression model
reg_train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train_reg.values))
reg_valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid_reg.values))
# Examine the two datasets. Notice the different label values.
for data_type in [cls_train_data, reg_train_data]:
for dict_slice in data_type.take(1):
print("{}\n".format(dict_slice))
# create feature columns to handle categorical variables
numeric_columns = [fc.numeric_column("mother_age"),
fc.numeric_column("gestation_weeks")]
CATEGORIES = {
'plurality': list(df.plurality.unique()),
'is_male' : list(df.is_male.unique()),
'mother_race': list(df.mother_race.unique())
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = fc.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab, dtype=tf.string)
categorical_columns.append(fc.indicator_column(cat_col))
# create Inputs for model
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({colname: tf.keras.layers.Input(
name=colname, shape=(), dtype=tf.string)
for colname in ["plurality", "is_male", "mother_race"]})
# build DenseFeatures for the model
dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs)
# create hidden layers
h1 = layers.Dense(20, activation="relu")(dnn_inputs)
h2 = layers.Dense(10, activation="relu")(h1)
# create classification model
cls_output = layers.Dense(4, activation="softmax")(h2)
cls_model = tf.keras.models.Model(inputs=inputs, outputs=cls_output)
cls_model.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
# create regression model
reg_output = layers.Dense(1, activation="relu")(h2)
reg_model = tf.keras.models.Model(inputs=inputs, outputs=reg_output)
reg_model.compile(optimizer='adam',
loss=tf.keras.losses.MeanSquaredError(),
metrics=['mse'])
Explanation: Create tf.data datasets for both classification and regression.
End of explanation
# train the classification model
cls_model.fit(cls_train_data.batch(50), epochs=1)
val_loss, val_accuracy = cls_model.evaluate(cls_valid_data.batch(X_valid.shape[0]))
print("Validation accuracy for classifcation model: {}".format(val_accuracy))
Explanation: First, train the classification model and examine the validation accuracy.
End of explanation
# train the regression model
reg_model.fit(reg_train_data.batch(50), epochs=1)
val_loss, val_mse = reg_model.evaluate(reg_valid_data.batch(X_valid.shape[0]))
print("Validation RMSE for regression model: {}".format(val_mse**0.5))
Explanation: Next, we'll train the regression model and examine the validation RMSE.
End of explanation
preds = reg_model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
preds
Explanation: The regression model gives a single numeric prediction of baby weight.
End of explanation
preds = cls_model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
preds
objects = ('very_low', 'low', 'average', 'high')
y_pos = np.arange(len(objects))
predictions = list(preds)
plt.bar(y_pos, predictions, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.title('Baby weight prediction')
plt.show()
Explanation: The classification model predicts a probability for each bucket of values.
End of explanation
# Read in the data and preprocess
df = pd.read_csv("./data/babyweight_train.csv")
# prepare inputs
df.is_male = df.is_male.astype(str)
df.mother_race.fillna(0, inplace = True)
df.mother_race = df.mother_race.astype(str)
# create categorical label
MIN = np.min(df.weight_pounds)
MAX = np.max(df.weight_pounds)
NBUCKETS = 50
def categorical_weight(weight_pounds, weight_min, weight_max, nbuckets=10):
buckets = np.linspace(weight_min, weight_max, nbuckets)
return np.digitize(weight_pounds, buckets) - 1
df["weight_category"] = df.weight_pounds.apply(lambda x: categorical_weight(x, MIN, MAX, NBUCKETS))
def encode_labels(classes):
one_hots = to_categorical(classes)
return one_hots
FEATURES = ['is_male', 'mother_age', 'plurality', 'gestation_weeks', 'mother_race']
LABEL_COLUMN = ['weight_category']
N_TRAIN = int(df.shape[0] * 0.80)
X_train, y_train = df[FEATURES][:N_TRAIN], encode_labels(df[LABEL_COLUMN][:N_TRAIN])
X_valid, y_valid = df[FEATURES][N_TRAIN:], encode_labels(df[LABEL_COLUMN][N_TRAIN:])
# create the training dataset
train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train))
valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid))
Explanation: Increasing the number of categorical labels
We'll generalize the code above to accommodate N label buckets, instead of just 4.
End of explanation
# create feature columns to handle categorical variables
numeric_columns = [fc.numeric_column("mother_age"),
fc.numeric_column("gestation_weeks")]
CATEGORIES = {
'plurality': list(df.plurality.unique()),
'is_male' : list(df.is_male.unique()),
'mother_race': list(df.mother_race.unique())
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = fc.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab, dtype=tf.string)
categorical_columns.append(fc.indicator_column(cat_col))
# create Inputs for model
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({colname: tf.keras.layers.Input(
name=colname, shape=(), dtype=tf.string)
for colname in ["plurality", "is_male", "mother_race"]})
# build DenseFeatures for the model
dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs)
# model
h1 = layers.Dense(20, activation="relu")(dnn_inputs)
h2 = layers.Dense(10, activation="relu")(h1)
output = layers.Dense(NBUCKETS, activation="softmax")(h2)
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
# train the model
model.fit(train_data.batch(50), epochs=1)
Explanation: Create the feature columns and build the model.
End of explanation
preds = model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
objects = [str(_) for _ in range(NBUCKETS)]
y_pos = np.arange(len(objects))
predictions = list(preds)
plt.bar(y_pos, predictions, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.title('Baby weight prediction')
plt.show()
Explanation: Make a prediction on the example above.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
MIN_Y = 3
MAX_Y = 20
input_size = 10
inputs = keras.layers.Input(shape=(input_size,))
h1 = keras.layers.Dense(20, 'relu')(inputs)
h2 = keras.layers.Dense(1, 'sigmoid')(h1) # 0-1 range
output = keras.layers.Lambda(lambda y : (y*(MAX_Y-MIN_Y) + MIN_Y))(h2) # scaled
model = keras.Model(inputs, output)
# fit the model
model.compile(optimizer='adam', loss='mse')
batch_size = 2048
for i in range(0, 10):
x = np.random.rand(batch_size, input_size)
y = 0.5*(x[:,0] + x[:,1]) * (MAX_Y-MIN_Y) + MIN_Y
model.fit(x, y)
# verify
min_y = np.finfo(np.float64).max
max_y = np.finfo(np.float64).min
for i in range(0, 10):
x = np.random.randn(batch_size, input_size)
y = model.predict(x)
min_y = min(y.min(), min_y)
max_y = max(y.max(), max_y)
print('min={} max={}'.format(min_y, max_y))
Explanation: Restricting the prediction range
One way to restrict the prediction range is to make the last-but-one activation function sigmoid instead, and add a lambda layer to scale the (0,1) values to the desired range. The drawback is that it will be difficult for the neural network to reach the extreme values.
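Written out, the lambda layer rescales the sigmoid output $h = \sigma(z) \in (0, 1)$ as
$$y = y_{\min} + (y_{\max} - y_{\min})\,\sigma(z),$$
where $y_{\min}$ and $y_{\max}$ correspond to MIN_Y and MAX_Y in the code, so predictions are confined to that open interval; the endpoints are only approached as $z \to \pm\infty$, which is why the extreme values are hard for the network to reach.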
End of explanation |
11,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to TensorFlow, now leveraging tensors!
In this notebook, we modify our intro to TensorFlow notebook to use tensors in place of our for loop. This is a derivation of Jared Ostmeyer's Naked Tensor code.
The initial steps are identical to the earlier notebook
Step1: Define the cost as a tensor -- more elegant than a for loop and enables distributed computing in TensorFlow
Step2: The remaining steps are also identical to the earlier notebook! | Python Code:
import numpy as np
np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.set_random_seed(42)
xs = [0., 1., 2., 3., 4., 5., 6., 7.]
ys = [-.82, -.94, -.12, .26, .39, .64, 1.02, 1.]
fig, ax = plt.subplots()
_ = ax.scatter(xs, ys)
m = tf.Variable(-0.5)
b = tf.Variable(1.0)
Explanation: Introduction to TensorFlow, now leveraging tensors!
In this notebook, we modify our intro to TensorFlow notebook to use tensors in place of our for loop. This is a derivation of Jared Ostmeyer's Naked Tensor code.
The initial steps are identical to the earlier notebook
End of explanation
ys_model = m*xs+b
total_error = tf.reduce_sum((ys - ys_model)**2)  # sum of squared errors over all points, as a single tensor
Explanation: Define the cost as a tensor -- more elegant than a for loop and enables distributed computing in TensorFlow
End of explanation
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(total_error)
initializer_operation = tf.global_variables_initializer()
with tf.Session() as session:
session.run(initializer_operation)
n_epochs = 1000
for iteration in range(n_epochs):
session.run(optimizer_operation)
slope, intercept = session.run([m, b])
slope
intercept
y_hat = intercept + slope*np.array(xs)
pd.DataFrame(list(zip(ys, y_hat)), columns=['y', 'y_hat'])
fig, ax = plt.subplots()
ax.scatter(xs, ys)
x_min, x_max = ax.get_xlim()
y_min, y_max = intercept, intercept + slope*(x_max-x_min)
ax.plot([x_min, x_max], [y_min, y_max])
_ = ax.set_xlim([x_min, x_max])
Explanation: The remaining steps are also identical to the earlier notebook!
End of explanation |
11,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Question 3
Display first 5 rows of the loaded data
Step2: ...and do a short summary about the data;
The resultant table comes from the CSV served by the URI.
The data set consists of a list of geographical
locations with GPS coordinates of each.
The data has a spread of 839 days starting 22nd Jan 2020 to 9th May 2022 - about a period of 2 years 4 months.
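A quick sketch of how those figures can be checked directly from the loaded frame (this assumes the deaths_df from Question 2 is in scope; in the JHU file the first four columns are metadata and the rest are daily dates):
```python
date_cols = deaths_df.columns[4:]          # skip 'Province/State', 'Country/Region', 'Lat', 'Long'
print(len(date_cols))                      # number of daily columns (about 839)
print(date_cols[0], '->', date_cols[-1])   # first and last reporting dates
```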
Question 4
Get daily cases worldwide ( hint
Step3: Question 5
Get daily increase of death cases via defining a function (hint
Step4: Question 6
Visualize the data obtained in task 4 with library matplotlib | Python Code:
import pandas as pd
deaths_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')
Explanation: <a href="https://colab.research.google.com/github/timomwa/50ForReel/blob/master/ITEC610_Python_Codes_And_Comments.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Answers for;
UTEC610 Python Fundamentals for data science, Semester 1, 2022 Assignment number (2)
Assessment Artefact: Python Codes and Comments
Weighing [30%]
Question 2
Download csv data with pandas with below code
```
import pandas as pd
deaths_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')
```
End of explanation
# Question 3
# Display first 5 rows
# of the loaded data
deaths_df.head(5)
Explanation: Question 3
Display first 5 rows of the loaded data
End of explanation
# Yank out 3 unnecessary columns,
# i.e. 'Lat', 'Long', 'Province/State',
# leaving the 'Country/Region' and
# deaths-per-day columns
death_cases_worldwide = deaths_df.drop(['Lat', 'Long', 'Province/State'], axis=1)
death_cases_worldwide
Explanation: ...and do a short summary about the data;
The resultant table comes from the CSV served by the URI.
The data set consists of a list of geographical
locations with GPS coordinates of each.
The data has a spread of 839 days starting 22nd Jan 2020 to 9th May 2022 - about a period of 2 years 4 months.
Question 4
Get daily cases worldwide ( hint: summarising daily death cases over all countries
End of explanation
death_cases_worldwide_ = death_cases_worldwide.head(5)

def calc_increment(x):
    # Position of this column within the frame (column 0 is 'Country/Region')
    current_col_idx = death_cases_worldwide_.columns.get_loc(x.name)
    if current_col_idx > 1:
        # Daily increase = today's cumulative deaths minus yesterday's, for every row
        yesterday = death_cases_worldwide_.iloc[:, current_col_idx - 1]
        return x - yesterday
    # The country column and the first date column are returned unchanged
    return x

daily_increase = death_cases_worldwide_.apply(calc_increment, axis=0)
print(daily_increase)
Explanation: Question 5
Get daily increase of death cases via defining a function (hint: use the death cases of today minus the death cases of yesterday from the data obtained in task 4).
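For reference, a vectorised sketch of the same computation using pandas' built-in diff (variable names here are illustrative, not from the original answer):
```python
# Daily increase per location: difference between consecutive date columns,
# keeping 'Country/Region' out of the arithmetic
date_part = death_cases_worldwide.drop('Country/Region', axis=1)
daily_increase_alt = date_part.diff(axis=1)
daily_increase_alt.iloc[:, 0] = date_part.iloc[:, 0]  # the first day has no 'yesterday'
```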
End of explanation
# Import library
import matplotlib.pyplot as plt
# Import numpy
import numpy as np
#Specify X axis to be that of Country/Region
death_cases_worldwide.plot(x='Country/Region')
#Finally Show the graph
plt.show()
Explanation: Question 6
Visualize the data obtained in task 4 with library matplotlib
End of explanation |
11,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computer programs
A program is a sequence of instructions that specifies how to perform a computation. The computation might be something mathematical, such as solving a system of equations or finding the roots of a polynomial, but it can also be a symbolic computation, such as searching and replacing text in a document or something graphical, like processing an image or playing a video. (Downey, 2015)
References
Much of this notebook, where indicated and elsewhere, is copied from the following book
Step1: Unambiguous and literal
The meaning of a computer program is unambiguous and literal, and can be understood entirely by analysis of the tokens and structure.
Formal languages are more dense than natural languages, so it takes longer to read them. Also, the structure is important, so it is not always best to read from top to bottom, left to right. Instead, learn to parse the program in your head, identifying the tokens and interpreting the structure. Finally, the details matter. Small errors in spelling and punctuation, which you can get away with in natural languages, can make a big difference in a formal language. (Downey, 2015)
Comments
Partly because programming languages have rather terse syntax (some are worse than others!), it is considered a good custom to annotate the computational "business"-portions of programs with comments. Comments are portions of the code that are not parsed by either the interpreter or the compiler, i.e., these are "left out" from the translation to machine instructions. Comments are thus exempt from the syntax-checking and meaning-parsing applied to all other code.
Here is an example of a comment in the Python language. Whenever the hash-sign ("#") is encountered, parsing of the current line is stopped and the parser (interpreter or compiler) moves to the next line.
Step2: Compiled programs/languages
This is the "traditional" way of thinking about programming
Step3: The programmer is free to choose the names of the variables she wishes to use in the code, as long as they do not violate the language's syntactic rules. Common to most languages is that the following variable names are syntactically incorrect.
Never begin the name of a variable with a digit, or pretty much any other non-letter character (there are a few exceptions, such as underscore, _)
Step4: In Python, to see what a variable contains (maybe we forgot!), we can simply write the name of the variable on a line and execute it, or use the built-in print-command
Step5: Note that a variable name is like a pointer to whatever data the assignment operation dictates. In interpreted languages, one does not need to pre-define what the variable is to contain
Step6: Manipulating variables
The main objective of variables is that they can be manipulated, i.e. change
Typical manipulations
Step7: NB Keywords cannot be used as variable names! Try, and you will get a Syntax Error.
Expressions and operators
An expression is a combination of values, variables, and operators. An expression gets evaluated when it is executed, and a value is found for it
Step8: A statement is a unit of code that has an effect, like creating a variable or displaying a value (there are three statements in the code block above). When you type a statement, the interpreter executes it, which means that it does whatever the statement says.
Operators are one of the ways we can manipulate the data referenced by our variables. The usual mathematical operators are omnipresent, but things get interesting when we start applying operations to non-numeric objects. In Python, we could e.g. multiply a string by an integer, or in Matlab divide a 2D matrix of numbers by a 1D vector to obtain the least-squares estimate of the solution to a set of linear equations! But we digress...
Data types
Integers
a.k.a. whole numbers. Recall that everything in a computer system is represented in bits, corresponding to successive powers of 2. The largest integer that can be represented in a single 64-bit memory register is $2^{63} - 1$
Step9: which is called
Step10: There is all manner of mayhem associated with character strings in programming languages. The thorniest issue is related to encoding
Step11: Lists and other collections
A list is an ordered sequence of elements of another type. In python, we can assign a list to a variable using the following syntax
Step12: The word โlistโ is usually reserved for an ordered set of elements. Some languages also have the concept of a โcollectionโ of (unordered) elements (in Python, this is known as a โsetโ).
List can be "sliced", i.e. take a some of the entries
Step13: Depending on the language, the power of lists ranges from great to humongous.
Step14: Dictionaries, hashes, etc.
The use of more advanced structures often enables efficient implementation of complex computations (such as signal-processing operations on large datasets). The types of data structures available are highly language-dependent, and using them becomes natural if and when the domain-specific problem calls for it. No need to learn something you wonโt use, however...
Vectors and arrays
When operating with real-world measurements of some physical process or phenomenon (MRI, EEG, DNA/protein microarrays, ...), each individual data point usually has a meaningful positional relationship to other points. For example, a voxel in an MR image shares a face with 6 neighbouring voxels. The data point of a single EEG channel acquired at time t is "to the left of" that acquired at time t+1. Column 563 of a microarray plate is associated with the expression level of gene AbC-123. Etc. The relationship between data points can be highly non-trivial and, most importantly, the processing tasks applied to the data both expect a particular structure and exploit it during computations.
Numerical computations on measurement data are often (in practice
Step15: Exercises
Syntax
Try running these code blocks. Some of them have syntax appropriate to the Python programming language, others are invalid; can you predict which is which?
Step16: Variables
Assign some numerical values to variables x and y. In math notation you can multiply x and y like this
Step17: Manipulating variables
set a equal to 2, b equal 5
add a and b
multiply a by b
devide b by a
Step18: When the duration of each operation counts
You've implemented an analysis for quantifying the "functional connectivity" between brain regions, based on fMRI scans. Your plan is to compute the connectivity of each voxel in the brain, to each other voxel. The image dimension is $128 \times 128 \times 50$ voxels. How long will your analysis take to run if the time required for a single voxel is just 1 second?
Step19: In situations like these, a combination of re-thinking the problem (domain knowledge) and optimising the implementation (e.g. using array containers for the data) is usually called for.
Lists
Make a list of ten subject names and print one of them.
Change the subject you print.
Print several subjects at a time, i.e. slice
Step20: Large datasets and the limits of memory
The following example is shamelessly ripped off from this source.
Let's assume you work with human genome data. Even an engineer knows that the genome consists of four "bases" | Python Code:
# example of a syntax error
Computer: please write my thesis.
# example of an error in structure
'a' + 1
Explanation: Computer programs
A program is a sequence of instructions that specifies how to perform a computation. The computation might be something mathematical, such as solving a system of equations or finding the roots of a polynomial, but it can also be a symbolic computation, such as searching and replacing text in a document or something graphical, like processing an image or playing a video. (Downey, 2015)
References
Much of this notebook, where indicated and elsewhere, is copied from the following book:
Allen Downey, Think Python: How to think like a computer scientist, 2nd edition, Green Tea Press, 2015.
Quotes from the book, which may be freely downloaded here, are used under the terms of the Creative Commons Attribution-NonCommercial 3.0 Unported License, which is available at http://creativecommons.org/licenses/by-nc/3.0/.
Programming languages
Natural languages are the languages people speak, such as English, Spanish, and French. They were not designed by people (although people try to impose some order on them); they evolved naturally.
Formal languages are languages that are designed by people for specific applications. For example, the notation that mathematicians use is a formal language that is particularly good at denoting relationships among numbers and symbols. Chemists use a formal language to represent the chemical structure of molecules. And most importantly:
Programming languages are formal languages that have been designed to express computations.
Textual "code" files
The set of formal statements that constitute a program are known as code. The programmer writes code into files that are saved using a file extension that indicates the language the code is written in; examples include:
.py for Python
.c for C
.m for Matlab
Note that they are textual, i.e., human-readable format (as opposed to machine code: a binary representation of the code).
See the file fddhs.py for an example
Syntax
The formal programming languages have associated syntax rules that come in two flavors: one pertaining to tokens and another to structure [how tokens may be combined]. Tokens are the basic elements of the language, such as words, numbers, and chemical elements. Programming languages differ greatly in both the specific form of the tokens used and the structures they may form.
One of the most common error messages a programmer encounters when beginning to use a new language is the infamous: Syntax Error! For learners of natural languages, structural errors much more common are ;)
End of explanation
2 # this is the number two
Explanation: Unambiguous and literal
The meaning of a computer program is unambiguous and literal, and can be understood entirely by analysis of the tokens and structure.
Formal languages are more dense than natural languages, so it takes longer to read them. Also, the structure is important, so it is not always best to read from top to bottom, left to right. Instead, learn to parse the program in your head, identifying the tokens and interpreting the structure. Finally, the details matter. Small errors in spelling and punctuation, which you can get away with in natural languages, can make a big difference in a formal language. (Downey, 2015)
Comments
Partly because programming languages have rather terse syntax (some are worse than others!), it is considered a good custom to annotate the computational "business"-portions of programs with comments. Comments are portions of the code that are not parsed by either the interpreter or the compiler, i.e., these are "left out" from the translation to machine instructions. Comments are thus exempt from the syntax-checking and meaning-parsing applied to all other code.
Here is an example of a comment in the Python language. Whenever the hash-sign ("#") is encountered, parsing of the current line is stopped and the parser (interpreter or compiler) moves to the next line.
End of explanation
message = "Is it time for a break yet?"
n = 42
electron_mass_MeV = 0.511
Explanation: Compiled programs/languages
This is the "traditional" way of thinking about programming: a two-stage process of execution
a compiler (e.g., gcc) passes through all code, checks it (for syntax & structure), then writes out machine code
the binary machine code is executed by the CPU as a "program"
The "compilation" stage allows, amongst other things, the compiler to optimize the execution of the CPU-level instructions with respect to more-or-less detailed knowledge of the processor and memory layout of the computer performing the computations. Note also that once a program has been compiled to be executable, it is no longer human-readable.
Examples of (important) compiled languages include:
Fortran
Java
C / C++ / C#
Interpreted languages and REPL
These are executed line-by-line, in a read-evaluate-print-loop (REPL).
bash (and other shells)
the CLI
Javascript
used to make webpages interactive and provide online programs, including video games.
Matlab
GUI by Mathworks (costly license)
easy to get going!
Numerical algorithms are actually in the public domain
Python
a 'real' programming language (Dropbox, Youtube, Netflix, Google...)
a popular 'scripting' language
the 'glue' to stitch together other tools
most Linux-distributions rely on it
massive 'standard library'
chances are the task/problem you have was already solved!
open-source, free software
IPython
an enhanced python interpreter
optimised for interactivity (REPL)
command-line interface
prettified by, e.g., Jupyter notebooks
Scripts
When working in a field with computational elements, you are likely to come across the term script. Just like in a movie or a play, the script is an explicit description of what operations are going to be run, and in which order. Another analogy is that of a cooking recipe.
Are Jupyter notebooks 'scripts'?
Yes, you could call them that. It is in fact possible to execute a notebook, which in effect means executing each cell of the notebook from top to bottom.
Reproducible science
Data science pipelines are often specified as scripts and then executed. Because the script will run in exactly the same fashion every time it is run, the pipeline is thus rendered reproducible. This is a necessary precondition for any findings you may report being reproducible. If in addition to writing your analyses as scripts, you also share them with your peers, well, then your work also becomes open.
Variables, expressions and statements
Following Chapter 2 in (Downey, 2015).
Assignment & variables
"One of the most powerful features of a programming language is the ability to manipulate variables. A variable is a name that refers to a value." (Downey, 2015)
Variables are assigned values; in most programming languages, the assignment operator is the equal-sign (=). Assignment statements are read from left-to-right, here are a couple of python-assignments:
End of explanation
2fast = 140
@home = False
no spaces = 'a space'
Explanation: The programmer is free to choose the names of the variables she wishes to use in the code, as long as they do not violate the language's syntactic rules. Common to most languages is that the following variable names are syntactically incorrect.
Never begin the name of a variable with a digit, or pretty much any other non-letter character (there are a few exceptions, such as underscore, _):
End of explanation
n
print(message)
Explanation: In Python, to see what a variable contains (maybe we forgot!), we can simply write the name of the variable on a line and execute it, or use the built-in print-command
End of explanation
a = 0
b = 1
c = a + b
print(c)
a = b
b = c
c = a + b
print(c)
a = b
b = c
c = a + b
print(c)
a = b
b = c
c = a + b
print(c)
a = b
b = c
c = a + b
print(c)
a = b
b = c
c = a + b
print(c)
Explanation: Note that a variable name is like a pointer to whatever data the assignment operation dictates. In interpreted languages, one does not need to pre-define what the variable is to contain: the interpreter figures this out on-the-fly. Similarly, the assignment to a variable can happen multiple times in the 'lifetime' of a program: each assignment simply moves the pointer to a new data object.
The following lines of code calculate and print the first few values of the Fibonacci sequence; make sure you understand what values are being assigned to the variables a, b and c.
End of explanation
two_greater_than_one = (2 > 1)
print(two_greater_than_one)
if two_greater_than_one:
print('Correct')
Explanation: Manipulating variables
The main objective of variables is that they can be manipulated, i.e. change
Typical manipulations:
addition
subtraction
multiplication
division
NB Variable names in Stata
Macros?!
Variable names:
The Rules (for python, varies from language to language)
Variables names must start with a letter or an underscore, such as:
_underscore
underscore_
The remainder of your variable name may consist of letters, numbers and underscores.
password1
n00b
un_der_scores
Names are case sensitive.
case_sensitive,
CASE_SENSITIVE,
Variable names: conventions
Readability is very important. Which of the following is easiest to read? I'm hoping you'll say the first example.
python_puppet
pythonpuppet
pythonPuppet
Descriptive names are very useful.
total_bad_puns
super_bad
Avoid using the lowercase letter 'l', uppercase 'O', and uppercase 'I'. Why? Because the l and the I look a lot like each other and the number 1. And O looks a lot like 0.
Keywords
Each programming language 'reserves' some portion of typically the English (natural) language as keywords that are associated with specific (typically low-level) computational operations.
Practically all languages use the keywords if, else and for for โcontrol flowโ of program execution. In Python, not, True, False and None are used to define logic:
End of explanation
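As a quick check of the keywords just mentioned (a minimal sketch using only the standard library), you can ask Python for its full list of reserved words; trying to assign to one of them raises a SyntaxError:
import keyword
print(keyword.kwlist)   # every reserved word in this Python version
# True = 1              # uncommenting this line raises a SyntaxError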
n = 42
m = n + 25
print(m)
Explanation: NB Keywords cannot be used as variable names! Try, and you will get a Syntax Error.
Expressions and operators
An expression is a combination of values, variables, and operators. An expression gets evaluated when it is executed, and a value is found for it:
End of explanation
pow(2, 63) - 1
Explanation: A statement is a unit of code that has an effect, like creating a variable or displaying a value (there are three statements in the code block above). When you type a statement, the interpreter executes it, which means that it does whatever the statement says.
Operators are one of the ways we can manipulate the data referenced by our variables. The usual mathematical operators are omnipresent, but things get interesting when we start applying operations to non-numeric objects. In Python, we could e.g. multiply a string by an integer, or in Matlab divide a 2D matrix of numbers by a 1D vector to obtain the least-squares estimate of the solution to a set of linear equations! But we digress...
Data types
Integers
a.k.a. whole numbers. Recall that everything in a computer system is represented in bits, corresponding to successive powers of 2. The largest integer that can be represented in a single 64-bit memory register is $2^{63} - 1$:
End of explanation
a_character = '*' # a single character
one_string = 'I am a ' # several characters in a row = a string
another_string = "Snowflake" # single or double quotes work in Python
print(one_string + another_string)
print(3 * another_string + a_character)
Explanation: which is called: "nine quintillion two hundred twenty-three quadrillion three hundred seventy-two trillion thirty-six billion eight hundred fifty-four million seven hundred seventy-five thousand eight hundred seven" (in case you were wondering).
In practice, computers can perform arithmetic on arbitrarily large integers, as long as the language implementing the computation suitably keeps track of what's going on in memory.
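A one-line illustration of this (Python's built-in integers simply grow as needed, well beyond 64 bits):
print(2 ** 200)   # a 61-digit integer, computed exactly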
Floating point numbers
a.k.a. real numbers, or just "floats". What is the powers-of-two representation of the real number 0.1? What about the value of $\pi$ (an irrational number)? There are none! If you're interested in precisely how floating point numbers are represented in binary form, check out this wiki page on the topic, but be warned: it's hairy!
The most relevant parameter for floats is not its size, but its precision.
<img src="imgs/float_precision.jpg" width=400>
Image source.
You can think of floating point precision as the number of digits after the decimal comma. For example:
$\frac{2}{3} \sim 0.6667$ is a more precise representation than $\frac{2}{3} \sim 0.7$
Like integers, real numbers need to be stored using some number of bits (and a scheme for representing it, as discussed above). You are likely to come across the terms 'float' and 'single': these refer to real numbers represented using 32-bit resolution. A 'double' refers to a 64-bit representation of a number. It is beyond the scope of this course to go into the details of under which conditions one representation/precision is more appropriate than the other. Suffice it to say: it's better to err on the side of caution and use doubles. This is indeed what all mainstream interpreted languages do.
We'll return to the issue of floats vs. doubles below in the context of the limits of memory (not yours, the computer's).
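A minimal sketch of what limited precision looks like in practice (NumPy is used here only to get explicit 32-bit and 64-bit types):
import numpy as np
print(0.1 + 0.2)         # 0.30000000000000004, since 0.1 and 0.2 have no exact binary form
print(np.float32(2/3))   # roughly 7 significant digits
print(np.float64(2/3))   # roughly 16 significant digits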
Characters and strings
A string is an ordered sequence of (one or more) characters. In python, strings are particularly nimble (don't try this in other languages!):
End of explanation
print("The binary representation of the letter 'G' is:", bin(ord('G'))[2:])
print("The ASCII character representation of the binary code 1001000 is:", chr(int('1001000', 2)))
Explanation: There is all manner of mayhem associated with character strings in programming languages. The thorniest issue is related to encoding: just like floats need to be represented as a sequence of bits, so do characters.
The most simple character encoding scheme is ASCII: a translation table between a number between 0 and 127 (or up to 255, $=2^8-1$, in its 8-bit extensions) and the corresponding character.
End of explanation
one_to_five = [1, 2, 3, 4, 5]
Explanation: Lists and other collections
A list is an ordered sequence of elements of another type. In python, we can assign a list to a variable using the following syntax
End of explanation
kitchen_utilities = ["pot", "pan", "knives", "forks"] # Notice the hard brackets
print(kitchen_utilities[0]) # 0 for the first entry, use 1 or 2 for the other entries.
print(kitchen_utilities[1:3]) # 0 for the first entry, use 1 or 2 for the other entries.
Explanation: The word 'list' is usually reserved for an ordered set of elements. Some languages also have the concept of a 'collection' of (unordered) elements (in Python, this is known as a 'set').
List can be "sliced", i.e. take a some of the entries
End of explanation
# lists can contain any objects, including other lists...
a_crazy_list_in_python = ['one', 2, 3.0, one_to_five]
print(a_crazy_list_in_python)
Explanation: Depending on the language, the power of lists ranges from great to humongous.
End of explanation
import numpy as np
data = np.random.rand(10,10)
print(data.shape)
print(data)
print(data[4, 2])
Explanation: Dictionaries, hashes, etc.
The use of more advanced structures often enables efficient implementation of complex computations (such as signal-processing operations on large datasets). The types of data structures available are highly language-dependent, and using them becomes natural if and when the domain-specific problem calls for it. No need to learn something you won't use, however...
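As a minimal sketch of one such structure, a Python dictionary maps keys to values (the names used here are made up for illustration):
subject_ages = {'sub-01': 34, 'sub-02': 28, 'sub-03': 41}   # keys map to values
print(subject_ages['sub-02'])    # look up a value by its key
subject_ages['sub-04'] = 25      # add a new key/value pair
print(len(subject_ages), 'subjects')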
Vectors and arrays
When operating with real-world measurements of some physical process or phenomenon (MRI, EEG, DNA/protein microarrays, ...), each individual data point usually has a meaningful positional relationship to other points. For example, a voxel in an MR image shares a face with 6 neighbouring voxels. The data point of a single EEG channel acquired at time t is "to the left of" that acquired at time t+1. Column 563 of a microarray plate is associated with the expression level of gene AbC-123. Etc. The relationship between data points can be highly non-trivial and, most importantly, the processing tasks applied to the data both expect a particular structure and exploit it during computations.
Numerical computations on measurement data are often (in practice: always) performed on optimised data types known as arrays. Arrays are N-dimensional cartesian "blocks" of data, sampled along some dimension (space, time, frequency, gene/protein ID, subject ID, ...).
<img src="imgs/numpy_array_dims.png" width=200>
Vector is just a special name given for a 1-dimensional array. Common to all arrays, and the programming languages that implement them (Matlab, the NumPy-module in Python, ...), is that the data they contain are laid out efficiently in memory: data belonging to an array are stored in a cluster of memory, instead of being randomly spread out over all possible addresses. The (huge) advantage of this approach is that whenever the data is to be operated on, the CPU only needs to know where (in memory) the array starts, and what it's dimensions are; there is no time wasted on locating each element of the array before operating on them.
End of explanation
3 + 2
Add together the numbers 3 and 2
sum(3, 2) # this is tricky, we'll return to it later
3 + -2
3 + 2 .
3 +* 2
3 ** 2
Explanation: Exercises
Syntax
Try running these code blocks. Some of them have syntax appropriate to the Python programming language, others are invalid; can you predict which is which?
End of explanation
cp=60
b=229
dc=0.4
sh=49
ad=3
print(cp*b*(1-dc) + sh + (cp-1)*ad)
Explanation: Variables
Assign some numerical values to variables x and y. In math notation you can multiply x and y like this: xy. What happens if you try that in Python? Why?
Suppose the cover price of a book is 229 kr, but bookstores get a 40% discount. Shipping costs 49 kr for the first copy and 3 kr for each additional copy. What is the total wholesale cost for 60 copies? Write a sequence of statements, using variable assignments and expressions/operators, and print the answer.
End of explanation
# code goes here
Explanation: Manipulating variables
set a equal to 2, b equal 5
add a and b
multiply a by b
divide b by a
End of explanation
nof_voxels = 128*128*50
time_per_voxel = 1 # seconds
seconds_per_day = 60 * 60 * 24
time_in_days = # write expression here!
print('Come back in', time_in_days, 'days')
Explanation: When the duration of each operation counts
You've implemented an analysis for quantifying the "functional connectivity" between brain regions, based on fMRI scans. Your plan is to compute the connectivity of each voxel in the brain, to each other voxel. The image dimension is $128 \times 128 \times 50$ voxels. How long will your analysis take to run if the time required for a single voxel is just 1 second?
End of explanation
# code goes here
Explanation: In situations like these, a combination of re-thinking the problem (domain knowledge) and optimising the implementation (e.g. using array containers for the data) is usually called for.
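As a rough illustration of the second point (a sketch, not a careful benchmark), operating on a whole NumPy array in one call is typically far faster than looping over its elements in plain Python:
import numpy as np
data = np.random.rand(128, 128, 50)
# element-by-element Python loop (slow)
total = 0.0
for value in data.flat:
    total += value
# a single vectorised call on the whole array (fast)
total_fast = data.sum()
print(total, total_fast)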
Lists
Make a list of ten subject names and print one of them.
Change the subject you print.
Print several subjects at a time, i.e. slice
End of explanation
nof_bases = 3000000000 # better: 3e9
bits_per_base = 2
megabytes_in_bits = 1024 * 1024 * 8
genome_size_MB =
print('Raw genome size:', genome_size_MB, 'MB')
Explanation: Large datasets and the limits of memory
The following example is shamelessly ripped off from this source.
Let's assume you work with human genome data. Even an engineer knows that the genome consists of four "bases": A, C, T and G. Four is a great number: 1-4 can be represented (encoded) using just two bits, for example: A=00, C=01, T=10 and G=11. Assuming the genome consists of a sequence of 3 billion letters, how many million bytes do we need to represent them all?
End of explanation |
11,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fit the poly-spline-tccd acquisition probability model in 2018-04
Fit values here were computed 2018-Apr-03. This is a candidate for the FLIGHT model.
This notebook fits the flight acquisition data using the poly-spline-tccd model.
This uses starting fit values from the accompanying fit_acq_prob_model-2018-04-poly-tccd.ipynb notebook.
This model is a 15-parameter fit for acquisition probability as a function of magnitude and CCD temperature.
See the final cells for best-fit values.
Step1: Get acq stats data and clean
Step4: Model definition
Step5: Plotting and validation
Step6: Color != 1.5 fit (this is MOST acq stars)
Step7: Focus on 10.3 to 10.6 mag bin for recent times
Step8: Focus on 10.0 to 10.3 mag bin for recent times
Step9: Color == 1.5 fit
Step10: Histogram of warm pixel fraction
Step11: Compare with flight model circa Mar-2018
Note
Step12: Final fit values for chandra_aca.star_probs | Python Code:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table
from astropy.time import Time
import tables
from scipy import stats
import tables3_api
from scipy.interpolate import CubicSpline
from Chandra.Time import DateTime
%matplotlib inline
Explanation: Fit the poly-spline-tccd acquisition probability model in 2018-04
Fit values here were computed 2018-Apr-03. This is a candidate for the FLIGHT model.
This notebook fits the flight acquisition data using the poly-spline-tccd model.
This uses starting fit values from the accompanying fit_acq_prob_model-2018-04-poly-tccd.ipynb notebook.
This model is a 15-parameter fit for acquisition probability as a function of magnitude and CCD temperature.
See the final cells for best-fit values.
End of explanation
with tables.open_file('/proj/sot/ska/data/acq_stats/acq_stats.h5', 'r') as h5:
cols = h5.root.data.cols
names = {'tstart': 'guide_tstart',
'obsid': 'obsid',
'obc_id': 'acqid',
'halfwidth': 'halfw',
'warm_pix': 'n100_warm_frac',
'mag_aca': 'mag_aca',
'mag_obs': 'mean_trak_mag',
'known_bad': 'known_bad',
'color': 'color1',
'img_func': 'img_func',
'ion_rad': 'ion_rad',
'sat_pix': 'sat_pix',
'agasc_id': 'agasc_id',
't_ccd': 'ccd_temp',
'slot': 'slot'}
acqs = Table([getattr(cols, h5_name)[:] for h5_name in names.values()],
names=list(names.keys()))
year_q0 = 1999.0 + 31. / 365.25 # Jan 31 approximately
acqs['year'] = Time(acqs['tstart'], format='cxcsec').decimalyear.astype('f4')
acqs['quarter'] = (np.trunc((acqs['year'] - year_q0) * 4)).astype('f4')
acqs['color_1p5'] = np.where(acqs['color'] == 1.5, 1, 0)
# Create 'fail' column, rewriting history as if the OBC always
# ignored the MS flag in ID'ing acq stars.
#
# UPDATE: is ion_rad being ignored on-board?
#
obc_id = acqs['obc_id']
obc_id_no_ms = (acqs['img_func'] == 'star') & ~acqs['sat_pix'] & ~acqs['ion_rad']
acqs['fail'] = np.where(obc_id | obc_id_no_ms, 0.0, 1.0)
acqs['fail_mask'] = acqs['fail'].astype(bool)
# Allow for defining a 'mag' column that is the observed mag
# if available else the catalog mag. This is a tempting thing
# to do for calibration, but it is incorrect because we want
# probabilities based on the information we have (catalog mag)
# not the information we wish we had (actual mag). Thus the
# acq prob model folds in catalog mag uncertainty. This is
# especially apparent for color=1.5.
USE_OBSERVED_MAG = False
if USE_OBSERVED_MAG:
acqs['mag'] = np.where(acqs['fail_mask'], acqs['mag_aca'], acqs['mag_obs'])
else:
acqs['mag'] = acqs['mag_aca']
# Filter for year and mag (previously used data through 2007:001)
#
# UPDATE this to be between 4 to 5 years from time of recalibration.
#
# The mag range is restricted to 8.5 < mag < 10.7 because the model
# is only calibrated in that range. Above 10.7 there is concern that
# stats are actually unreliable (the fraction of imposters that happen to
# be acquired is high?). This upper limit is something to play with.
#
ok = (acqs['year'] > 2014.0) & (acqs['mag'] > 8.5) & (acqs['mag'] < 10.7)
# Filter known bad obsids
print('Filtering known bad obsids, start len = {}'.format(np.count_nonzero(ok)))
bad_obsids = [
# Venus
2411,2414,6395,7306,7307,7308,7309,7311,7312,7313,7314,7315,7317,7318,7406,583,
7310,9741,9742,9743,9744,9745,9746,9747,9749,9752,9753,9748,7316,15292,16499,
16500,16501,16503,16504,16505,16506,16502,
]
for badid in bad_obsids:
ok = ok & (acqs['obsid'] != badid)
print('Filtering known bad obsids, end len = {}'.format(np.count_nonzero(ok)))
data_all = acqs[ok]
del data_all['img_func']
data_all.sort('year')
# Adjust probability (in probit space) for box size. See:
# https://github.com/sot/skanb/blob/master/pea-test-set/fit_box_size_acq_prob.ipynb
b1 = 0.96
b2 = -0.30
box0 = (data_all['halfwidth'] - 120) / 120 # normalized version of box, equal to 0.0 at nominal default
data_all['box_delta'] = b1 * box0 + b2 * box0**2
data_all = data_all.group_by('quarter')
data_mean = data_all.groups.aggregate(np.mean)
Explanation: Get acq stats data and clean
End of explanation
spline_mags = np.array([8.5, 9.25, 10.0, 10.4, 10.7])
def p_fail(pars, mag,
tc, tc2=None,
box_delta=0):
Acquisition probability model
:param pars: 15 parameters (5 spline values each for p0, p1 and p2)
:param mag: star magnitude (ACA mag)
:param tc: CCD temperature relative to -12 C (tc2 = tc**2 may be supplied to avoid recomputing)
:param box_delta: probit-space adjustment for the search box half width
p_bright_fail = 0.03 # For now
p0s, p1s, p2s = pars[0:5], pars[5:10], pars[10:15]
if tc2 is None:
tc2 = tc ** 2
# Make sure box_delta has right dimensions
tc, box_delta = np.broadcast_arrays(tc, box_delta)
p0 = CubicSpline(spline_mags, p0s, bc_type=((1, 0.0), (2, 0.0)))(mag)
p1 = CubicSpline(spline_mags, p1s, bc_type=((1, 0.0), (2, 0.0)))(mag)
p2 = CubicSpline(spline_mags, p2s, bc_type=((1, 0.0), (2, 0.0)))(mag)
probit_p_fail = p0 + p1 * tc + p2 * tc2 + box_delta
p_fail = stats.norm.cdf(probit_p_fail) # transform from probit to linear probability
return p_fail
def p_acq_fail(data=None):
Sherpa fit function wrapper to ensure proper use of data in fitting.
if data is None:
data = data_all
tc = data['t_ccd'] - (-12)
tc2 = tc ** 2
box_delta = data['box_delta']
mag = data['mag']
def sherpa_func(pars, x=None):
return p_fail(pars, mag, tc, tc2, box_delta)
return sherpa_func
def fit_poly_spline_model(data_mask=None):
from sherpa import ui
data = data_all if data_mask is None else data_all[data_mask]
comp_names = [f'p{i}{j}' for i in range(3) for j in range(5)]
# Approx starting values based on plot of p0, p1, p2
# spline_mags = np.array([8.5, 9.25, 10.0, 10.4, 10.7])
spline_p = {}
spline_p[0] = np.array([-2.6, -2.0, -0.7, 0.3, 1.1])
spline_p[1] = np.array([0.0, 0.0, 0.4, 0.7, 0.5])
spline_p[2] = np.array([0.0, 0.0, 0.1, 0.1, 0.1])
data_id = 1
ui.set_method('simplex')
ui.set_stat('cash')
ui.load_user_model(p_acq_fail(data), 'model')
ui.add_user_pars('model', comp_names)
ui.set_model(data_id, 'model')
ui.load_arrays(data_id, np.array(data['year']), np.array(data['fail'], dtype=np.float))
# Initial fit values from fit of all data
fmod = ui.get_model_component('model')
for i in range(3):
for j in range(5):
comp_name = f'p{i}{j}'
setattr(fmod, comp_name, spline_p[i][j])
comp = getattr(fmod, comp_name)
comp.max = 10
comp.min = -4.0 if i == 0 else 0.0
ui.fit(data_id)
# conf = ui.get_confidence_results()
return ui.get_fit_results()
Explanation: Model definition
End of explanation
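As a quick sanity check of the model definition (a sketch only, using the approximate starting spline values from fit_poly_spline_model rather than fitted results), p_fail can be evaluated directly for a single star:
example_pars = np.concatenate([[-2.6, -2.0, -0.7, 0.3, 1.1],   # p0 spline values
                               [0.0, 0.0, 0.4, 0.7, 0.5],      # p1 spline values
                               [0.0, 0.0, 0.1, 0.1, 0.1]])     # p2 spline values
print(p_fail(example_pars, mag=10.3, tc=-11.5 + 12))           # t_ccd = -11.5 C, default box size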
def plot_fit_grouped(pars, group_col, group_bin, mask=None, log=False, colors='br', label=None, probit=False):
data = data_all if mask is None else data_all[mask]
data['model'] = p_acq_fail(data)(pars)
group = np.trunc(data[group_col] / group_bin)
data = data.group_by(group)
data_mean = data.groups.aggregate(np.mean)
len_groups = np.diff(data.groups.indices)
data_fail = data_mean['fail']
model_fail = np.array(data_mean['model'])
fail_sigmas = np.sqrt(data_fail * len_groups) / len_groups
# Possibly plot the data and model probabilities in probit space
if probit:
dp = stats.norm.ppf(np.clip(data_fail + fail_sigmas, 1e-6, 1-1e-6))
dm = stats.norm.ppf(np.clip(data_fail - fail_sigmas, 1e-6, 1-1e-6))
data_fail = stats.norm.ppf(data_fail)
model_fail = stats.norm.ppf(model_fail)
fail_sigmas = np.vstack([data_fail - dm, dp - data_fail])
# plt.errorbar(data_mean[group_col], data_fail, yerr=fail_sigmas,
# fmt='.' + colors[1:], label=label, markersize=8)
# plt.plot(data_mean[group_col], model_fail, '-' + colors[0])
line, = plt.plot(data_mean[group_col], model_fail, '-')
plt.errorbar(data_mean[group_col], data_fail, yerr=fail_sigmas,
fmt='.', color=line.get_color(), label=label, markersize=8)
if log:
ax = plt.gca()
ax.set_yscale('log')
def mag_filter(mag0, mag1):
ok = (data_all['mag'] > mag0) & (data_all['mag'] < mag1)
return ok
def t_ccd_filter(t_ccd0, t_ccd1):
ok = (data_all['t_ccd'] > t_ccd0) & (data_all['t_ccd'] < t_ccd1)
return ok
def wp_filter(wp0, wp1):
ok = (data_all['warm_pix'] > wp0) & (data_all['warm_pix'] < wp1)
return ok
def plot_fit_all(parvals, mask=None, probit=False):
if mask is None:
mask = np.ones(len(data_all), dtype=bool)
mt = mag_filter(8.5, 10.7) & mask
plt.figure(figsize=(12, 4))
for probit in True, False:
plt.subplot(1, 2, int(probit) + 1)
for v0, v1, colors in ((-12, -11, 'gk'),
(-13, -12, 'cm'),
(-14, -13, 'br'),
(-15, -14, 'gk')):
plot_fit_grouped(parvals, 'mag', 0.25, t_ccd_filter(v0, v1) & mt,
colors=colors, label=f'{v0} < t_ccd < {v1}', probit=probit)
plt.legend(loc='upper left')
plt.ylim(-3, 3) if probit else plt.ylim(-0.1, 1.1)
plt.ylabel('p_fail')
plt.xlabel('year')
plt.tight_layout()
plt.grid()
mt = t_ccd_filter(-16, -10) & mask
plt.figure(figsize=(12, 4))
for probit in True, False:
plt.subplot(1, 2, int(probit) + 1)
for v0, v1, colors in ((10.3, 10.7, 'gk'),
(10, 10.3, 'cm'),
(9.5, 10, 'br'),
(9, 9.5, 'gk')):
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(v0, v1) & mt,
colors=colors, label=f'{v0} < mag < {v1}', probit=probit)
plt.legend(loc='upper left')
plt.ylim(-3, 3) if probit else plt.ylim(-0.1, 1.1)
plt.ylabel('p_fail')
plt.xlabel('mag_aca')
plt.tight_layout()
plt.grid()
mt = t_ccd_filter(-16, -10) & mask
plt.figure(figsize=(12, 4))
for probit in True, False:
plt.subplot(1, 2, int(probit) + 1)
for v0, v1, colors in ((10.3, 10.7, 'gk'),
(10, 10.3, 'cm'),
(9.5, 10, 'br'),
(9, 9.5, 'gk')):
plot_fit_grouped(parvals, 't_ccd', 0.5, mag_filter(v0, v1) & mt,
colors=colors, label=f'{v0} < mag < {v1}', probit=probit)
plt.legend(loc='upper left')
plt.ylim(-3, 3) if probit else plt.ylim(-0.1, 1.1)
plt.xlabel('t_ccd')
plt.ylabel('p_fail')
plt.tight_layout()
plt.grid()
def plot_splines(pars):
mag = np.arange(8.5, 11.01, 0.1)
n = len(spline_mags)
p0 = CubicSpline(spline_mags, pars[0:n], bc_type=((1, 0.0), (2, 0.0)))(mag)
p1 = CubicSpline(spline_mags, pars[n:2*n], bc_type=((1, 0.0), (2, 0.0)))(mag)
p2 = CubicSpline(spline_mags, pars[2*n:3*n], bc_type=((1, 0.0), (2, 0.0)))(mag)
plt.plot(mag, p0, label='p0')
plt.plot(mag, p1, label='p1')
plt.plot(mag, p2, label='p2')
plt.grid()
plt.legend();
def print_model_coeffs(parvals):
n = len(spline_mags)
print(f'spline_mags = np.array({spline_mags.tolist()})')
ln = 'spline_vals = np.array(['
print(ln, end='')
print(', '.join(f'{val:.4f}' for val in parvals[0:n]) + ',')
print(' ' * len(ln) + ', '.join(f'{val:.4f}' for val in parvals[n:2*n]) + ',')
print(' ' * len(ln) + ', '.join(f'{val:.4f}' for val in parvals[2*n:3*n]) + '])')
Explanation: Plotting and validation
End of explanation
mask_no_1p5 = data_all['color'] != 1.5
fit_no_1p5 = fit_poly_spline_model(mask_no_1p5)
plot_splines(fit_no_1p5.parvals)
plot_fit_all(fit_no_1p5.parvals, mask_no_1p5)
Explanation: Color != 1.5 fit (this is MOST acq stars)
End of explanation
plot_fit_grouped(fit_no_1p5.parvals, 'year', 0.10, mag_filter(10.3, 10.6) & mask_no_1p5,
colors='gk', label='10.3 < mag < 10.6')
plt.xlim(2016.0, None)
y0, y1 = plt.ylim()
x = DateTime('2017-10-01T00:00:00').frac_year
plt.plot([x, x], [y0, y1], '--r', alpha=0.5)
plt.grid();
plot_fit_grouped(fit_no_1p5.parvals, 'year', 0.10, mag_filter(10.5, 10.7) & mask_no_1p5,
colors='gk', label='10.5 < mag < 10.7')
plt.xlim(2016.0, None)
y0, y1 = plt.ylim()
x = DateTime('2017-10-01T00:00:00').frac_year
plt.plot([x, x], [y0, y1], '--r', alpha=0.5)
plt.grid();
Explanation: Focus on 10.3 to 10.6 mag bin for recent times
End of explanation
plot_fit_grouped(fit_no_1p5.parvals, 'year', 0.10, mag_filter(10.0, 10.3) & mask_no_1p5,
colors='gk', label='10.0 < mag < 10.3')
plt.xlim(2016.0, None)
y0, y1 = plt.ylim()
x = DateTime('2017-10-01T00:00:00').frac_year
plt.plot([x, x], [y0, y1], '--r', alpha=0.5)
plt.grid();
Explanation: Focus on 10.0 to 10.3 mag bin for recent times
End of explanation
print('Hang tight, this could take a few minutes')
mask_1p5 = data_all['color'] == 1.5
fit_1p5 = fit_poly_spline_model(mask_1p5)
plot_splines(fit_1p5.parvals)
plot_fit_all(fit_1p5.parvals, mask=mask_1p5)
Explanation: Color == 1.5 fit
End of explanation
plt.plot(data_all['year'], data_all['t_ccd'])
plt.title('ACA CCD temperature trend')
plt.grid();
plt.hist(data_all['t_ccd'], bins=24)
plt.grid()
plt.xlabel('ACA CCD temperature');
plt.hist(data_all['mag'], bins=np.arange(8.0, 11.1, 0.1))
plt.grid()
plt.xlabel('Mag_aca');
ok = ~data_all['fail'].astype(bool)
dok = data_all[ok]
plt.plot(dok['mag_aca'], dok['mag_obs'] - dok['mag_aca'], '.')
plt.plot(dok['mag_aca'], dok['mag_obs'] - dok['mag_aca'], ',', alpha=0.3)
plt.grid()
da = data_all
ok = ~da['fail'].astype(bool)
fuzz = np.random.uniform(-0.3, 0.3, np.count_nonzero(ok))
plt.plot(da['t_ccd'][ok] + fuzz, da['mag_aca'][ok], '.C0', markersize=4)
ok = da['fail'].astype(bool)
fuzz = np.random.uniform(-0.3, 0.3, np.count_nonzero(ok))
plt.plot(da['t_ccd'][ok] + fuzz, da['mag_aca'][ok], '.C1')
plt.xlabel('T_ccd (C)')
plt.ylabel('Mag_aca')
plt.grid()
Explanation: Histogram of warm pixel fraction
End of explanation
from chandra_aca.star_probs import acq_success_prob
t_ccds = np.linspace(-18, -10, 20)
mag_acas = [8.5, 9.5, 10.0, 10.3, 10.6]
for ii, mag_aca in enumerate(reversed(mag_acas)):
flight_probs = 1 - acq_success_prob(date='2018-05-01T00:00:00', t_ccd=t_ccds, mag=mag_aca)
new_probs = p_fail(fit_no_1p5.parvals, mag_aca, t_ccds + 12)
plt.plot(t_ccds, flight_probs, '--', label=f'mag_aca={mag_aca}', color=f'C{ii}')
plt.plot(t_ccds, new_probs, '-', color=f'C{ii}')
plt.xlim(-16, None)
plt.legend()
plt.xlabel('T_ccd')
plt.ylabel('Probability')
plt.grid()
Explanation: Compare with flight model circa Mar-2018
Note: this code will likely give different results when new model is promoted to flight. Will need to specify SOTA model if re-run later.
End of explanation
def print_parvals(parvals, label):
print('{:s} = np.array([{:s}, # P0 values'
.format(label, ', '.join(str(round(x, 5)) for x in parvals[0:5])))
print(' ' * len(label) + ' {:s}, # P1 values'
.format(', '.join(str(round(x, 5)) for x in parvals[5:10])))
print(' ' * len(label) + ' {:s}]) # P2 values'
.format(', '.join(str(round(x, 5)) for x in parvals[10:15])))
print_parvals(fit_no_1p5.parvals, 'fit_no_1p5')
print_parvals(fit_1p5.parvals, 'fit_1p5')
print('spline_mags = np.array([{:s}])'.format(', '.join(str(x) for x in spline_mags)))
Explanation: Final fit values for chandra_aca.star_probs
End of explanation |
11,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Align paralogous contigs to reference
If you applied the --keep-paralogs flag in the SECAPR find_target_contigs function, the function will print a text file with paralogous information into the subfolder of each sample. This file contains the information about the locus id (first column) in the reference file, followed by a list of contig headers that were matched to this locus
Step1: We can use the SECAPR paralogs_to_ref function to extract the sequences of all these contigs and align them with their respective reference sequence. This will give an idea of where the contigs map on the reference and can help decide if these are truly potential paralogous or more likely the result of non-optimal contig assembly (clusters of homologous reads are broken into separate contigs) or if contigs from other parts of the genome map to the reference due to the presence of e.g. repetitive regions or other common sequence patterns.
We need to provide the function the following input items | Python Code:
%%bash
head -n 10 ../../data/processed/target_contigs_paralogs/1061/info_paralogous_loci.txt
Explanation: Align paralogous contigs to reference
If you applied the --keep-paralogs flag in the SECAPR find_target_contigs function, the function will print a text file with paralogous information into the subfolder of each sample. This file contains the information about the locus id (first column) in the reference file, followed by a list of contig headers that were matched to this locus:
End of explanation
from IPython.display import Image, display
img1 = Image("../../images/paralog_contig_alignment.png",width=1000)
display(img1)
Explanation: We can use the SECAPR paralogs_to_ref function to extract the sequences of all these contigs and align them with their respective reference sequence. This will give an idea of where the contigs map on the reference and can help decide if these are truly potential paralogs or more likely the result of non-optimal contig assembly (clusters of homologous reads are broken into separate contigs) or if contigs from other parts of the genome map to the reference due to the presence of e.g. repetitive regions or other common sequence patterns.
We need to provide the function the following input items:
- path to the de novo contig files
- path to the reference fasta file, which was used to extract target contigs
- path to the extracted target contigs (output of find_target_contigs function)
The command looks as follows:
secapr paralogs_to_ref --contigs ../../data/processed/contigs --reference ../../data/raw/palm_reference_sequences.fasta --target_contigs ../../data/processed/target_contigs_paralogs_info --output ../../data/processed/paralogs_to_reference
Depending on how many paralogous loci were identified in your samples, this can take several minutes. The script will store the final alignments of the contigs and the reference sequences for each sample in the paralog_alignments folder in the provided output path. Let's look at one exemplary alignment. You can view alignments using alignment viewers such as e.g. AliView:
End of explanation |
11,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
U.S. Business Cycle Data
This notebook downloads, manages, and exports several data series for studying business cycles in the US. Four files are created in the csv directory
Step1: Download and manage data
Download the following series from FRED
Step2: Compute capital stock for US using the perpetual inventory method
Next, compute the quarterly capital stock series for the US using the perpetual inventory method. The discrete-time Solow growth model is given by
Step3: Compute total factor productivity
Use the Cobb-Douglas production function
Step4: Additional data management
Now that we have used the aggregate production data to compute an implied capital stock and TFP, we can scale the production data and M2 by the population.
Step5: Plot aggregate data
Step6: Compute HP filter of data
Step7: Plot aggregate data with trends
Step8: Plot cyclical components of the data
Step9: Create data files | Python Code:
import pandas as pd
import numpy as np
import fredpy as fp
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
# Export path: Set to empty string '' if you want to export data to current directory
export_path = '../Csv/'
# Load FRED API key
fp.api_key = fp.load_api_key('fred_api_key.txt')
Explanation: U.S. Business Cycle Data
This notebook downloads, manages, and exports several data series for studying business cycles in the US. Four files are created in the csv directory:
File name | Description |
---------------------------------------------|------------------------------------------------------|
rbc_data_actual_trend.csv | RBC data with actual and trend values |
rbc_data_actual_trend_cycle.csv | RBC data with actual, trend, and cycle values |
business_cycle_data_actual_trend.csv | Larger data set with actual and trend values |
business_cycle_data_actual_trend_cycle.csv | Larger data set with actual, trend, and cycle values |
The first two files are useful for studying basic RBC models. The second two contain all of the RBC data plus money, inflation, and inflation data.
End of explanation
# Download data
gdp = fp.series('GDP')
consumption = fp.series('PCEC')
investment = fp.series('GPDI')
government = fp.series('GCE')
exports = fp.series('EXPGS')
imports = fp.series('IMPGS')
net_exports = fp.series('NETEXP')
hours = fp.series('HOANBS')
deflator = fp.series('GDPDEF')
pce_deflator = fp.series('PCECTPI')
cpi = fp.series('CPIAUCSL')
m2 = fp.series('M2SL')
tbill_3mo = fp.series('TB3MS')
unemployment = fp.series('UNRATE')
# Base year for CPI
cpi_base_year = cpi.units.split(' ')[1].split('=')[0]
# Base year for NIPA deflators
nipa_base_year = deflator.units.split(' ')[1].split('=')[0]
# Convert monthly M2, 3-mo T-Bill, and unemployment to quarterly
m2 = m2.as_frequency('Q')
tbill_3mo = tbill_3mo.as_frequency('Q')
unemployment = unemployment.as_frequency('Q')
cpi = cpi.as_frequency('Q')
# Deflate GDP, consumption, investment, government expenditures, net exports, and m2 with the GDP deflator
def deflate(series,deflator):
deflator, series = fp.window_equalize([deflator, series])
series = series.divide(deflator).times(100)
return series
gdp = deflate(gdp,deflator)
consumption = deflate(consumption,deflator)
investment = deflate(investment,deflator)
government = deflate(government,deflator)
net_exports = deflate(net_exports,deflator)
exports = deflate(exports,deflator)
imports = deflate(imports,deflator)
m2 = deflate(m2,deflator)
# pce inflation as percent change over past year
pce_deflator = pce_deflator.apc()
# cpi inflation as percent change over past year
cpi = cpi.apc()
# GDP deflator inflation as percent change over past year
deflator = deflator.apc()
# Convert unemployment, 3-mo T-Bill, pce inflation, cpi inflation, GDP deflator inflation data to rates
unemployment = unemployment.divide(100)
tbill_3mo = tbill_3mo.divide(100)
pce_deflator = pce_deflator.divide(100)
cpi = cpi.divide(100)
deflator = deflator.divide(100)
# Make sure that the RBC data has the same data range
gdp,consumption,investment,government,exports,imports,net_exports,hours = fp.window_equalize([gdp,consumption,investment,government,exports,imports,net_exports,hours])
# T-Bill data doesn't neet to go all the way back to 1930s
tbill_3mo = tbill_3mo.window([gdp.data.index[0],'2222'])
metadata = pd.Series(dtype=str,name='Values')
metadata['nipa_base_year'] = nipa_base_year
metadata['cpi_base_year'] = cpi_base_year
metadata.to_csv(export_path+'/business_cycle_metadata.csv')
Explanation: Download and manage data
Download the following series from FRED:
FRED series ID | Name | Frequency |
---------------|------|-----------|
GDP | Gross Domestic Product | Q |
PCEC | Personal Consumption Expenditures | Q |
GPDI | Gross Private Domestic Investment | Q |
GCE | Government Consumption Expenditures and Gross Investment | Q |
EXPGS | Exports of Goods and Services | Q |
IMPGS | Imports of Goods and Services | Q |
NETEXP | Net Exports of Goods and Services | Q |
HOANBS | Nonfarm Business Sector: Hours Worked for All Employed Persons | Q |
GDPDEF | Gross Domestic Product: Implicit Price Deflator | Q |
PCECTPI | Personal Consumption Expenditures: Chain-type Price Index | Q |
CPIAUCSL | Consumer Price Index for All Urban Consumers: All Items in U.S. City Average | M |
M2SL | M2 | M |
TB3MS | 3-Month Treasury Bill Secondary Market Rate | M |
UNRATE | Unemployment Rate | M |
Monthly series (M2, T-Bill, unemployment rate) are converted to quarterly frequencies. CPI and PCE inflation rates are computed as the percent change in the indices over the previous year. GDP, consumption, investment, government expenditures, net exports and M2 are deflated by the GDP deflator. The data ranges for nataional accounts series (GDP, consumption, investment, government expenditures, net exports) and hours are equalized to the largest common date range.
End of explanation
# Set the capital share of income
alpha = 0.35
# Average saving rate
s = np.mean(investment.data/gdp.data)
# Average quarterly labor hours growth rate
n = (hours.data[-1]/hours.data[0])**(1/(len(hours.data)-1)) - 1
# Average quarterly real GDP growth rate
g = ((gdp.data[-1]/gdp.data[0])**(1/(len(gdp.data)-1)) - 1) - n
# Compute annual depreciation rate
depA = fp.series('M1TTOTL1ES000')
gdpA = fp.series('gdpa')
gdpA = gdpA.window([gdp.data.index[0],gdp.data.index[-1]])
gdpA,depA = fp.window_equalize([gdpA,depA])
deltaKY = np.mean(depA.data/gdpA.data)
delta = (n+g)*deltaKY/(s-deltaKY)
# print calibrated values:
print('Avg saving rate: ',round(s,5))
print('Avg annual labor growth:',round(4*n,5))
print('Avg annual gdp growth: ',round(4*g,5))
print('Avg annual dep rate: ',round(4*delta,5))
# Construct the capital series. Note that the GDP and investment data are reported on an annualized basis
# so divide by 4 to get quarterly data.
capital = np.zeros(len(gdp.data))
capital[0] = gdp.data[0]/4*s/(n+g+delta)
for t in range(len(gdp.data)-1):
capital[t+1] = investment.data[t]/4 + (1-delta)*capital[t]
# Save in a fredpy series
capital = fp.to_fred_series(data = capital,dates =gdp.data.index,units = gdp.units,title='Capital stock of the US',frequency='Quarterly')
Explanation: Compute capital stock for US using the perpetual inventory method
Next, compute the quarterly capital stock series for the US using the perpetual inventory method. The discrete-time Solow growth model is given by:
\begin{align}
Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{1}\
C_t & = (1-s)Y_t \tag{2}\
Y_t & = C_t + I_t \tag{3}\
K_{t+1} & = I_t + (1-\delta)K_t \tag{4}\
A_{t+1} & = (1+g)A_t \tag{5}\
L_{t+1} & = (1+n)L_t \tag{6}.
\end{align}
Here the model is assumed to be quarterly so $n$ is the quarterly growth rate of labor hours, $g$ is the quarterly growth rate of TFP, and $\delta$ is the quarterly rate of depreciation of the capital stock. Given a value of the quarterly depreciation rate $\delta$, an investment series $I_t$, and an initial capital stock $K_0$, the law of motion for the capital stock, Equation (4), can be used to compute an implied capital series. But we don't know $K_0$ or $\delta$ so we'll have to calibrate these values using statistics computed from the data that we've already obtained.
Let lowercase letters denote a variable that's been divided by $A_t^{1/(1-\alpha)}L_t$. E.g.,
\begin{align}
y_t = \frac{Y_t}{A_t^{1/(1-\alpha)}L_t}\tag{7}
\end{align}
Then (after substituting consumption from the model), the scaled version of the model can be written as:
\begin{align}
y_t & = k_t^{\alpha} \tag{8}\
i_t & = sy_t \tag{9}\
k_{t+1} & = i_t + (1-\delta-n-g')k_t,\tag{10}
\end{align}
where $g' = g/(1-\alpha)$ is the growth rate of $A_t^{1/(1-\alpha)}$. In the steady state:
\begin{align}
k & = \left(\frac{s}{\delta+n+g'}\right)^{\frac{1}{1-\alpha}} \tag{11}
\end{align}
which means that the ratio of capital to output is constant:
\begin{align}
\frac{k}{y} & = \frac{s}{\delta+n+g'} \tag{12}
\end{align}
and therefore the steady state ratio of depreciation to output is:
\begin{align}
\overline{\delta K/ Y} & = \frac{\delta s}{\delta + n + g'} \tag{13}
\end{align}
where $\overline{\delta K/ Y}$ is the long-run average ratio of depreciation to output. We can use Equation (13) to calibrate $\delta$ given $\overline{\delta K/ Y}$, $s$, $n$, and $g'$.
Furthermore, in the steady state, the growth rate of output is constant:
\begin{align}
\frac{\Delta Y}{Y} & = n + g' \tag{14}
\end{align}
Assume $\alpha = 0.35$.
Calibrate $s$ as the average of ratio of investment to GDP.
Calibrate $n$ as the average quarterly growth rate of labor hours.
Calibrate $g'$ as the average quarterly growth rate of real GDP minus n.
Calculate the average ratio of depreciation to GDP $\overline{\delta K/ Y}$ and use the result to calibrate $\delta$. That is, find the average ratio of Current-Cost Depreciation of Fixed Assets (FRED series ID: M1TTOTL1ES000) to GDP (FRED series ID: GDPA). Then calibrate $\delta$ from the following steady state relationship:
\begin{align}
\delta & = \frac{\left( \overline{\delta K/ Y} \right)\left(n + g' \right)}{s - \left( \overline{\delta K/ Y} \right)} \tag{15}
\end{align}
Calibrate $K_0$ by asusming that the capital stock is initially equal to its steady state value:
\begin{align}
K_0 & = \left(\frac{s}{\delta + n + g'}\right) Y_0 \tag{16}
\end{align}
Then, armed with calibrated values for $K_0$ and $\delta$, compute $K_1, K_2, \ldots$ recursively. See Timothy Kehoe's notes for more information on the perpetual inventory method:
http://users.econ.umn.edu/~tkehoe/classes/GrowthAccountingNotes.pdf
End of explanation
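As a quick numerical check of Equation (15) (a sketch with illustrative values, not the calibrated ones computed above), the implied $\delta$ should reproduce the assumed depreciation-to-output ratio in steady state:
s_ex, n_ex, g_ex = 0.17, 0.002, 0.004   # illustrative quarterly values for s, n and g'
dep_to_gdp = 0.04                       # illustrative average depreciation/GDP ratio
delta_ex = dep_to_gdp * (n_ex + g_ex) / (s_ex - dep_to_gdp)
check = delta_ex * s_ex / (delta_ex + n_ex + g_ex)   # steady-state depreciation/output from Equation (13)
print(round(delta_ex, 6), round(check, 6))           # check equals dep_to_gdp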
# Compute TFP
tfp = gdp.data/capital.data**alpha/hours.data**(1-alpha)
tfp = fp.to_fred_series(data = tfp,dates =gdp.data.index,units = gdp.units,title='TFP of the US',frequency='Quarterly')
Explanation: Compute total factor productivity
Use the Cobb-Douglas production function:
\begin{align}
Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{17}
\end{align}
and data on GDP, capital, and hours with $\alpha=0.35$ to compute an implied series for $A_t$.
End of explanation
# Convert real GDP, consumption, investment, government expenditures, net exports and M2
# into thousands of dollars per civilian 16 and over
gdp = gdp.per_capita(civ_pop=True).times(1000)
consumption = consumption.per_capita(civ_pop=True).times(1000)
investment = investment.per_capita(civ_pop=True).times(1000)
government = government.per_capita(civ_pop=True).times(1000)
exports = exports.per_capita(civ_pop=True).times(1000)
imports = imports.per_capita(civ_pop=True).times(1000)
net_exports = net_exports.per_capita(civ_pop=True).times(1000)
hours = hours.per_capita(civ_pop=True).times(1000)
capital = capital.per_capita(civ_pop=True).times(1000)
m2 = m2.per_capita(civ_pop=True).times(1000)
# Scale hours per person to equal 100 on October (fourth quarter) of GDP deflator base year.
hours.data = hours.data/hours.data.loc[nipa_base_year+'-10-01']*100
Explanation: Additional data management
Now that we have used the aggregate production data to compute an implied capital stock and TFP, we can scale the production data and M2 by the population.
End of explanation
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp.data)
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][1].plot(consumption.data)
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][2].plot(investment.data)
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][3].plot(government.data)
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][0].plot(capital.data)
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][1].plot(hours.data)
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ('+nipa_base_year+'=100)')
axes[1][2].plot(tfp.data)
axes[1][2].set_title('TFP')
axes[1][3].plot(m2.data)
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[2][0].plot(tbill_3mo.data*100)
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator.data*100)
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi.data*100)
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment.data*100)
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent');
Explanation: Plot aggregate data
End of explanation
# HP filter to isolate trend and cyclical components
gdp_log_cycle,gdp_log_trend= gdp.log().hp_filter()
consumption_log_cycle,consumption_log_trend= consumption.log().hp_filter()
investment_log_cycle,investment_log_trend= investment.log().hp_filter()
government_log_cycle,government_log_trend= government.log().hp_filter()
exports_log_cycle,exports_log_trend= exports.log().hp_filter()
imports_log_cycle,imports_log_trend= imports.log().hp_filter()
# net_exports_log_cycle,net_exports_log_trend= net_exports.log().hp_filter()
capital_log_cycle,capital_log_trend= capital.log().hp_filter()
hours_log_cycle,hours_log_trend= hours.log().hp_filter()
tfp_log_cycle,tfp_log_trend= tfp.log().hp_filter()
deflator_cycle,deflator_trend= deflator.hp_filter()
pce_deflator_cycle,pce_deflator_trend= pce_deflator.hp_filter()
cpi_cycle,cpi_trend= cpi.hp_filter()
m2_log_cycle,m2_log_trend= m2.log().hp_filter()
tbill_3mo_cycle,tbill_3mo_trend= tbill_3mo.hp_filter()
unemployment_cycle,unemployment_trend= unemployment.hp_filter()
Explanation: Compute HP filter of data
End of explanation
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp.data)
axes[0][0].plot(np.exp(gdp_log_trend.data),c='r')
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][1].plot(consumption.data)
axes[0][1].plot(np.exp(consumption_log_trend.data),c='r')
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][2].plot(investment.data)
axes[0][2].plot(np.exp(investment_log_trend.data),c='r')
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][3].plot(government.data)
axes[0][3].plot(np.exp(government_log_trend.data),c='r')
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][0].plot(capital.data)
axes[1][0].plot(np.exp(capital_log_trend.data),c='r')
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][1].plot(hours.data)
axes[1][1].plot(np.exp(hours_log_trend.data),c='r')
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ('+nipa_base_year+'=100)')
axes[1][2].plot(tfp.data)
axes[1][2].plot(np.exp(tfp_log_trend.data),c='r')
axes[1][2].set_title('TFP')
axes[1][3].plot(m2.data)
axes[1][3].plot(np.exp(m2_log_trend.data),c='r')
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[2][0].plot(tbill_3mo.data*100)
axes[2][0].plot(tbill_3mo_trend.data*100,c='r')
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator.data*100)
axes[2][1].plot(pce_deflator_trend.data*100,c='r')
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi.data*100)
axes[2][2].plot(cpi_trend.data*100,c='r')
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment.data*100)
axes[2][3].plot(unemployment_trend.data*100,c='r')
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent')
ax = fig.add_subplot(1,1,1)
ax.axis('off')
ax.plot(0,0,label='Actual')
ax.plot(0,0,c='r',label='Trend')
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),ncol=2)
Explanation: Plot aggregate data with trends
End of explanation
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp_log_cycle.data)
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][1].plot(consumption_log_cycle.data)
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][2].plot(investment_log_cycle.data)
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][3].plot(government_log_cycle.data)
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][0].plot(capital_log_cycle.data)
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][1].plot(hours_log_cycle.data)
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ('+nipa_base_year+'=100)')
axes[1][2].plot(tfp_log_cycle.data)
axes[1][2].set_title('TFP')
axes[1][3].plot(m2_log_cycle.data)
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[2][0].plot(tbill_3mo_cycle.data)
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator_cycle.data)
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi_cycle.data)
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment_cycle.data)
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent');
Explanation: Plot cyclical components of the data
End of explanation
# Create a DataFrame with actual and trend data
data = pd.DataFrame({
'gdp':gdp.data,
'gdp_trend':np.exp(gdp_log_trend.data),
'gdp_cycle':gdp_log_cycle.data,
'consumption':consumption.data,
'consumption_trend':np.exp(consumption_log_trend.data),
'consumption_cycle':consumption_log_cycle.data,
'investment':investment.data,
'investment_trend':np.exp(investment_log_trend.data),
'investment_cycle':investment_log_cycle.data,
'government':government.data,
'government_trend':np.exp(government_log_trend.data),
'government_cycle':government_log_cycle.data,
'exports':exports.data,
'exports_trend':np.exp(exports_log_trend.data),
'exports_cycle':exports_log_cycle.data,
'imports':imports.data,
'imports_trend':np.exp(imports_log_trend.data),
'imports_cycle':imports_log_cycle.data,
'hours':hours.data,
'hours_trend':np.exp(hours_log_trend.data),
'hours_cycle':hours_log_cycle.data,
'capital':capital.data,
'capital_trend':np.exp(capital_log_trend.data),
'capital_cycle':capital_log_cycle.data,
'tfp':tfp.data,
'tfp_trend':np.exp(tfp_log_trend.data),
'tfp_cycle':tfp_log_cycle.data,
'real_m2':m2.data,
'real_m2_trend':np.exp(m2_log_trend.data),
'real_m2_cycle':m2_log_cycle.data,
't_bill_3mo':tbill_3mo.data,
't_bill_3mo_trend':tbill_3mo_trend.data,
't_bill_3mo_cycle':tbill_3mo_cycle.data,
'cpi_inflation':cpi.data,
'cpi_inflation_trend':cpi_trend.data,
'cpi_inflation_cycle':cpi_cycle.data,
'pce_inflation':pce_deflator.data,
'pce_inflation_trend':pce_deflator_trend.data,
'pce_inflation_cycle':pce_deflator_cycle.data,
'unemployment':unemployment.data,
'unemployment_trend':unemployment_trend.data,
'unemployment_cycle':unemployment_cycle.data,
})
# RBC data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].dropna().to_csv(export_path+'rbc_data_actual_trend.csv',index=True)
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].dropna().to_csv(export_path+'rbc_data_actual_trend_cycle.csv',index=True)
# More comprehensive Business Cycle Data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].dropna().to_csv(export_path+'business_cycle_data_actual_trend.csv',index=True)
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].dropna().to_csv(export_path+'business_cycle_data_actual_trend_cycle.csv')
Explanation: Create data files
End of explanation |
11,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conv2DTranspose
[convolutional.Conv2DTranspose.0] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=False
Step1: [convolutional.Conv2DTranspose.1] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=True
Step2: [convolutional.Conv2DTranspose.2] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='valid', data_format='channels_last', activation='relu', use_bias=True
Step3: [convolutional.Conv2DTranspose.3] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True
Step4: [convolutional.Conv2DTranspose.4] 5 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', activation='relu', use_bias=True
Step5: [convolutional.Conv2DTranspose.5] 3 2x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True
Step6: export for Keras.js tests | Python Code:
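# NOTE (assumed setup): the cells below rely on numpy, Keras and two helpers,
# `format_decimal` and the `DATA` dict, that are defined in an earlier setup cell
# not shown in this excerpt. A minimal sketch of that setup, with the helper
# behaviour assumed from how it is used below:
import numpy as np
from collections import OrderedDict
from keras.models import Model
from keras.layers import Input, Conv2DTranspose
def format_decimal(arr, places=6):
    # round a flat list of floats to a fixed number of decimal places (assumed behaviour)
    return [round(x * 10**places) / 10**places for x in arr]
DATA = OrderedDict()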
data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(4, (3,3), strides=(1,1),
padding='valid', data_format='channels_last',
activation='linear', use_bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(150)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
# print('b shape:', weights[1].shape)
# print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: Conv2DTranspose
[convolutional.Conv2DTranspose.0] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=False
End of explanation
data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(4, (3,3), strides=(1,1),
padding='valid', data_format='channels_last',
activation='linear', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(151)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2DTranspose.1] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=True
End of explanation
data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(4, (3,3), strides=(2,2),
padding='valid', data_format='channels_last',
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(152)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2DTranspose.2] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='valid', data_format='channels_last', activation='relu', use_bias=True
End of explanation
data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(4, (3,3), strides=(1,1),
padding='same', data_format='channels_last',
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(153)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2DTranspose.3] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True
End of explanation
data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(5, (3,3), strides=(2,2),
padding='same', data_format='channels_last',
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(154)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2DTranspose.4] 5 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', activation='relu', use_bias=True
End of explanation
data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(3, (2,3), strides=(1,1),
padding='same', data_format='channels_last',
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(155)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2DTranspose.5] 3 2x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True
End of explanation
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
11,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some simple ERDDAP timing tests
Time how long it takes to get data via ERDDAP.
Here we are using the IOOS NERACOOS ERDDAP service at http
Step1: Try CSV response for 1 week of accelerometer (wave) data from NERACOOS Buoy B off Portland, ME
Step2: How about 1 year of data?
Step3: Try JSON response -- might want to use this in a web app | Python Code:
%matplotlib inline
import urllib, json
import pandas as pd
Explanation: Some simple ERDDAP timing tests
Time how long it takes to get data via ERDDAP.
Here we are using the IOOS NERACOOS ERDDAP service at http://www.neracoos.org/erddap.
Lots of factors affect timing, as described at the end.
But for these tests it's pretty darn speedy!
End of explanation
url = 'http://www.neracoos.org/erddap/tabledap/B01_accelerometer_all.csv?time,significant_wave_height&time>"now-7days"'
print(url)
%%timeit
df_sb = pd.read_csv(url,index_col='time',parse_dates=True,skiprows=[1]) # skip the units row
df_sb = pd.read_csv(url,index_col='time',parse_dates=True,skiprows=[1]) # skip the units row
df_sb.plot(figsize=(12,4),grid='on');
Explanation: Try CSV response for 1 week of accelerometer (wave) data from NERACOOS Buoy B off Portland, ME
End of explanation
url = 'http://www.neracoos.org/erddap/tabledap/B01_accelerometer_all.csv?time,significant_wave_height&time>"now-365days"'
print(url)
%%timeit
df_sb = pd.read_csv(url,index_col='time',parse_dates=True,skiprows=[1]) # skip the units row
df_sb = pd.read_csv(url,index_col='time',parse_dates=True,skiprows=[1]) # skip the units row
df_sb.plot(figsize=(12,4),grid='on');
Explanation: How about 1 year of data?
End of explanation
url = 'http://www.neracoos.org/erddap/tabledap/B01_accelerometer_all.json?time,significant_wave_height&time>"now-7days"'
print(url)
%%timeit
response = urllib.urlopen(url)
data = json.loads(response.read())
response = urllib.urlopen(url)
data = json.loads(response.read())
data
Explanation: Try JSON response -- might want to use this in a web app
End of explanation |
11,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 1
Step1: The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables)
Step2: Question
Step3: Question
- How many columns of data does each table have (sorted by most to least?)
- Which table has the most columns of data?
Step4: Previewing sample rows of data values
In the BigQuery UI, find the Resources panel and search for catalog_sales. You may need to add the qwiklabs-resources project to your UI by clicking + Add Data -> Pin a project and entering qwiklabs-resources
Click on the catalog_sales table name for the tpcds_2t_baseline dataset under qwiklabs-resources
Question
- How many rows are in the table?
- How large is the table in TB?
Hint
Step5: A note on our data
Step6: Running the first benchmark test
Now let's run the first query against our dataset and note the execution time. Tip
Step7: It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically cache the results from the first time you ran the query and then serve those same results to you when you can the query again. We can confirm this by analyzing the query job statistics.
Viewing BigQuery job statistics
Let's list our five most recent query jobs run on BigQuery using the bq command line interface. Then we will get even more detail on our most recent job with the bq show command. Be sure to replace the job id with your own.
Step9: Looking at the job statistics we can see our most recent query hit cache
- cacheHit
Step11: 132 GB will be processed. At the time of writing, BigQuery pricing is \$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \$0.66 to run.
Now let's run it an ensure we're not pulling from cache so we get an accurate time-to-completion benchmark.
Step12: If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below
Step13: Use the BigQuery Data Transfer Service to copy an existing dataset
Enable the BigQuery Data Transfer Service API
Navigate to the BigQuery console and the existing qwiklabs-resources dataset
Click Copy Dataset
In the pop-up, choose your project name and the newly created dataset name from the previous step
Click Copy
Wait for the transfer to complete
Verify you now have the baseline data in your project
Run the below query and confirm you see data. Note that if you omit the project-id ahead of the dataset name in the FROM clause, BigQuery will assume your default project.
Step14: Setup an automated test
Running each of the 99 queries manually via the Console UI would be a tedious effort. We'll show you how you can run all 99 programmatically and automatically log the output (time and GB processed) to a log file for analysis.
Below is a shell script that
Step15: Viewing the benchmark results
As part of the benchmark test, we stored the processing time of each query into a new perf BigQuery table. We can query that table and get some performance stats for our test.
First are each of the tests we ran
Step16: And finally, the overall statistics for the entire test
Step17: Benchmarking all 99 queries
As we mentioned before, we already ran all 99 queries and recorded the results and made them available for you to query in a public table
Step18: And the results of the complete test | Python Code:
%%bigquery
SELECT
dataset_id,
table_id,
-- Convert bytes to GB.
ROUND(size_bytes/pow(10,9),2) as size_gb,
-- Convert UNIX EPOCH to a timestamp.
TIMESTAMP_MILLIS(creation_time) AS creation_time,
TIMESTAMP_MILLIS(last_modified_time) as last_modified_time,
row_count,
CASE
WHEN type = 1 THEN 'table'
WHEN type = 2 THEN 'view'
ELSE NULL
END AS type
FROM
`qwiklabs-resources.tpcds_2t_baseline.__TABLES__`
ORDER BY size_gb DESC
Explanation: Lab 1: Explore and Benchmark a BigQuery Dataset for Performance
Overview
In this lab you will take an existing 2TB+ TPC-DS benchmark dataset and learn the data warehouse optimization methods you can apply to the dataset in BigQuery to improve performance.
What you'll do
In this lab, you will learn how to:
Use BigQuery to access and query the TPC-DS benchmark dataset
Run pre-defined queries to establish baseline performance benchmarks
Prerequisites
This is an advanced level SQL lab. Before taking it, you should have experience with SQL. Familiarity with BigQuery is also highly recommended. If you need to get up to speed in these areas, you should take this Data Analyst series of labs first:
Quest: BigQuery for Data Analysts
Once you're ready, scroll down to learn about the services you will be using and how to properly set up your lab environment.
BigQuery
BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without managing infrastructure or needing a database administrator. BigQuery uses SQL and takes advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
TPC-DS Background
In order to benchmark the performance of a data warehouse we first must get tables and data to run queries against. There is a public organization, TPC, that provides large benchmarking datasets to companies explicitly for this purpose. The purpose of TPC benchmarks is to provide relevant, objective performance data to industry users.
The TPC-DS Dataset we will be using comprises of 25 tables and 99 queries that simulate common data analysis tasks. View the full documentation here.
Exploring TPC-DS in BigQuery
The TPC-DS tables have been loaded into BigQuery and you will explore ways to optimize the performance of common queries by using BigQuery data warehousing best practices. We have limited the size to 2TB for the timing of this lab but the dataset itself can be expanded as needed.
Note: The TPC Benchmark and TPC-DS are trademarks of the Transaction Processing Performance Council (http://www.tpc.org). The Cloud DW benchmark is derived from the TPC-DS Benchmark and as such is not comparable to published TPC-DS results.
Exploring the Schema with SQL
Question:
- How many tables are in the dataset?
- What is the name of the largest table (in GB)? How many rows does it have?
End of explanation
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
Explanation: The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables):
These systems are driven by the core functions of our retail business. As you can see, our store accepts sales from online (web), mail-order (catalog), and in-store. The business must keep track of inventory and can offer promotional discounts on items sold.
Exploring all available columns of data
Question:
- How many columns of data are in the entire dataset (all tables)?
End of explanation
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
WHERE
is_partitioning_column = 'YES' OR clustering_ordinal_position IS NOT NULL
Explanation: Question:
- Are any of the columns of data in this baseline dataset partitioned or clustered?
End of explanation
%%bigquery
SELECT
COUNT(column_name) AS column_count,
table_name
FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
GROUP BY table_name
ORDER BY column_count DESC, table_name
Explanation: Question
- How many columns of data does each table have (sorted by most to least?)
- Which table has the most columns of data?
End of explanation
%%bigquery --verbose
SELECT
cs_item_sk,
COUNT(cs_order_number) AS total_orders,
SUM(cs_quantity) AS total_quantity,
SUM(cs_ext_sales_price) AS total_revenue,
SUM(cs_net_profit) AS total_profit
FROM
`qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
GROUP BY
cs_item_sk
ORDER BY
total_orders DESC
LIMIT
100
Explanation: Previewing sample rows of data values
In the BigQuery UI, find the Resources panel and search for catalog_sales. You may need to add the qwiklabs-resources project to your UI by clicking + Add Data -> Pin a project and entering qwiklabs-resources
Click on the catalog_sales table name for the tpcds_2t_baseline dataset under qwiklabs-resources
Question
- How many rows are in the table?
- How large is the table in TB?
Hint: Use the Details button in the web UI to quickly access table metadata
Question:
- Preview the data and find the Catalog Sales Extended Sales Price cs_ext_sales_price field (which is calculated based on product quantity * sales price)
- Are there any missing data values for Catalog Sales Quantity (cs_quantity)?
- Are there any missing values for cs_ext_ship_cost? For what type of product could this be expected? (Digital products)
Create an example sales report
Write a query that shows key sales stats for each item sold from the Catalog and execute it in the BigQuery UI:
- total orders
- total unit quantity
- total revenue
- total profit
- sorted by total orders highest to lowest, limit 100
End of explanation
!head --lines=50 'sql/example_baseline_queries.sql'
Explanation: A note on our data: The TPC-DS benchmark allows data warehouse practitioners to generate any volume of data programmatically. Since the rows of data are system generated, they may not make the most sense in a business context (like why are we selling our top product at such a huge profit loss!).
The good news is that to benchmark our performance we care most about the volume of rows and columns to run our benchmark against.
Analyzing query performance
Click on Execution details
Refer to the chart below (which should be similar to your results) and answer the following questions.
Question
- How long did it take the query to run? 5.1s
- How much data in GB was processed? 150GB
- How much slot time was consumed? 1hr 24min
- How many rows were input? 2,881,495,086
- How many rows were output as the end result (before the limit)? 23,300
- What does the output rows mean in the context of our query? (23,300 unique cs_item_sk)
Side note: Slot Time
We know the query took 5.1 seconds to run so what does the 1hr 24 min slot time metric mean?
Inside of the BigQuery service are lots of virtual machines that massively process your data and query logic in parallel. These workers, or "slots", work together to process a single query job really quickly. For accounts with on-demand pricing, you can have up to 2,000 slots.
So say we had 30 minutes of slot time or 1800 seconds. If the query took 20 seconds in total to run,
but it was 1800 seconds worth of work, how many workers at minimum worked on it?
1800/20 = 90 (a short code version of this calculation appears right after this section)
And that's assuming each worker instantly had all the data it needed (no shuffling of data between workers) and was at full capacity for all 20 seconds!
In reality, workers have a variety of tasks (waiting for data, reading it, performing computations, and writing data)
and also need to compare notes with each other on what work was already done on the job. The good news for you is
that you don't need to worry about optimizing these workers or the underlying data to run perfectly in parallel. That's why BigQuery is a managed service -- there's an entire team dedicated to hardware and data storage optimization.
In case you were wondering, the worker limit for your project is 2,000 slots at once.
Running a performance benchmark
To performance benchmark our data warehouse in BigQuery we need to create more than just a single SQL report. The good news is the TPC-DS dataset ships with 99 standard benchmark queries that we can run and log the performance outcomes.
In this lab, we are doing no adjustments to the existing data warehouse tables (no partitioning, no clustering, no nesting) so we can establish a performance benchmark to beat in future labs.
Viewing the 99 pre-made SQL queries
We have a long SQL file with 99 standard queries against this dataset stored in our /sql/ directory.
Let's view the first 50 lines of those baseline queries to get familiar with how we will be performance benchmarking our dataset.
End of explanation
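Before running the benchmark, here is the slot-time arithmetic from the side note above as a tiny Python sketch (illustrative only; the 30 minutes and 20 seconds are the example figures from the text, not measured values).
# minimum number of slots ("workers") implied by the example above
slot_time_sec = 30 * 60    # 30 minutes of slot time
wall_clock_sec = 20        # elapsed query time
min_workers = slot_time_sec / wall_clock_sec
print(min_workers)         # 90, assuming perfectly parallel work with no shuffling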
%%bigquery --verbose
# start query 1 in stream 0 using template query96.tpl
select count(*)
from `qwiklabs-resources.tpcds_2t_baseline.store_sales` as store_sales
,`qwiklabs-resources.tpcds_2t_baseline.household_demographics` as household_demographics
,`qwiklabs-resources.tpcds_2t_baseline.time_dim` as time_dim,
`qwiklabs-resources.tpcds_2t_baseline.store` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
Explanation: Running the first benchmark test
Now let's run the first query against our dataset and note the execution time. Tip: You can use the --verbose flag in %%bigquery magics to return the job and completion time.
End of explanation
!bq ls -j -a -n 5
!bq show --format=prettyjson -j 612a4b28-cb5c-4e0b-ad5b-ebd51c3b2439
Explanation: It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically cache the results from the first time you ran the query and then serve those same results to you when you run the query again. We can confirm this by analyzing the query job statistics.
Viewing BigQuery job statistics
Let's list our five most recent query jobs run on BigQuery using the bq command line interface. Then we will get even more detail on our most recent job with the bq show command. Be sure to replace the job id with your own.
End of explanation
%%bash
bq query \
--dry_run \
--nouse_cache \
--use_legacy_sql=false \
\
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
# Convert bytes to GB
132086388641 / 1e+9
Explanation: Looking at the job statistics we can see our most recent query hit cache
- cacheHit: true and therefore
- totalBytesProcessed: 0.
While this is great in normal uses for BigQuery (you aren't charged for queries that hit cache) it kind of ruins our performance test. While cache is super useful we want to disable it for testing purposes.
Disabling Cache and Dry Running Queries
As of the time this lab was created, you can't pass a flag to %%bigquery iPython notebook magics to disable cache or to quickly see the amount of data processed. So we will use the traditional bq command line interface in bash.
First we will do a dry run of the query without processing any data just to see how many bytes of data would be processed. Then we will remove that flag and ensure nouse_cache is set to avoid hitting cache as well.
End of explanation
%%bash
bq query \
--nouse_cache \
--use_legacy_sql=false \
\
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
Explanation: 132 GB will be processed. At the time of writing, BigQuery pricing is \$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \$0.66 to run.
Now let's run it and ensure we're not pulling from cache so we get an accurate time-to-completion benchmark.
End of explanation
%%bash
export PROJECT_ID=$(gcloud config list --format 'value(core.project)')
export BENCHMARK_DATASET_NAME=tpcds_2t_baseline # Name of the dataset you want to create
## Create the benchmark BigQuery dataset if it doesn't exist
datasetexists=$(bq ls -d | grep -w $BENCHMARK_DATASET_NAME)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset $BENCHMARK_DATASET_NAME already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: $BENCHMARK_DATASET_NAME"
bq --location=US mk --dataset \
--description 'Benchmark Dataset' \
$PROJECT_ID:$BENCHMARK_DATASET_NAME
echo "\nHere are your current datasets:"
bq ls
fi
Explanation: If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below:
It's a matter of preference whether you do your work in the Web UI or the command line -- each has its advantages.
One major advantage of using the bq command line interface is the ability to create a script that will run the remaining 98 benchmark queries for us and log the results.
Copy the qwiklabs-resources dataset into your own GCP project
We will use the new BigQuery Transfer Service to quickly copy our large dataset from the qwiklabs-resources GCP project into your own so you can perform the benchmarking.
Create a new baseline dataset in your project
End of explanation
%%bigquery
SELECT COUNT(*) AS store_transaction_count
FROM tpcds_2t_baseline.store_sales
Explanation: Use the BigQuery Data Transfer Service to copy an existing dataset
Enable the BigQuery Data Transfer Service API
Navigate to the BigQuery console and the existing qwiklabs-resources dataset
Click Copy Dataset
In the pop-up, choose your project name and the newly created dataset name from the previous step
Click Copy
Wait for the transfer to complete
Verify you now have the baseline data in your project
Run the below query and confirm you see data. Note that if you omit the project-id ahead of the dataset name in the FROM clause, BigQuery will assume your default project.
End of explanation
%%bash
# runs the SQL queries from the TPCDS benchmark
# Pull the current Google Cloud Platform project name
BQ_DATASET="tpcds_2t_baseline" # let's start by benchmarking our baseline dataset
QUERY_FILE_PATH="./sql/example_baseline_queries.sql" # the full test is on 99_baseline_queries but that will take 80+ mins to run
IFS=";"
# create perf table to keep track of run times for all 99 queries
printf "\033[32;1m Housekeeping tasks... \033[0m\n\n";
printf "Creating a reporting table perf to track how fast each query runs...";
perf_table_ddl="CREATE TABLE IF NOT EXISTS $BQ_DATASET.perf(performance_test_num int64, query_num int64, elapsed_time_sec int64, ran_on int64)"
bq rm -f $BQ_DATASET.perf
bq query --nouse_legacy_sql $perf_table_ddl
start=$(date +%s)
index=0
for select_stmt in $(<$QUERY_FILE_PATH)
do
# run the test until you hit a line with the string 'END OF BENCHMARK' in the file
if [[ "$select_stmt" == *'END OF BENCHMARK'* ]]; then
break
fi
printf "\n\033[32;1m Let's benchmark this query... \033[0m\n";
printf "$select_stmt";
SECONDS=0;
bq query --use_cache=false --nouse_legacy_sql $select_stmt # critical to turn cache off for this test
duration=$SECONDS
# get current timestamp in milliseconds
ran_on=$(date +%s)
index=$((index+1))
printf "\n\033[32;1m Here's how long it took... \033[0m\n\n";
echo "Query $index ran in $(($duration / 60)) minutes and $(($duration % 60)) seconds."
printf "\n\033[32;1m Writing to our benchmark table... \033[0m\n\n";
insert_stmt="insert into $BQ_DATASET.perf(performance_test_num, query_num, elapsed_time_sec, ran_on) values($start, $index, $duration, $ran_on)"
printf "$insert_stmt"
bq query --nouse_legacy_sql $insert_stmt
done
end=$(date +%s)
printf "Benchmark test complete"
Explanation: Setup an automated test
Running each of the 99 queries manually via the Console UI would be a tedious effort. We'll show you how you can run all 99 programmatically and automatically log the output (time and GB processed) to a log file for analysis.
Below is a shell script that:
1. Accepts a BigQuery dataset to benchmark
2. Accepts a list of semi-colon separated queries to run
3. Loops through each query and calls the bq query command
4. Records the execution time into a separate BigQuery performance table perf
Execute the below statement and follow along with the results as you benchmark a few example queries (don't worry, we've already ran the full 99 recently so you won't have to).
After executing, wait 1-2 minutes for the benchmark test to complete
End of explanation
%%bigquery
SELECT * FROM tpcds_2t_baseline.perf
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
ORDER BY ran_on
Explanation: Viewing the benchmark results
As part of the benchmark test, we stored the processing time of each query into a new perf BigQuery table. We can query that table and get some performance stats for our test.
First are each of the tests we ran:
End of explanation
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
MAX(performance_test_num) AS latest_performance_test_num,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec
FROM
tpcds_2t_baseline.perf
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
Explanation: And finally, the overall statistics for the entire test:
End of explanation
%%bigquery
SELECT
TIMESTAMP_SECONDS(performance_test_num) AS test_date,
query_num,
TIMESTAMP_SECONDS(ran_on) AS query_ran_on,
TIMESTAMP_SECONDS(ran_on + elapsed_time_sec) AS query_completed_on,
elapsed_time_sec
FROM `qwiklabs-resources.tpcds_2t_baseline.perf` # public table
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM `qwiklabs-resources.tpcds_2t_baseline.perf`)
ORDER BY ran_on
Explanation: Benchmarking all 99 queries
As we mentioned before, we already ran all 99 queries and recorded the results and made them available for you to query in a public table:
End of explanation
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
ROUND(SUM(elapsed_time_sec)/60,2) AS total_time_min,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec,
ROUND(AVG(elapsed_time_sec),2) AS avg_query_time_sec
FROM
`qwiklabs-resources.tpcds_2t_baseline.perf`
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM `qwiklabs-resources.tpcds_2t_baseline.perf`)
Explanation: And the results of the complete test:
End of explanation |
11,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Roadrunner methods
Querying an Antimony model from a model database
Step1: Create an instance of roadrunner, loading the Repressilator as the model at the same time. Use tellurium's loada() for this.
Step2: In the following part we want to try out some of the methods of tellurium's roadrunner.
To do so, display the model as Antimony or SBML. You can do this with getAntimony() or getSBML().
Step3: Solver methods
Caution
Step4: รndere den verwendeten Solver von 'CVODE' auf Runge-Kutta 'rk45' und lass dir die Settings nochmals anzeigen.
Verwende dazu setIntegrator() und getIntegrator().
Was fรคllt auf?
Step5: Simulate the Repressilator from 0 s to 1000 s and plot the results for different values of steps (e.g. steps = 10 or 10000) in the simulate method. What does the steps argument do?
Step6: Keep using 'CVODE' and change the solver parameter 'relative_tolerance' (e.g. to 1 or 10).
Use steps = 10000 in simulate() while doing so.
What do you notice?
Hint - the method you need is roadrunner.getIntegrator().setValue().
Step7: The ODE model as an object in Python
Above we saw that tellurium creates a RoadRunner instance when a model is loaded.
In addition, the underlying model itself can be accessed. Via .model there are additional methods for manipulating the actual model
Step8: Exercise 1 - parameter scan
Step9: Exercise 2 - initial value scan | Python Code:
Repressilator = urllib2.urlopen('http://antimony.sourceforge.net/examples/biomodels/BIOMD0000000012.txt').read()
Explanation: Roadrunner methods
Querying an Antimony model from a model database:
Download the Antimony model of the "Repressilator" with urllib2. Use the urllib2 methods urlopen() and read() for this.
The URL for the Repressilator is:
http://antimony.sourceforge.net/examples/biomodels/BIOMD0000000012.txt
Elowitz, M. B., & Leibler, S. (2000). A synthetic oscillatory network of transcriptional regulators. Nature, 403(6767), 335-338.
End of explanation
rr = te.loada(Repressilator)
Explanation: Create an instance of roadrunner, loading the Repressilator as the model at the same time. Use tellurium's loada() for this.
End of explanation
print rr.getAntimony()
print rr.getSBML()
Explanation: In the following part we want to try out some of the methods of tellurium's roadrunner.
To do so, display the model as Antimony or SBML. You can do this with getAntimony() or getSBML().
End of explanation
rr = te.loada(Repressilator)
print rr.getIntegrator()
Explanation: Solver methods
Caution: Although resetToOrigin() resets the model to its original state, solver-specific settings are kept. It is therefore best to always use te.loada() as a complete reset!
With getIntegrator() you can display the solver and its current settings.
End of explanation
rr = te.loada(Repressilator)
rr.setIntegrator('rk45')
print rr.getIntegrator()
Explanation: รndere den verwendeten Solver von 'CVODE' auf Runge-Kutta 'rk45' und lass dir die Settings nochmals anzeigen.
Verwende dazu setIntegrator() und getIntegrator().
Was fรคllt auf?
End of explanation
rr = te.loada(Repressilator)
rr.simulate(0,1000,1000)
rr.plot()
Explanation: Simulate the Repressilator from 0 s to 1000 s and plot the results for different values of steps (e.g. steps = 10 or 10000) in the simulate method. What does the steps argument do?
End of explanation
rr = te.loada(Repressilator)
rr.getIntegrator().setValue('relative_tolerance',0.0000001)
rr.getIntegrator().setValue('relative_tolerance',1)
rr.simulate(0,1000,1000)
rr.plot()
Explanation: Keep using 'CVODE' and change the solver parameter 'relative_tolerance' (e.g. to 1 or 10).
Use steps = 10000 in simulate() while doing so.
What do you notice?
Hint - the method you need is roadrunner.getIntegrator().setValue().
End of explanation
rr = te.loada(Repressilator)
print type(rr)
print type(rr.model)
Explanation: The ODE model as an object in Python
Above we saw that tellurium creates a RoadRunner instance when a model is loaded.
In addition, the underlying model itself can be accessed. Via .model there are additional methods for manipulating the actual model (a short illustrative sketch follows right after this cell):
End of explanation
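A short illustrative sketch (not part of the original exercise): a few of the additional methods reachable through .model. The calls below follow the usual roadrunner ExecutableModel API and should be checked against your installed version.
rr = te.loada(Repressilator)
print rr.model.getFloatingSpeciesIds()      # names of the ODE species
print rr.model.getGlobalParameterIds()      # names of the model parameters
print rr.model.getFloatingSpeciesAmounts()  # current species amounts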
import matplotlib.pyplot as plt
import numpy as np
fig_phase = plt.figure(figsize=(5,5))
rr = te.loada(Repressilator)
for l,i in enumerate([1.0,1.7,3.0,10.]):
fig_phase.add_subplot(2,2,l+1)
rr.n = i
rr.reset()
result = rr.simulate(0,500,500,selections=['time','X','PX'])
plt.plot(result['X'],result['PX'],label='n = %s' %i)
plt.xlabel('X')
plt.ylabel('PX')
plt.legend()
plt.tight_layout()
fig_timecourse= plt.figure(figsize=(5,5))
rr = te.loada(Repressilator)
for l,i in enumerate([1.0,1.7,3.0,10.]):
rr.n = i
rr.reset()
result = rr.simulate(0,500,500,selections=['time','X','PX'])
plt.plot(result['time'],result['PX'],label='PX; n = %s' %i)
plt.xlabel('time')
plt.ylabel('Species amounts')
plt.legend()
plt.tight_layout()
Explanation: Exercise 1 - parameter scan:
A) Look at the implementation of the 'Repressilator' model. Which parameters does it have?
B) Create a parameter scan that changes the value of the parameter named 'n' in the Repressilator.
(For example for n=1, n=2, n=3, ...)
Simulate the model for each chosen 'n'.
Then answer the following questions:
a) What purpose does 'n' serve in the model with respect to the reaction in which 'n' appears?
b) In contrast, what effect does 'n' have on the model behaviour?
c) Can you give a value of 'n' at which the behaviour of the model changes qualitatively?
C) Visualise the simulations. Which kind of plot is convenient for showing the model simulations? There are several suitable options, but limit the number of curves in the plot (e.g. pick one species and plot it).
Hints:
Use the "autocompletion" of the Python notebook and also the official RoadRunner documentation to find the methods needed to implement a parameter scan. Of course you can also use the notebook from the Tellurium introduction as a reference.
Consider that the model needs one or more resets. Think about where in your implementation, and which reset method, you should ideally use.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
rr = te.loada(Repressilator)
print rr.model.getFloatingSpeciesInitAmountIds()
print rr.model.getFloatingSpeciesInitAmounts()
for l,i in enumerate([1,5,10,20]):
# A selection of some variants (there are more possibilities...)
# Variant 1 - wrong
#rr.Y=i
# Variant 2 - wrong
#rr.Y=i
#rr.reset()
# Variant 3 - correct
rr.model["init(Y)"] = i
rr.reset()
result = rr.simulate(0,10,1000,selections=['Y','PY'])
#plt.plot(result[:,0],result['PY'],label='n = %s' %i)
plt.plot(result['Y'],label='initial Y = %s' %i)
plt.xlabel('time')
plt.ylabel('Species in amounts')
plt.axhline(y=i,linestyle = ':',color='black')
plt.legend()
Explanation: Exercise 2 - initial value scan:
Create a "scan" that changes the initial value of the species Y.
The model behaviour itself is less interesting here.
Rather, make sure to place the resets so that 'Y' actually starts the simulation at the value you set.
End of explanation |
11,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
Step2: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient)
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Now, run backward propagation.
Step12: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?.
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still
Step14: Expected output | Python Code:
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
Explanation: Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
### START CODE HERE ### (approx. 1 line)
J = theta * x
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
Explanation: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
End of explanation
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
Explanation: Expected Output:
<table style=>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
End of explanation
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
Explanation: Expected Output:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
End of explanation
def forward_propagation_n(X, Y, parameters):
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
Explanation: Expected Output:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation().
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
End of explanation
def backward_propagation_n(X, Y, cache):
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T) * 2
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
Explanation: Now, run backward propagation.
End of explanation
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 1e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?.
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
1. Set $\theta^{+}$ to np.copy(parameters_values)
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
End of explanation
def backward_propagation_n(X, Y, cache):
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Explanation: Expected output:
<table>
<tr>
<td> ** There is a mistake in the backward propagation!** </td>
<td> difference = 0.285093156781 </td>
</tr>
</table>
It seems that there were errors in the backward_propagation_n code we gave you! Good that you've implemented the gradient check. Go back to backward_propagation and try to find/correct the errors (Hint: check dW2 and db1). Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining backward_propagation_n() if you modify the code.
Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented.
Note
- Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct.
- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout.
Congrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :)
<font color='blue'>
What you should remember from this notebook:
- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).
- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.
Correct the backward propagation and rerun the gradient check:
End of explanation |
11,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random grids
Because sometimes you just want some random data.
This module implements 2D fractal noise, combining one or more octaves of Perlin noise.
Step1: The noise does not have to be isotropic | Python Code:
import gio
g = gio.generate_random_surface(200, res=3, octaves=3)
g.plot()
Explanation: Random grids
Because sometimes you just want some random data.
This module implements 2D fractal noise, combining one or more octaves of Perlin noise.
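The octave-summing idea itself is easy to sketch outside of gio. The snippet below is only a rough stand-in (smoothed random grids instead of true Perlin noise) to show how octaves at doubling frequency and halving amplitude are combined:
import numpy as np
from scipy.ndimage import zoom

def fractal_noise(size=200, res=3, octaves=3):
    surface = np.zeros((size, size))
    for o in range(octaves):
        freq = res * 2**o          # double the spatial frequency each octave
        amp = 0.5**o               # halve the amplitude each octave
        coarse = np.random.rand(freq + 1, freq + 1)
        fine = zoom(coarse, size / (freq + 1), order=3)
        surface += amp * fine[:size, :size]
    return surface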
End of explanation
g = gio.generate_random_surface(200, res=(2, 5), octaves=3)
g.plot()
Explanation: The noise does not have to be isotropic:
End of explanation |
11,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Control flow
So far we have written simple Python instructions that only considered a single possibility. We did not evaluate anything; we simply gave orders and Python obeyed. At most, when we made a syntax error, Python complained and nothing more.
From now on we are going to learn how to manage control flow. We will take multiple outcomes into account and pick a specific one depending on the values our variables hold or on what is happening in the program at a given moment.
Comparison operators
The simplest elements of control flow are the comparison operators
Step1: Conditionals
In Python we can evaluate conditions with the if statement. if opens what is called a code block, which has its own particular syntax
Step2: We can evaluate more complex conditions, for example by defining an alternative for when the result of the comparison is False. For that we use if-else statements, with the following syntax
Step3: Finally, we can evaluate several different conditions with if-elif-else statements, with the following syntax
Step4: Logical (boolean) operators
Logical operators are words used to connect Python expressions in a grammatically correct way, almost as if we were doing it in natural language. There are three logical operators
Step5: Below are more examples of evaluating conditions with if code blocks, based on questions that came up in class. Practice with them to understand how they work. | Python Code:
# assign a few values to variables
numero1 = 2
numero2 = 34
print(numero1 == numero2)
print(numero1 != numero2)
print(numero1 == numero1)
print(numero2 <= 10)
print(19 >= (10 * numero1))
print("------------------")
print(10 == (5*2))
print(MiVariable != 10)  # note: MiVariable was never defined, so this line raises a NameError
Explanation: Control flow
So far we have written simple Python instructions that only considered a single possibility. We did not evaluate anything; we simply gave orders and Python obeyed. At most, when we made a syntax error, Python complained and nothing more.
From now on we are going to learn how to manage control flow. We will take multiple outcomes into account and pick a specific one depending on the values our variables hold or on what is happening in the program at a given moment.
Comparison operators
The simplest elements of control flow are the comparison operators:
Equal to ==.
Not equal to !=.
Less than <.
Less than or equal to <=.
Greater than >.
Greater than or equal to >=.
Careful! Do not confuse the equality comparator == with the symbol =, which we use to assign values to variables.
When we use a comparator to compare two expressions, the result the comparison returns is a boolean value: True or False.
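For example, a comparison is just a value of type bool that can be stored in a variable, and Python also lets you chain comparisons (the variable names here are only for illustration):
x = 7
en_rango = 1 <= x <= 10
print(en_rango)        # True
print(type(en_rango))  # <class 'bool'>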
End of explanation
nombre = 'Víctor'
if nombre == 'Víctor':
print('¡Hey! Te llamas igual que yo.')
print("Pues muy bien")
diadelasemana = 'jueves'
if diadelasemana != 'domingo':
# today is not Sunday
print("No vas a misa")
if diadelasemana == 'viernes':
print("Empieza el fin de semana")
print("Esta es la รบltima lรญnea.")
if 10 == 5*2:
print('10 es igual a 5 veces 2')
Explanation: Conditionals
In Python we can evaluate conditions with the if statement. if opens what is called a code block, which has its own particular syntax:
if CONDITION:
# if the comparison is True, run the following
INSTRUCTIONS_1
Pay attention to two things:
code blocks begin when an instruction ends in a colon :
the code inside the block appears indented.
This indentation is typed with the Tab key or with spaces (usually four). It is very important to keep the instructions inside the same code block correctly aligned; otherwise we can run into a syntax error.
End of explanation
# try changing the value assigned to the variable edad
edad = 44
if edad >= 65:
print('¡Enhorabuena, estás jubilado!')
else:
print('Deberías estar trabajando, si te dejan.')
Explanation: We can evaluate more complex conditions, for example by defining an alternative for when the result of the comparison is False. For that we use if-else statements, with the following syntax:
if CONDITION:
# if the comparison is True, run the following
INSTRUCTIONS_1
else:
# otherwise, if the comparison is False, run
INSTRUCTIONS_2
End of explanation
# try running this cell several times, changing the value assigned to the variable temperatura
temperatura = 22
if temperatura <= 0:
print('¡Está helando!')
elif 1 <= temperatura <= 10:
print('¡Hace frescuni!')
elif 11 <= temperatura <= 25:
print('¡Ya es primavera!')
else:
print('¡Buff, qué calor!')
Explanation: Finally, we can evaluate several different conditions with if-elif-else statements, with the following syntax:
if CONDITION1:
# if this comparison is True, run the following
INSTRUCTIONS_1
elif CONDITION2:
# if this comparison is True, run the following
INSTRUCTIONS_2
else:
# otherwise, if no comparison is True, run
INSTRUCTIONS_3
Think of elif as a shorthand for else + if.
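A quick sketch of that equivalence (illustrative values only): an if-elif-else chain behaves like an else that contains another if:
nota = 7
if nota >= 9:
    print('sobresaliente')
elif nota >= 5:
    print('aprobado')
else:
    print('suspenso')
# which is equivalent to:
if nota >= 9:
    print('sobresaliente')
else:
    if nota >= 5:
        print('aprobado')
    else:
        print('suspenso')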
End of explanation
nombre = 'Víctor'
edad = 37
if nombre == 'Víctor' and edad == 38:
print('¡Hey! ¡Eres yo! ¿Quién te envía?')
elif nombre == 'Víctor' or edad == 38:
print('Te pareces a mí en algo.')
if not nombre == 'Víctor' and not edad == 38: # this is equivalent to: if nombre != "Víctor" and edad != 38:
print('No tienes nada que ver conmigo')
alumnos = ["Pepito", "Raul", "Ana", "Antonio", "Maria"]
print(alumnos)
if 'Paco' in alumnos and 'Ana' in alumnos:
print('Paco y Ana están en clase.')
else:
print('No es cierto que Paco y Ana estén en clase')
if "Paco" in alumnos or "Ana" in alumnos:
print('Paco o Ana, uno de los dos o ambos, está en clase.')
else:
print('No ha venido ninguno')
Explanation: Logical (boolean) operators
Logical operators are words used to connect Python expressions in a grammatically correct way, almost as if we were doing it in natural language. There are three logical operators:
the conjunction and.
the disjunction or.
the negation not.
Just like comparisons, logical operators produce boolean values: True or False.
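As a minimal illustration of the three operators on their own:
print(True and False)   # False
print(True or False)    # True
print(not True)         # False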
End of explanation
alumnos = ["Pepito", "Raul", "Ana", "Antonio", "Maria"]
hoy = "viernes"
if "Pepito" in alumnos:
print("Pepito estรก en clase")
if "Ana" in alumnos:
print("Ana estรก en clase")
if "Pepito" in alumnos and "Ana" not in alumnos and hoy == "viernes":
print("Pepito ha venido, pero Ana no. Menos mal que es viernes")
if "Pepito" in alumnos and "Ana" not in alumnos:
print("Pepito ha venido, pero Ana no")
if hoy == "viernes":
print("Menos mal que es viernes")
alumnos = 'Pepito Raul Ana Antonio Maria'.split()
print(alumnos)
numeros = "1 2 3 4 5 6 7 8 9 10".split()
print(numeros)
print(int(numeros[0]) + int(numeros[-1]))
# a few examples to tell apart how == and in behave
palabras_minusculas = "hola casa amigo wertyj"
palabras_mayusculas = "AMIGO VIERNES CAFÉ"
print("casa" in palabras_minusculas)
print("O V" in palabras_mayusculas)
print("amigo" in "amigo")
print("amigo" == "amigo")
print("amigo" in "amigos")
print("amigo" == "amigos")
Explanation: Below are more examples of evaluating conditions with if code blocks, based on questions that came up in class. Practice with them to understand how they work.
End of explanation |
11,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If you want to print the column names: df.columns
If you want to print the number of columns: len(df.columns)
Step1: If you want to see what the unique values of a column are
np.unique(df[col].astype(str))
Put col in a variable and run this inside a for loop.
Step2: How to set up the canvas when drawing a plot
plt.subplots(figsize=(10,5))
f => figure
ax => axes
countplot
Step3: If you want the count of each specific value
trn[col].value_counts()
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html
Step4: A tip for when numbers are stored as strings with surrounding whitespace
casting with int() strips the whitespace away
Step5: Preprocessing
df[col].replace("NA", 0, inplace=True) => replace the "NA" values with 0, then
df[col] = df[col].astype(int) => cast the column to int and assign it back
Step6: tolist()
=> determines whether you get back a plain list or an array
Step7: The difference between iloc, loc and ix on a Python pandas DataFrame
.iloc
Looks values up by integer position; it cannot look up by label.
.loc
Looks values up by label; it cannot look up by integer position.
.ix
Can use both integer position and label. If the label is a number, it behaves as a label-based index only.
http://yeyej.blogspot.kr/2016/02/pandas-dataframe-iloc-loc-ix.html
Step8: Label encoding
lb = LabelEncoder()
lb.fit_transform(data) | Python Code:
print(trn.columns)
print(len(trn.columns))
Explanation: If you want to print the column names: df.columns
If you want to print the number of columns: len(df.columns)
End of explanation
np.unique(trn["sexo"].astype(str))
Explanation: If you want to see what the unique values of a column are
np.unique(df[col].astype(str))
Put col in a variable and run this inside a for loop (a small sketch follows below).
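A small sketch of that loop, assuming trn is the DataFrame being explored here:
for col in trn.columns:
    print(col, np.unique(trn[col].astype(str)))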
End of explanation
f, ax = plt.subplots(figsize=(10, 5))
sns.countplot(x=trn["fecha_dato"], data=trn, alpha=0.5)
plt.show()
Explanation: How to set up the canvas when drawing a plot
plt.subplots(figsize=(10,5))
f => figure
ax => axes
countplot : visualises how many data points fall in each category
barplot : a bar chart
jointplot : a plot showing the relationship between two variables; its style depends on the kind argument (a small usage sketch follows below)
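For instance, a rough sketch of the other plot types (the column names here are placeholders, not columns from this dataset):
f, ax = plt.subplots(figsize=(10, 5))
sns.barplot(x='some_category', y='some_value', data=trn, ax=ax)
plt.show()
sns.jointplot(x='feature_1', y='feature_2', data=trn, kind='kde')
plt.show()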
End of explanation
trn["fecha_dato"].value_counts()
Explanation: If you want the count of each specific value
trn[col].value_counts()
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html
End of explanation
aaaaa = " 2 "
int(aaaaa)
Explanation: A tip for when numbers are stored as strings with surrounding whitespace:
casting the string with int() strips the whitespace away
End of explanation
train_data[col].replace(' NA',0,inplace=True)
train_data[col] = train_data[col].astype(int)
Explanation: Preprocessing
df[col].replace("NA", 0, inplace=True) => replace the "NA" values with 0, then
df[col] = df[col].astype(int) => cast the column to int and assign it back
End of explanation
np.unique(trn["sexo"].astype(str)).tolist()
np.unique(trn["sexo"].astype(str))
label_cols = trn.columns[24:] .tolist()
trn.groupby(['fecha_dato'])[label_cols[i]].agg('sum')
label_cols
# np.asarray
# Convert the input to an array.
# Parameters
# ----------
# a : array_like
# Input data, in any form that can be converted to an array. This
# includes lists, lists of tuples, tuples, tuples of tuples, tuples
# of lists and ndarrays.
# dtype : data-type, optional
# By default, the data-type is inferred from the input data.
# order : {'C', 'F'}, optional
# Whether to use row-major (C-style) or
# column-major (Fortran-style) memory representation.
# Defaults to 'C'.
# Returns
# -------
# out : ndarray
# Array interpretation of `a`. No copy is performed if the input
# is already an ndarray. If `a` is a subclass of ndarray, a base
# class ndarray is returned.
# See Also
# --------
# asanyarray : Similar function which passes through subclasses.
# ascontiguousarray : Convert input to a contiguous array.
# asfarray : Convert input to a floating point ndarray.
# asfortranarray : Convert input to an ndarray with column-major
# memory order.
# asarray_chkfinite : Similar function which checks input for NaNs and Infs.
# fromiter : Create an array from an iterator.
# fromfunction : Construct an array by executing a function on grid
# positions.
# Examples
# --------
# Convert a list into an array:
# >>> a = [1, 2]
# >>> np.asarray(a)
# array([1, 2])
# Existing arrays are not copied:
# >>> a = np.array([1, 2])
# >>> np.asarray(a) is a
# True
# If `dtype` is set, array is copied only if dtype does not match:
# >>> a = np.array([1, 2], dtype=np.float32)
# >>> np.asarray(a, dtype=np.float32) is a
# True
# >>> np.asarray(a, dtype=np.float64) is a
# False
# Contrary to `asanyarray`, ndarray subclasses are not passed through:
# >>> issubclass(np.matrix, np.ndarray)
# True
# >>> a = np.matrix([[1, 2]])
# >>> np.asarray(a) is a
# False
# >>> np.asanyarray(a) is a
# True
Explanation: tolist()
=> determines whether you get back a plain list or an array
End of explanation
# data = []
# for ind, (run, row) in enumerate(train.iterrows()):
# for i in range(24):
# if row[24+i] == 1:
# temp = row[:24].values.tolist()
# temp.append(i)
# data.append(temp)
Explanation: The difference between iloc, loc and ix on a Python pandas DataFrame
.iloc
Looks values up by integer position; it cannot look up by label.
.loc
Looks values up by label; it cannot look up by integer position.
.ix
Can use both integer position and label. If the label is a number, it behaves as a label-based index only.
http://yeyej.blogspot.kr/2016/02/pandas-dataframe-iloc-loc-ix.html
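A tiny demonstration of the difference (df here is a throwaway example; note that .ix is deprecated and removed in recent pandas versions):
df = pd.DataFrame({'a': [10, 20, 30]}, index=[2, 0, 1])
print(df.iloc[0])  # by position    -> the row where a == 10
print(df.loc[0])   # by index label -> the row where a == 20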
End of explanation
# lb = LabelEncoder()
# skip_cols = ['fecha_dato','ncodpers','target']
# # histogram of features
# for col in trn.columns:
# if col in skip_cols:
# continue
# print('='*50)
# print('col : ', col)
# # check category or number
# if col in category_cols:
# x = lb.fit_transform(trn[col])
# sns.jointplot(x,np.asarray(trn['target'])*1.0, kind="kde")
# else:
# x = trn[col]
# sns.jointplot(x,trn['target'], kind="kde")
# plt.show()
Explanation: Label encoding
lb = LabelEncoder()
lb.fit_transform(data)
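A minimal usage sketch (the values are made up for illustration):
from sklearn.preprocessing import LabelEncoder
lb = LabelEncoder()
encoded = lb.fit_transform(['A', 'B', 'A', 'C'])
print(encoded)      # [0 1 0 2]
print(lb.classes_)  # ['A' 'B' 'C']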
End of explanation |
11,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an optical system using GaussOpt
GaussOpt can be used to analyze quasioptical systems using Gaussian beam analysis. In this notebook, we walk through the basics of setting up a Gaussian beam telescope.
Step1: Gaussian beam telescopes
Gaussian beam telescopes use two parabolic mirrors to couple energy between two horn antennas. If the mirrors have focal lengths $f$, then the mirrors should be separated by $2\,f$ and the distance between each horn's beam waist and its respective mirror should be $f$ (as shown below).
We will now build a quick script to analyze a Gaussian beam telescope.
Define frequency sweep
For this notebook, we will sweep the frequency from 150 GHz to 300 GHz. This is done using the gaussopt.Frequency class.
Step2: Define horn antennas
Horn antennas are defined by their slant length, aperture radius and horn factor. See Chapter 7 of "Quasioptical Systems" by Paul Goldsmith for more information.
We will use the gaussopt.Horn class to generate the transmitting horn (horn_tx) and then copy this horn to generate the receiving horn (horn_rx). We will use a corrugated circular feed horn, which has a horn factor of 0.64.
Step3: Define optical components
Now it is time to build the rest of the circuit, i.e., everything between the transmitting and receiving horns. We can define empty space using the gaussopt.Freespace class and mirrors using the gaussopt.Mirror class.
Step4: The distances between the horns and the mirrors have to be reduced because the actual beam waist is behind the horn aperture.
Step5: Build Optical System
We can now combine all of the individual optical components to build our optical system. This is normally done by creating a list of optical components (component_list below), starting from the transmitting horn and then listing each component to the receiving horn (in order).
The component list is then passed to the gaussopt.System class, along with the two horns, to calculate the system properties.
Step6: Plot Coupling
The coupling between the two horns can be plotted using the plot_coupling method. There should be perfect coupling at the center frequency.
Step7: Plot Beam Propagation
The beam waist can be plotted through the entire chain using the plot_system command. This method will also plot the aperture of each component to ensure that there isn't too much edge taper anywhere in the system. | Python Code:
%matplotlib inline
from gaussopt import *
import matplotlib.pyplot as plt
# Formatting for Matplotlib (optional)
# pip install SciencePlots
plt.style.use(["science", "notebook"])
Explanation: Building an optical system using GaussOpt
GaussOpt can be used to analyze quasioptical systems using Gaussian beam analysis. In this notebook, we walk through the basics of setting up a Gaussian beam telescope.
End of explanation
freq = Frequency(start=150, stop=300, npts=151, units='GHz')
Explanation: Gaussian beam telescopes
Gaussian beam telescopes use two parabolic mirrors to couple energy between two horn antennas. If the mirrors have focal lengths $f$, then the mirrors should be separated by $2\,f$ and the distance between each horn's beam waist and its respective mirror should be $f$ (as shown below).
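One way to see why this geometry works is the standard ray-matrix (ABCD) argument — a textbook result, not something taken from the gaussopt documentation. Multiplying the matrices for the f–2f–f path gives
$$ \begin{pmatrix} 1 & f \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix}\begin{pmatrix} 1 & 2f \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix}\begin{pmatrix} 1 & f \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} $$
so the Gaussian beam parameter at the output plane equals the one at the input plane ($q' = (-q)/(-1) = q$): the telescope re-images the input beam waist onto the output horn with unit magnification, independent of frequency.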
We will now build a quick script to analyze a Gaussian beam telescope.
Define frequency sweep
For this notebook, we will sweep the frequency from 150 GHz to 300 GHz. This is done using the gaussopt.Frequency class.
End of explanation
slen = 22.64 # slant length (in mm)
arad = 3.6 # aperture radius (in mm)
hfac = 0.64 # horn factor (see chp. 7 of Quasioptical Systems)
horn_tx = Horn(freq, slen, arad, hfac, units='mm', comment='Transmitting')
horn_rx = horn_tx.copy(comment='Receiving')
Explanation: Define horn antennas
Horn antennas are defined by their slant length, aperture radius and horn factor. See Chapter 7 of "Quasioptical Systems" by Paul Goldsmith for more information.
We will use the gaussopt.Horn class to generate the transmitting horn (horn_tx) and then copy this horn to generate the receiving horn (horn_rx). We will use a corrugated circular feed horn, which has a horn factor of 0.64.
End of explanation
# 16 cm of freespace (air)
d = FreeSpace(160)
# Mirrors with a focal length of 16 cm
m1 = Mirror(16, units='cm', radius=8, comment='M1')
m2 = m1.copy(comment='M2')
Explanation: Define optical components
Now it is time to build the rest of the circuit, i.e., everything between the transmitting and receiving horns. We can define empty space using the gaussopt.Freespace class and mirrors using the gaussopt.Mirror class.
End of explanation
# Offset distance (i.e., distance from beam waist to horn aperture) at 230 GHz
z_offset = horn_tx.z_offset(units='mm')[freq.idx(230, units='GHz')]
# Ideal distance between horn aperture and mirror
d_red = FreeSpace(160 - z_offset, comment='reduced')
Explanation: The distances between the horns and the mirrors have to be reduced because the actual beam waist is behind the horn aperture.
End of explanation
component_list = (d_red, m1, d, d, m2, d_red)
system = System(horn_tx, component_list, horn_rx)
Explanation: Build Optical System
We can now combine all of the individual optical components to build our optical system. This is normally done by creating a list of optical components (component_list below), starting from the transmitting horn and then listing each component to the receiving horn (in order).
The component list is then passed to the gaussopt.System class, along with the two horns, to calculate the system properties.
End of explanation
system.plot_coupling()
system.print_best_coupling()
Explanation: Plot Coupling
The coupling between the two horns can be plotted using the plot_coupling method. There should be perfect coupling at the center frequency.
End of explanation
fig, ax = plt.subplots(figsize=(8,5))
system.plot_system(ax=ax)
Explanation: Plot Beam Propagation
The beam waist can be plotted through the entire chain using the plot_system command. This method will also plot the aperture of each component to ensure that there isn't too much edge taper anywhere in the system.
End of explanation |
11,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Expecting the unexpected
To handle errors properly deserves a chapter on its own in any programming book. Python gives us many ways do deal with errors fatal and otherwise
Step1: This shows how exceptions are raised and caught, but this approach is somewhat limited. Suppose now, that we weren't expecting this expected unexpected behaviour and we wanted to compute everything before displaying our results.
Step2: Ooops! Let's fix that.
Step3: That's also not what we want. We wasted all this time computing nice reciprocals of numbers, only to find all of our results being thrown away because of one stupid zero in the input list. We can fix this.
Step4: That's better! We skipped right over the error and continued to more interesting results. So how are we going to make this solution more generic? Subsequent functions may not know how to handle that little nan in our list.
Step5: Hmmmpf. There we go again.
Step7: This seems Ok, but there are two problems here. For one, it feels like we're doing too much work! We have a repeating code pattern here. That's always a moment to go back and consider making parts of our code more generic. At the same time, this is when we need some more advanced Python concepts to get us out of trouble. We're going to define a function in a function!
Step8: Consider what happens here. The function secure_function takes a function something_dangerous as an argument and returns a new function something_safe. This new function executes something_dangerous within a try-except block to deal with the possibility of failure. Let's see how this works.
Step11: Ok, so that works! However, the documentation of safe_sqrt is not yet very useful. There is a nice library routine that may help us here
Step12: Now it is very easy to also rewrite our function computing the reciprocals safely
Step15: There is a second problem to this approach, which is a bit more subtle. How do we know where the error occured? We got two values of nan and are desperate to find out what went wrong. We'll need a little class to capture all aspects of failure.
Step18: We will adapt our earlier design for secure_function. If the given argument is a Fail, we don't even attempt to run the next function. In stead, we extend the trace of the failure, so that we can see what happened later on.
Step19: Now we can rewrite our little program entirely from scratch
Step20: See how we retain a trace of the functions that were involved in creating the failed state, even though the execution of that produced those values is entirely decoupled. This is exactly what we need to trace errors in Noodles.
Handling errors in Noodles
Noodles has the functionality of secure_function build in by the name of maybe. The following code implements the above example in terms of noodles.maybe
Step21: The maybe decorator works well together with schedule. The following workflow is full of errors!
Step22: Both the reciprocal and the square root functions will fail. Noodles is smart enough to report on both errors.`
Step23: Example
Step24: If a file does note exist, stat returns an error-code of 1.
Step25: We can wrap the execution of the stat command in a helper function.
Step26: The run function runs the given command and returns a CompletedProcess object. The check=True argument enables checking for return value of the child process. If the return value is any other then 0, a CalledProcessError is raised. Because we decorated our function with noodles.maybe, such an error will be caught and a Fail object will be returned.
Step27: We can now run this workflow and print the output in a table. | Python Code:
import sys
def something_dangerous(x):
print("computing reciprocal of", x)
return 1 / x
try:
for x in [2, 1, 0, -1]:
print("1/{} = {}".format(x, something_dangerous(x)))
except ArithmeticError as error:
print("Something went terribly wrong:", error)
Explanation: Expecting the unexpected
To handle errors properly deserves a chapter on its own in any programming book. Python gives us many ways to deal with errors fatal and otherwise: try, except, assert, if ...
Using these mechanisms in a naive way may lead to code that is littered with safety if statements and try-except blocks, just because we need to account for errors at every level in a program.
In this tutorial we'll see how we can use exceptions in a more effective way. As an added bonus we learn how to use exceptions in a manner that is compatible with the Noodles programming model. Let's try something dangerous! We'll compute the reciprocal of a list of numbers. To see what is happening, the function something_dangerous contains a print statement.
End of explanation
input_list = [2, 1, 0, -1]
reciprocals = [something_dangerous(item)
for item in input_list]
print("The reciprocal of", input_list, "is", reciprocals)
Explanation: This shows how exceptions are raised and caught, but this approach is somewhat limited. Suppose now, that we weren't expecting this expected unexpected behaviour and we wanted to compute everything before displaying our results.
End of explanation
try:
reciprocals = [something_dangerous(item)
for item in input_list]
except ArithmeticError as error:
print("Something went terribly wrong:", error)
else:
print("The reciprocal of\n\t", input_list,
"\nis\n\t", reciprocals)
Explanation: Ooops! Let's fix that.
End of explanation
import math
def something_safe(x):
try:
return something_dangerous(x)
except ArithmeticError as error:
return math.nan
reciprocals = [something_safe(item)
for item in input_list]
print("The reciprocal of\n\t", input_list,
"\nis\n\t", reciprocals)
Explanation: That's also not what we want. We wasted all this time computing nice reciprocals of numbers, only to find all of our results being thrown away because of one stupid zero in the input list. We can fix this.
End of explanation
square_roots = [math.sqrt(item) for item in reciprocals]
Explanation: That's better! We skipped right over the error and continued to more interesting results. So how are we going to make this solution more generic? Subsequent functions may not know how to handle that little nan in our list.
End of explanation
def safe_sqrt(x):
try:
return math.sqrt(x)
except ValueError as error:
return math.nan
[safe_sqrt(item) for item in reciprocals]
Explanation: Hmmmpf. There we go again.
End of explanation
def secure_function(dangerous_function):
def something_safe(x):
A safer version of something dangerous.
try:
return dangerous_function(x)
except (ArithmeticError, ValueError):
return math.nan
return something_safe
Explanation: This seems Ok, but there are two problems here. For one, it feels like we're doing too much work! We have a repeating code pattern here. That's always a moment to go back and consider making parts of our code more generic. At the same time, this is when we need some more advanced Python concepts to get us out of trouble. We're going to define a function in a function!
End of explanation
safe_sqrt = secure_function(math.sqrt)
print("โท2 =", safe_sqrt(2))
print("โท-1 =", safe_sqrt(-1))
print()
help(safe_sqrt)
Explanation: Consider what happens here. The function secure_function takes a function something_dangerous as an argument and returns a new function something_safe. This new function executes something_dangerous within a try-except block to deal with the possibility of failure. Let's see how this works.
End of explanation
import functools
def secure_function(dangerous_function):
Create a function that doesn't raise ValueErrors.
@functools.wraps(dangerous_function)
def something_safe(x):
A safer version of something dangerous.
try:
return dangerous_function(x)
except (ArithmeticError, ValueError):
return math.nan
return something_safe
safe_sqrt = secure_function(math.sqrt)
help(safe_sqrt)
Explanation: Ok, so that works! However, the documentation of safe_sqrt is not yet very useful. There is a nice library routine that may help us here: functools.wraps; this utility function sets the correct name and doc-string to our new function.
End of explanation
something_safe = secure_function(something_dangerous)
[safe_sqrt(something_safe(item)) for item in input_list]
Explanation: Now it is very easy to also rewrite our function computing the reciprocals safely:
End of explanation
class Fail:
Keep track of failures.
def __init__(self, exception, trace):
self.exception = exception
self.trace = trace
def extend_trace(self, f):
Grow a stack trace.
self.trace.append(f)
return self
def __str__(self):
return "Fail in " + " -> ".join(
f.__name__ for f in reversed(self.trace)) \
+ ":\n\t" + type(self.exception).__name__ \
+ ": " + str(self.exception)
Explanation: There is a second problem to this approach, which is a bit more subtle. How do we know where the error occured? We got two values of nan and are desperate to find out what went wrong. We'll need a little class to capture all aspects of failure.
End of explanation
def secure_function(dangerous_function):
Create a function that doesn't raise ValueErrors.
@functools.wraps(dangerous_function)
def something_safe(x):
A safer version of something dangerous.
if isinstance(x, Fail):
return x.extend_trace(dangerous_function)
try:
return dangerous_function(x)
except Exception as error:
return Fail(error, [dangerous_function])
return something_safe
Explanation: We will adapt our earlier design for secure_function. If the given argument is a Fail, we don't even attempt to run the next function. In stead, we extend the trace of the failure, so that we can see what happened later on.
End of explanation
@secure_function
def reciprocal(x):
return 1 / x
@secure_function
def square_root(x):
return math.sqrt(x)
reciprocals = map(reciprocal, input_list)
square_roots = map(square_root, reciprocals)
for x, result in zip(input_list, square_roots):
print("sqrt( 1 /", x, ") =", result)
Explanation: Now we can rewrite our little program entirely from scratch:
End of explanation
import noodles
import math
from noodles.tutorial import display_workflows
@noodles.maybe
def reciprocal(x):
return 1 / x
@noodles.maybe
def square_root(x):
return math.sqrt(x)
results = [square_root(reciprocal(x)) for x in [2, 1, 0, -1]]
for result in results:
print(str(result))
Explanation: See how we retain a trace of the functions that were involved in creating the failed state, even though the execution of that produced those values is entirely decoupled. This is exactly what we need to trace errors in Noodles.
Handling errors in Noodles
Noodles has the functionality of secure_function build in by the name of maybe. The following code implements the above example in terms of noodles.maybe:
End of explanation
@noodles.schedule
@noodles.maybe
def add(a, b):
return a + b
workflow = add(noodles.schedule(reciprocal)(0),
noodles.schedule(square_root)(-1))
display_workflows(arithmetic=workflow, prefix='errors')
Explanation: The maybe decorator works well together with schedule. The following workflow is full of errors!
End of explanation
result = noodles.run_single(workflow)
print(result)
Explanation: Both the reciprocal and the square root functions will fail. Noodles is smart enough to report on both errors.
End of explanation
!stat -t -c '%A %10s %n' /dev/null
Explanation: Example: parallel stat
Let's do an example that works with external processes. The UNIX command stat gives the status of a file
End of explanation
!stat -t -c '%A %10s %n' does-not-exist
Explanation: If a file does not exist, stat returns an error-code of 1.
End of explanation
from subprocess import run, PIPE, CalledProcessError
@noodles.schedule
@noodles.maybe
def stat_file(filename):
p = run(['stat', '-t', '-c', '%A %10s %n', filename],
check=True, stdout=PIPE, stderr=PIPE)
return p.stdout.decode().strip()
Explanation: We can wrap the execution of the stat command in a helper function.
End of explanation
files = ['/dev/null', 'does-not-exist', '/home', '/usr/bin/python3']
workflow = noodles.gather_all(stat_file(f) for f in files)
display_workflows(stat=workflow, prefix='errors')
Explanation: The run function runs the given command and returns a CompletedProcess object. The check=True argument enables checking the return value of the child process. If the return value is any other than 0, a CalledProcessError is raised. Because we decorated our function with noodles.maybe, such an error will be caught and a Fail object will be returned.
End of explanation
result = noodles.run_parallel(workflow, n_threads=4)
for file, stat in zip(files, result):
print('stat {:18} -> {}'.format(
file, stat if not noodles.failed(stat)
else 'failed: ' + stat.exception.stderr.decode().strip()))
Explanation: We can now run this workflow and print the output in a table.
End of explanation |
11,520 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm Looking for a generic way of turning a DataFrame to a nested dictionary | Problem:
import pandas as pd
df = pd.DataFrame({'name': ['A', 'A', 'B', 'C', 'B', 'A'],
'v1': ['A1', 'A2', 'B1', 'C1', 'B2', 'A2'],
'v2': ['A11', 'A12', 'B12', 'C11', 'B21', 'A21'],
'v3': [1, 2, 3, 4, 5, 6]})
def g(df):
if len(df.columns) == 1:
if df.values.size == 1: return df.values[0][0]
return df.values.squeeze()
grouped = df.groupby(df.columns[0])
d = {k: g(t.iloc[:, 1:]) for k, t in grouped}
return d
result = g(df.copy()) |
11,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: In a world where data can be collected continuously and storage costs are cheap, issues related to the growing size of interesting datasets can pose a problem unless we have the right tools for the task. Indeed, in the event where we have streaming data it might be impossible to wait until the "end" before fitting our model since it may never come. Alternatively it might be problematic to even store all of the data, scattered across many different servers, in memory before using it. Instead it would be preferable to do an update each time some new data (or a small batch of it) arrives. Similarly we might find ourselves in an offline situation where the number of training examples is very large and traditional approaches, such as gradient descent, start to become too slow for our needs.
Stochastic gradient descent (SGD) offers an easy solution to all of these problems.
In this post we explore the convergence properties of stochastic gradient descent and a few of its variants, namely
Polyak-Ruppert averaged stochastic gradient;
Adaptive Gradient, also known as AdaGrad;
Stochastic Average Gradient, also known as SAG.
Stochastic Gradient Descent for Logistic Regression
In this post we will consider the classification dataset quantum.mat that I used during an assignment for the course CPSC 540 - Machine Learning at UBC. It contains a matrix $X$ with dimensions $50000 \times 79$, as well as a vector $y$ taking values $\pm 1$, which we will classify using logistic regression. The true minimum of our model's cost function lies at $f(w) = 2.7068 \times 10^4$, but various implementations of SGD have drastically different performances as we will see.
The cost function for logistic regression with L2-regularization can be written in a form well-suited to our minimization objective
Step3: Regular Stochastic Gradient Descent
Schematically the standard SGD algorithm takes the form
Initialize weights $w$ and learning rate $\eta_t$.
Randomly permute training examples.
For $i = 1
Step4: Polyak-Ruppert Averaged Stochastic Gradient
Rather than use the information contained in the weights $w$ at iteration $t$ to determine the descent direction, it is often an improvement to use a running average instead, which keeps a memory of previous iterations
\begin{equation}
\overline{w}t = \overline{w}{t-1} - \frac{1}{t} \left( \overline{w}_{t-1} - w_t \right).
\end{equation}
Doing so results in a slight improvement over regular SGD. Note that convergence improves if we start averaging after nAvg $\geq 2$ passes in order to smooth out the initial irregularities.
Step5: Adaptive Gradient (AdaGrad)
One of the main drawbacks of the stochastic optimization methods outlined above is the need to manually choose the optimal learning rate for the problem at hand. AdaGrad, an algorithm proposed in 2011, eschews this problem by computing an appropriate learning rate for each direction $\hat{w_a} \in \mathbb{R}^d$.
AdaGrad automatically assigns a higher learning rate to rare/sparse features, which typically have a higher predictive power than common ones. We can understand this intuitively by thinking about words in a story
Step6: Stochastic Adaptive Gradient (SAG)
Last but not least, we now discuss the SAG algorithm, a variant on batching in SGD that was published in 2015. The basic implementation of this method can be explained schematically as follows | Python Code:
import numpy as np
from scipy.io import loadmat
# load data from MATLAB file
datamat = loadmat('quantum.mat')
X = datamat['X']
y = datamat['y']
class LogisticRegressionSGD(object):
def __init__(self, X, y, progTol=1e-4, nEpochs=10):
self.X = X
self.y = y
self.n, self.d = X.shape
# define convergence parameters here so all methods can use them
self.progTol = progTol
self.nEpochs = nEpochs
def LogReg(self, w, Lambda, i=None):
Logistic regression cost function with L2-regularization
Outputs negative log-likelihood (nll) and gradient (grad)
if isinstance(i, np.int64): # for individual training examples
X = self.X[[i], :]
y = self.y[[i]]
else:
X = self.X
y = self.y
yXw = y*X.dot(w)
sigmoid = 1/(1 + np.exp(-yXw))
nll = -np.sum(np.log(sigmoid)) + 0.5*Lambda*w.T.dot(w)
grad = -X.T.dot(y*(1-sigmoid)) + Lambda*w
return nll, grad
Explanation: In a world where data can be collected continuously and storage costs are cheap, issues related to the growing size of interesting datasets can pose a problem unless we have the right tools for the task. Indeed, in the event where we have streaming data it might be impossible to wait until the "end" before fitting our model since it may never come. Alternatively it might be problematic to even store all of the data, scattered across many different servers, in memory before using it. Instead it would be preferable to do an update each time some new data (or a small batch of it) arrives. Similarly we might find ourselves in an offline situation where the number of training examples is very large and traditional approaches, such as gradient descent, start to become too slow for our needs.
Stochastic gradient descent (SGD) offers an easy solution to all of these problems.
In this post we explore the convergence properties of stochastic gradient descent and a few of its variants, namely
Polyak-Ruppert averaged stochastic gradient;
Adaptive Gradient, also known as AdaGrad;
Stochastic Average Gradient, also known as SAG.
Stochastic Gradient Descent for Logistic Regression
In this post we will consider the classification dataset quantum.mat that I used during an assignment for the course CPSC 540 - Machine Learning at UBC. It contains a matrix $X$ with dimensions $50000 \times 79$, as well as a vector $y$ taking values $\pm 1$, which we will classify using logistic regression. The true minimum of our model's cost function lies at $f(w) = 2.7068 \times 10^4$, but various implementations of SGD have drastically different performances as we will see.
The cost function for logistic regression with L2-regularization can be written in a form well-suited to our minimization objective:
\begin{equation}
f(w) = \sum_{i=1}^n f_i(w), \;\;\;\;\; \text{with} \;\; f_i(w) = \log \Big( 1 + \exp \left( - y_i X_{ia} w_a \right) \Big) + \frac{\lambda}{2 n} w^2.
\end{equation}
The key idea behind SGD is to approximate the true gradient $\nabla f(w)$ by the successive gradients at individual training examples $\nabla f_i(w)$, which naturally befits online learning. The rationale for doing so is that it is very costly to compute the exact direction of the gradient, whereas a good but noisy estimate can be obtained by looking at a few examples only. This noise has the added benefit of preventing SGD from getting stuck in the shallow local minima that might be present for non-convex optimization objectives (such as neural networks), at the cost of never truly converging to a minimum but rather in a neighborhood around it.
We now proceed to build a LogisticRegressionSGD() class that implements the logistic regression cost function written above. We will then include different stochastic optimization methods and see how close they can get to the true minimum after 10 epochs.
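For reference, differentiating $f_i(w)$ gives the per-example gradient that the LogReg method above returns (here $\sigma(z) = 1/(1+e^{-z})$, and the callers pass the regularization weight as $\lambda/n$ when evaluating a single example):
$$ \nabla f_i(w) = - y_i \, x_i \left[ 1 - \sigma\!\left( y_i \, x_i^{T} w \right) \right] + \frac{\lambda}{n} w, $$
which is exactly the expression that grad = -X.T.dot(y*(1-sigmoid)) + Lambda*w implements.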
End of explanation
def RegularSGD(self, case):
# Initialize
n = self.n
w = np.zeros((self.d, 1))
w_old = w
Lambda = 1
# Randomly shuffle training data
arr = np.arange(0, n)
np.random.shuffle(arr) # shuffles arr directly
for t in range(1, self.nEpochs*n + 1):
# Compute nll and grad for one random training example
nll, grad = self.LogReg(w=w, Lambda=Lambda/n, i=arr[np.mod(t, n)])
if case==1:
eta = 1/(Lambda*t)
elif case==2:
eta = 1e-4
elif case==3:
eta = np.sqrt(n)/(np.sqrt(n) + t) # step size
else:
print("The variable 'case' is not specified correctly; abort.")
break
w = w - eta*grad # gradient step
# One epoch has passed: check for convergence
if np.mod(t, n) == 0:
change = np.linalg.norm(w-w_old, ord=np.inf)
print('Passes = %d, function = %e, change = %f' %((t+1)/n, self.LogReg(w=w, Lambda=Lambda)[0], change))
if change < self.progTol:
print('Parameters changed by less than progress tolerance on pass')
break
np.random.shuffle(arr) # reshuffle
w_old = w
# Add method to our class
LogisticRegressionSGD.RegularSGD = RegularSGD
print('---------------------------------------------')
print(' Case 1: best rate for worst case scenario ')
print('---------------------------------------------')
LogisticRegressionSGD(X, y).RegularSGD(case=1)
print('---------------------------------------------')
print(' Case 2: small and constant step size ')
print('---------------------------------------------')
LogisticRegressionSGD(X, y).RegularSGD(case=2)
print('---------------------------------------------')
print(' Case 3: monotonously decreasing step size ')
print('---------------------------------------------')
LogisticRegressionSGD(X, y).RegularSGD(case=3)
Explanation: Regular Stochastic Gradient Descent
Schematically the standard SGD algorithm takes the form
Initialize weights $w$ and learning rate $\eta_t$.
Randomly permute training examples.
For $i = 1 : n$ do $w \leftarrow w - \eta_t \nabla f_i(w)$.
Repeat step 2 until convergence.
Many theory papers use the step size $\eta_t = 1/\lambda t$, which offers the best convergence rate in the worst case scenario. However choosing this learning rate typically leads the SGD algorithm in regions of parameter space afflicted by numerical overflow of the cost function before it ultimately "converges" to $f(w) \geq 4.5 \times 10^4$, far away from the global minimum.
Alternatives include choosing a constant learning rate (we found that $\eta = 10^{-4}$ gave reasonable results) or an iteration-dependent rate that slowly converges to 0, such as
\begin{equation}
\eta_t = \frac{\sqrt{n}}{\sqrt{n} + t}.
\end{equation}
We find that the last two approaches yield similar results, although the latter requires the fine-tuning of both the numerator and denominator in order to work optimally, and also makes it hard to decide when to stop since later iterations move very slowly.
End of explanation
def AverageSGD(self, nAvg=2):
# Initialize
n = self.n
w = np.zeros((self.d, 1))
w_old = w
w_avg = w # averaged weights
Lambda = 1
nPasses = 0
# Randomly shuffle training data
arr = np.arange(0, n)
np.random.shuffle(arr) # shuffles arr directly
for t in range(1, self.nEpochs*n + 1):
# Compute nll and grad for one random training example
nll, grad = self.LogReg(w=w, Lambda=Lambda/n, i=arr[np.mod(t, n)])
eta = 1e-4 # step size
w = w - eta*grad # gradient step
if nPasses >= nAvg:
w_avg = w_avg - 1/(t-nAvg*n+1)*(w_avg - w)
# One epoch has passed: check for convergence
if np.mod(t, n) == 0:
nPasses = nPasses + 1
change = np.linalg.norm(w-w_old, ord=np.inf)
print('Passes = %d, function = %e, change = %f' %((t+1)/n, self.LogReg(w=w, Lambda=Lambda)[0], change))
if change < self.progTol:
print('Parameters changed by less than progress tolerance on pass')
break
np.random.shuffle(arr) # reshuffle
w_old = w
LogisticRegressionSGD.AverageSGD = AverageSGD
LogisticRegressionSGD(X, y).AverageSGD()
Explanation: Polyak-Ruppert Averaged Stochastic Gradient
Rather than use the information contained in the weights $w$ at iteration $t$ to determine the descent direction, it is often an improvement to use a running average instead, which keeps a memory of previous iterations
\begin{equation}
\overline{w}_t = \overline{w}_{t-1} - \frac{1}{t} \left( \overline{w}_{t-1} - w_t \right).
\end{equation}
Doing so results in a slight improvement over regular SGD. Note that convergence improves if we start averaging after nAvg $\geq 2$ passes in order to smooth out the initial irregularities.
End of explanation
def AdaGrad(self, eta = 0.025, delta=1e-3):
# Initialize
n = self.n
w = np.zeros((self.d, 1))
w_old = w
Lambda = 1
# keep sum of squared gradients in memory
sumGrad_sq = 0
# Randomly shuffle training data
arr = np.arange(0, n)
np.random.shuffle(arr) # shuffles arr directly
for t in range(1, self.nEpochs*n + 1):
# Compute nll and grad for one random training example
nll, grad = self.LogReg(w=w, Lambda=Lambda/n, i=arr[np.mod(t, n)])
sumGrad_sq = sumGrad_sq + grad**2
D = np.diag(1/np.sqrt(delta + sumGrad_sq.ravel()))
w = w - eta*D.dot(grad) # gradient step
# One epoch has passed: check for convergence
if np.mod(t, n) == 0:
change = np.linalg.norm(w-w_old, ord=np.inf)
print('Passes = %d, function = %e, change = %f' %((t+1)/n, self.LogReg(w=w, Lambda=Lambda)[0], change))
if change < self.progTol:
print('Parameters changed by less than progress tolerance on pass')
break
np.random.shuffle(arr) # reshuffle
w_old = w
LogisticRegressionSGD.AdaGrad = AdaGrad
LogisticRegressionSGD(X, y).AdaGrad()
Explanation: Adaptive Gradient (AdaGrad)
One of the main drawbacks of the stochastic optimization methods outlined above is the need to manually choose the optimal learning rate for the problem at hand. AdaGrad, an algorithm proposed in 2011, eschews this problem by computing an appropriate learning rate for each direction $\hat{w_a} \in \mathbb{R}^d$.
AdaGrad automatically assigns a higher learning rate to rare/sparse features, which typically have a higher predictive power than common ones. We can understand this intuitively by thinking about words in a story: rare words like Daenerys and dragons provide significantly more information and context for the audience of Game of Thrones than common ones such as the or a. Therefore AdaGrad ensures that the most predictive features have larger updates (i.e. the associated weights increase/decrease proportionally to their importance) than the ones providing irrelevant information.
The weight update for AdaGrad is given by
\begin{equation}
w_{t+1} = w_t - \eta_t D_t \nabla f(w_t),
\end{equation}
where the diagonal matrix $D_t$ has elements
\begin{equation}
(D_t)_{jj} = \frac{1}{\sqrt{\delta + \sum_{k=0}^t \nabla_j f_{i_k}(w_k)^2}}.
\end{equation}
Here $i_k$ denotes example $i$ chosen randomly on iteration $k$, $\nabla_j$ is the $j$th element of the gradient, and $\delta$ is a small number to prevent division by 0. All we need to do is fiddle with the constant learning rate $\eta_t = \eta$ since $D_t$ automatically takes care of assigning higher importance to sparse features.
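A small implementation note: because $D_t$ is diagonal, the update can be applied element-wise, which avoids building the dense $d \times d$ matrix that np.diag constructs in the code above. A sketch of the equivalent step:
# same result as eta*D.dot(grad), without forming the diagonal matrix
w = w - eta * grad / np.sqrt(delta + sumGrad_sq)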
End of explanation
def SAG(self, case=1):
# Initialize
n = self.n
d = self.d
w = np.zeros((d, 1))
w_old = w
Lambda = 1
# Randomly shuffle training data
arr = np.arange(0, n)
np.random.shuffle(arr) # shuffles arr directly
# SAG parameters
G = np.zeros((n, d))
dvec = np.zeros((d, 1))
L = 0.25*np.max(np.sum(self.X**2, axis=1)) + Lambda
eta = 1/L
# strange property of random numbers with SAG
if case==1:
# much faster to generate all at once
arr = np.random.randint(n, size=(n,))
elif case==2:
arr = np.arange(0, n)
np.random.shuffle(arr) # shuffles arr directly
else:
print("The variable 'case' is not specified correctly; abort.")
return
for t in range(1, self.nEpochs*n + 1):
# Compute grad for one random training example
i = arr[np.mod(t, n)]
# i = np.random.randint(n)
grad = self.LogReg(w=w, Lambda=Lambda/n, i=i)[1]
# SAG algorithm
dvec = dvec - G[[i], :].T + grad
G[[i], :] = grad.T
w = w - eta*dvec/n
# One epoch has passed: check for convergence
if np.mod(t, n) == 0:
change = np.linalg.norm(w-w_old, ord=np.inf)
print('Passes = %d, function = %e, change = %f' %((t+1)/n, self.LogReg(w=w, Lambda=Lambda)[0], change))
if change < self.progTol:
print('Parameters changed by less than progress tolerance on pass')
break
w_old = w
# careful with random numbers
if case==1:
arr = np.random.randint(n, size=(n,))
elif case==2:
np.random.shuffle(arr) # shuffles arr directly
LogisticRegressionSGD.SAG = SAG
print('-----------------------------------------------------------------')
print(' Case 1: completely random walk through training examples ')
print('-----------------------------------------------------------------')
LogisticRegressionSGD(X, y).SAG(case=1)
print('-----------------------------------------------------------------')
print(' Case 2: visiting every training example exactly once per pass ')
print('-----------------------------------------------------------------')
LogisticRegressionSGD(X, y).SAG(case=2)
Explanation: Stochastic Adaptive Gradient (SAG)
Last but not least, we now discuss the SAG algorithm, a variant on batching in SGD that was published in 2015. The basic implementation of this method can be explained schematically as follows:
Randomly select $i_t$ and compute the gradient $\nabla f_{i_t} (w_t)$.
Update the weights by taking a step towards the average of all the gradients computed so far
$$ w_{t+1} = w_t - \eta_t \left( \frac{1}{n} \sum_{i=1}^n G_i^t \right), $$
where $G_i^t$ keeps in memory all the gradients $\nabla f_{i_t} (w)$ computed before iteration $t$ (with replacement if training example $i_t$ is visited repeatedly).
Repeat.
Additionally, in contrast the the methods outlined before, SAG also leverages a property we have not used so far: Lipschitz continuity in the gradient of convex cost functions $f$
$$ \lVert \nabla f(x) - \nabla f(y) \lVert \; \leq \; L \lVert x - y \lVert. $$
By choosing the learning rate to be inversely proportional to the maximal Lipschitz constant over all training examples
$$ L = \frac{1}{4} \max_{1 \leq i \leq n} \left( \lVert x^i \lVert^2 \right) + \lambda, \;\;\; \eta_t = 1/L, $$
(here $x^i$ denotes a row of $X$), SAG achieves vastly superior convergence than all of the methods discussed above. In fact it is the only method of the ones outlined in this post that converges to the global minimum to 5 significant figures.
A caveat on randomness
The implementation of SAG below approaches the random updating of the gradients in two different ways, with surprising consequences.
In case = 1, the index $i_t$ is sampled randomly with replacement, meaning that not all training examples are necessarily visited after an epoch has been completed. This choice of sampling leads to the best convergence properties.
In case = 2, the index $i_t$ is sampled randomly without replacement, such that all training examples are cycled through exactly once during each pass. It turns out that simply reshuffling the cycle after each pass, the method of choice for all the methods above, actually yields a much worse performance for SAG.
It can be verified that random sampling with replacement barely affects the other SGD algorithms, but it remains somewhat of a mystery to me why this choice in randomness affects convergence so drastically.
End of explanation |
11,522 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Sklearn ROC AUC
| Python Code::
from sklearn.metrics import roc_auc_score
roc_auc = roc_auc_score(y_test, y_pred)
|
11,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: Check that the folder exists
Step4: List of data files in data_dir
Step5: Data load
Initial loading of the data
Step6: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step7: We need to define some parameters
Step8: We should check if everything is OK with an alternation histogram
Step9: If the plot looks good we can apply the parameters with
Step10: Measurements infos
All the measurement data is in the d variable. We can print it
Step11: Or check the measurements duration
Step12: Compute background
Compute the background using automatic threshold
Step13: Burst search and selection
Step14: Preliminary selection and plots
Step15: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods
Step16: Zero threshold on nd
Select bursts with
Step17: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
Step18: Selection 2
Bursts are here weighted using weights $w$
Step19: Selection 3
Bursts are here selected according to
Step20: Save data to file
Step21: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step22: This is just a trick to format the different variables | Python Code:
ph_sel_name = "all-ph"
data_id = "17d"
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:37:24 2017
Duration: 10 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
Explanation: Data folder:
End of explanation
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Check that the folder exists:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
file_list
## Selection for POLIMI 2012-12-6 dataset
# file_list.pop(2)
# file_list = file_list[1:-2]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for P.E. 2012-12-6 dataset
# file_list.pop(1)
# file_list = file_list[:-1]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'AexAem': Ph_sel(Aex='Aem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files in data_dir:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurement info
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
from mpl_toolkits.axes_grid1 import AxesGrid
import lmfit
print('lmfit version:', lmfit.__version__)
assert d.dir_ex == 0
assert d.leakage == 0
d.burst_search(m=10, F=6, ph_sel=ph_sel)
print(d.ph_sel, d.num_bursts)
ds_sa = d.select_bursts(select_bursts.naa, th1=30)
ds_sa.num_bursts
Explanation: Burst search and selection
End of explanation
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
ds_sas0 = ds_sa.select_bursts(select_bursts.S, S2=0.10)
ds_sas = ds_sa.select_bursts(select_bursts.S, S2=0.15)
ds_sas2 = ds_sa.select_bursts(select_bursts.S, S2=0.20)
ds_sas3 = ds_sa.select_bursts(select_bursts.S, S2=0.25)
ds_st = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
ds_sas.num_bursts
dx = ds_sas0
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas2
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas3
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
plt.title('(nd + na) for A-only population using different S cutoff');
dx = ds_sa
alex_jointplot(dx);
dplot(ds_sa, hist_S)
Explanation: Preliminary selection and plots
End of explanation
dx = ds_sa
bin_width = 0.03
bandwidth = 0.03
bins = np.r_[-0.2 : 1 : bin_width]
x_kde = np.arange(bins.min(), bins.max(), 0.0002)
## Weights
weights = None
## Histogram fit
fitter_g = mfit.MultiFitter(dx.S)
fitter_g.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_g.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_hist_orig = fitter_g.hist_pdf
S_2peaks = fitter_g.params.loc[0, 'p1_center']
dir_ex_S2p = S_2peaks/(1 - S_2peaks)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p)
## KDE
fitter_g.calc_kde(bandwidth=bandwidth)
fitter_g.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak = fitter_g.kde_max_pos[0]
dir_ex_S_kde = S_peak/(1 - S_peak)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=True)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak*100));
## 2-Asym-Gaussian
fitter_ag = mfit.MultiFitter(dx.S)
fitter_ag.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_ag.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.1, p2_center=0.4))
#print(fitter_ag.fit_obj[0].model.fit_report())
S_2peaks_a = fitter_ag.params.loc[0, 'p1_center']
dir_ex_S2pa = S_2peaks_a/(1 - S_2peaks_a)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2pa)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_ag, ax=ax[1])
ax[1].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_a*100));
Explanation: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods:
- Histogram fit with 2 Gaussians or with 2 asymmetric Gaussians
(an asymmetric Gaussian has right- and left-side of the peak
decreasing according to different sigmas).
- KDE maximum
In the following we apply these methods using different selection
or weighting schemes to reduce the contribution of the FRET population and make
fitting of the A-only population easier.
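(In all of the variants below, the fitted A-only stoichiometry $S_{AO}$ is converted to the direct-excitation coefficient exactly as in the code cells above, i.e. $$d_{ex} = \frac{S_{AO}}{1 - S_{AO}},$$ so the methods differ only in how $S_{AO}$ itself is estimated.)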
Even selection
Here A-only and FRET population are evenly selected.
End of explanation
dx = ds_sa.select_bursts(select_bursts.nd, th1=-100, th2=0)
fitter = bext.bursts_fitter(dx, 'S')
fitter.fit_histogram(model = mfit.factory_gaussian(center=0.1))
S_1peaks_th = fitter.params.loc[0, 'center']
dir_ex_S1p = S_1peaks_th/(1 - S_1peaks_th)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S1p)
mfit.plot_mfit(fitter)
plt.xlim(-0.1, 0.6)
Explanation: Zero threshold on nd
Select bursts with:
$$n_d < 0$$.
End of explanation
dx = ds_sa
## Weights
weights = 1 - mfit.gaussian(dx.S[0], fitter_g.params.loc[0, 'p2_center'], fitter_g.params.loc[0, 'p2_sigma'])
weights[dx.S[0] >= fitter_g.params.loc[0, 'p2_center']] = 0
## Histogram fit
fitter_w1 = mfit.MultiFitter(dx.S)
fitter_w1.weights = [weights]
fitter_w1.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w1.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w1 = fitter_w1.params.loc[0, 'p1_center']
dir_ex_S2p_w1 = S_2peaks_w1/(1 - S_2peaks_w1)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w1)
## KDE
fitter_w1.calc_kde(bandwidth=bandwidth)
fitter_w1.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w1 = fitter_w1.kde_max_pos[0]
dir_ex_S_kde_w1 = S_peak_w1/(1 - S_peak_w1)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w1)
def plot_weights(x, weights, ax):
ax2 = ax.twinx()
x_sort = x.argsort()
ax2.plot(x[x_sort], weights[x_sort], color='k', lw=4, alpha=0.4)
ax2.set_ylabel('Weights');
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w1, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w1*100))
mfit.plot_mfit(fitter_w1, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w1*100));
Explanation: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
End of explanation
## Weights
sizes = dx.nd[0] + dx.na[0] #- dir_ex_S_kde_w3*dx.naa[0]
weights = dx.naa[0] - abs(sizes)
weights[weights < 0] = 0
## Histogram
fitter_w4 = mfit.MultiFitter(dx.S)
fitter_w4.weights = [weights]
fitter_w4.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w4.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w4 = fitter_w4.params.loc[0, 'p1_center']
dir_ex_S2p_w4 = S_2peaks_w4/(1 - S_2peaks_w4)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w4)
## KDE
fitter_w4.calc_kde(bandwidth=bandwidth)
fitter_w4.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w4 = fitter_w4.kde_max_pos[0]
dir_ex_S_kde_w4 = S_peak_w4/(1 - S_peak_w4)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w4)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w4, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w4*100))
mfit.plot_mfit(fitter_w4, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w4*100));
Explanation: Selection 2
Bursts are here weighted using weights $w$:
$$w = n_{aa} - |n_a + n_d|$$
End of explanation
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
print(ds_saw.num_bursts)
dx = ds_saw
## Weights
weights = None
## 2-Gaussians
fitter_w5 = mfit.MultiFitter(dx.S)
fitter_w5.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w5 = fitter_w5.params.loc[0, 'p1_center']
dir_ex_S2p_w5 = S_2peaks_w5/(1 - S_2peaks_w5)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w5)
## KDE
fitter_w5.calc_kde(bandwidth=bandwidth)
fitter_w5.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w5 = fitter_w5.kde_max_pos[0]
S_2peaks_w5_fiterr = fitter_w5.fit_res[0].params['p1_center'].stderr
dir_ex_S_kde_w5 = S_peak_w5/(1 - S_peak_w5)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w5)
## 2-Asym-Gaussians
fitter_w5a = mfit.MultiFitter(dx.S)
fitter_w5a.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5a.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.05, p2_center=0.3))
S_2peaks_w5a = fitter_w5a.params.loc[0, 'p1_center']
dir_ex_S2p_w5a = S_2peaks_w5a/(1 - S_2peaks_w5a)
#print(fitter_w5a.fit_obj[0].model.fit_report(min_correl=0.5))
print('Fitted direct excitation (na/naa) [2-Asym-Gauss]:', dir_ex_S2p_w5a)
fig, ax = plt.subplots(1, 3, figsize=(19, 4.5))
mfit.plot_mfit(fitter_w5, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5*100))
mfit.plot_mfit(fitter_w5, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w5*100));
mfit.plot_mfit(fitter_w5a, ax=ax[2])
mfit.plot_mfit(fitter_g, ax=ax[2], plot_model=False, plot_kde=False)
ax[2].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5a*100));
Explanation: Selection 3
Bursts are here selected according to:
$$n_{aa} - |n_a + n_d| > 30$$
End of explanation
sample = data_id
n_bursts_aa = ds_sas.num_bursts[0]
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_aa dir_ex_S1p dir_ex_S_kde dir_ex_S2p dir_ex_S2pa '
'dir_ex_S2p_w1 dir_ex_S_kde_w1 dir_ex_S_kde_w4 dir_ex_S_kde_w5 dir_ex_S2p_w5 dir_ex_S2p_w5a '
'S_2peaks_w5 S_2peaks_w5_fiterr\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-dir_ex_aa-fit-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
11,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-Fold Cross Validation
Step1: A single train/test split is made easy with the train_test_split function in the model_selection module
Step2: K-Fold cross validation is just as easy; let's use a K of 5
Step3: Our model is even better than we thought! Can we do better? Let's try a different kernel (poly)
Step4: No! The more complex polynomial kernel produced lower accuracy than a simple linear kernel. The polynomial kernel is overfitting. But we couldn't have told that with a single train/test split | Python Code:
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn import datasets
from sklearn import svm
iris = datasets.load_iris()
Explanation: K-Fold Cross Validation
End of explanation
# Split the iris data into train/test data sets with 40% reserved for testing
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.4, random_state=0)
# Build an SVC model for predicting iris classifications using training data
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
# Now measure its performance with the test data
clf.score(X_test, y_test)
Explanation: A single train/test split is made easy with the train_test_split function in the model_selection module:
End of explanation
# We give cross_val_score a model, the entire data set and its "real" values, and the number of folds:
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
# Print the accuracy for each fold:
print(scores)
# And the mean accuracy of all 5 folds:
print(scores.mean())
Explanation: K-Fold cross validation is just as easy; let's use a K of 5:
End of explanation
clf = svm.SVC(kernel='poly', C=1).fit(X_train, y_train)
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
print(scores)
print(scores.mean())
Explanation: Our model is even better than we thought! Can we do better? Let's try a different kernel (poly):
End of explanation
# Build an SVC model for predicting iris classifications using training data
clf = svm.SVC(kernel='poly', C=1).fit(X_train, y_train)
# Now measure its performance with the test data
clf.score(X_test, y_test)
Explanation: No! The more complex polynomial kernel produced lower accuracy than a simple linear kernel. The polynomial kernel is overfitting. But we couldn't have told that with a single train/test split:
End of explanation |
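As an illustrative extension (not part of the original notebook), the same comparison can be wrapped in a small loop so that every kernel is scored with the identical 5-fold cross-validation; the set of kernels below is just an example:
# Illustrative sketch: score several kernels with the same 5-fold CV
for kernel in ('linear', 'poly', 'rbf'):
    clf = svm.SVC(kernel=kernel, C=1)
    scores = cross_val_score(clf, iris.data, iris.target, cv=5)
    print(kernel, scores.mean())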
11,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to Efficiently Read BigQuery Data from TensorFlow 2.3
Learning Objectives
Build a benchmark model.
Find the breakoff point for Keras.
Training a TensorFlow/Keras model that reads from BigQuery.
Load TensorFlow model into BigQuery.
Introduction
In this notebook, you learn
"How to efficiently read BigQuery data from TensorFlow 2.x"
The example problem is to find credit card fraud from the dataset published in
Step1: Find the breakoff point for Keras
When we do the training in Keras & TensorFlow, we need to find the place to split the dataset and how to weight the imbalanced data.
(BigQuery ML did that for us because we specified 'seq' as the split method and auto_class_weights to be True).
Step2: The time cutoff is 144803 and the Keras model's output bias needs to be set at -6.36
The class weights need to be 289.4 and 0.5
Training a TensorFlow/Keras model that reads from BigQuery
Create the dataset from BigQuery
Step3: Create Keras model
Step4: Load TensorFlow model into BigQuery
Now that we have trained a TensorFlow model off BigQuery data ...
let's load the model into BigQuery and use it for batch prediction!
Step5: Now predict with this model (the reason it's called 'd4' is because the output node of my Keras model was called 'd4').
To get probabilities, etc. we'd have to add the corresponding outputs to the Keras model. | Python Code:
%%bash
# create output dataset
bq mk advdata
%%bigquery
CREATE OR REPLACE MODEL advdata.ulb_fraud_detection
TRANSFORM(
* EXCEPT(Amount),
SAFE.LOG(Amount) AS log_amount
)
OPTIONS(
INPUT_LABEL_COLS=['class'],
AUTO_CLASS_WEIGHTS = TRUE,
DATA_SPLIT_METHOD='seq',
DATA_SPLIT_COL='Time',
MODEL_TYPE='logistic_reg'
) AS
SELECT
*
FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection`
%%bigquery
# Use the ML.EVALUATE function to evaluate model metrics
SELECT * FROM TODO: ___________(MODEL advdata.ulb_fraud_detection)
%%bigquery
SELECT predicted_class_probs, Class
# The ML.PREDICT function is used to predict outcomes using the model
FROM TODO: ___________( MODEL advdata.ulb_fraud_detection,
(SELECT * FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection` WHERE Time = 85285.0)
)
Explanation: How to Efficiently Read BigQuery Data from TensorFlow 2.3
Learning Objectives
Build a benchmark model.
Find the breakoff point for Keras.
Training a TensorFlow/Keras model that reads from BigQuery.
Load TensorFlow model into BigQuery.
Introduction
In this notebook, you learn
"How to efficiently read BigQuery data from TensorFlow 2.x"
The example problem is to find credit card fraud from the dataset published in:
<i>
Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015
</i>
and available in BigQuery at <pre>bigquery-public-data.ml_datasets.ulb_fraud_detection</pre>
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Build a benchmark model
In order to compare things, we will do a simple logistic regression in BigQuery ML.
Note that we are using all the columns in the dataset as predictors (except for the Time and Class columns).
The Time column is used to split the dataset 80:20 with the first 80% used for training and the last 20% used for evaluation.
We will also have BigQuery ML automatically balance the weights.
Because the Amount column has a huge range, we take the log of it in preprocessing.
End of explanation
%%bigquery
WITH counts AS (
SELECT
APPROX_QUANTILES(Time, 5)[OFFSET(4)] AS train_cutoff
, COUNTIF(CLASS > 0) AS pos
, COUNTIF(CLASS = 0) AS neg
FROM `bigquery-public-data`.ml_datasets.ulb_fraud_detection
)
SELECT
train_cutoff
, SAFE.LOG(SAFE_DIVIDE(pos,neg)) AS output_bias
, 0.5*SAFE_DIVIDE(pos + neg, pos) AS weight_pos
, 0.5*SAFE_DIVIDE(pos + neg, neg) AS weight_neg
FROM TODO: ___________ # Table Name
Explanation: Find the breakoff point for Keras
When we do the training in Keras & TensorFlow, we need to find the place to split the dataset and how to weight the imbalanced data.
(BigQuery ML did that for us because we specified 'seq' as the split method and auto_class_weights to be True).
End of explanation
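For orientation, the same arithmetic as a small plain-Python sketch (not part of the lab; the pos/neg counts are placeholders chosen to be consistent with the values quoted later in the notebook):
import math
pos, neg = 492, 284315                 # placeholder counts standing in for COUNTIF(CLASS > 0) / COUNTIF(CLASS = 0)
output_bias = math.log(pos / neg)      # mirrors SAFE.LOG(SAFE_DIVIDE(pos, neg)), roughly -6.36
weight_pos = 0.5 * (pos + neg) / pos   # mirrors 0.5*SAFE_DIVIDE(pos + neg, pos), roughly 289.4
weight_neg = 0.5 * (pos + neg) / neg   # mirrors 0.5*SAFE_DIVIDE(pos + neg, neg), roughly 0.5
print(output_bias, weight_pos, weight_neg)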
# import necessary libraries
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def features_and_labels(features):
label = features.pop('Class') # this is what we will train for
return features, label
def read_dataset(client, row_restriction, batch_size=2048):
GCP_PROJECT_ID='qwiklabs-gcp-03-5b2f0816822f' # CHANGE
COL_NAMES = ['Time', 'Amount', 'Class'] + ['V{}'.format(i) for i in range(1,29)]
COL_TYPES = [dtypes.float64, dtypes.float64, dtypes.int64] + [dtypes.float64 for i in range(1,29)]
DATASET_GCP_PROJECT_ID, DATASET_ID, TABLE_ID, = 'bigquery-public-data.ml_datasets.ulb_fraud_detection'.split('.')
bqsession = client.read_session(
"projects/" + GCP_PROJECT_ID,
DATASET_GCP_PROJECT_ID, TABLE_ID, DATASET_ID,
COL_NAMES, COL_TYPES,
requested_streams=2,
row_restriction=row_restriction)
dataset = bqsession.parallel_read_rows()
return dataset.prefetch(1).map(features_and_labels).shuffle(batch_size*10).batch(batch_size)
client = BigQueryClient()
temp_df = TODO: ___________(client, 'Time <= 144803', 2) # Function Name
for row in temp_df:
print(row)
break
train_df = read_dataset(client, 'Time <= 144803', 2048)
eval_df = read_dataset(client, 'Time > 144803', 2048)
Explanation: The time cutoff is 144803 and the Keras model's output bias needs to be set at -6.36
The class weights need to be 289.4 and 0.5
Training a TensorFlow/Keras model that reads from BigQuery
Create the dataset from BigQuery
End of explanation
metrics = [
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='roc_auc'),
]
# create inputs, and pass them into appropriate types of feature columns (here, everything is numeric)
inputs = {
'V{}'.format(i) : tf.keras.layers.Input(name='V{}'.format(i), shape=(), dtype='float64') for i in range(1, 29)
}
inputs['Amount'] = tf.keras.layers.Input(name='Amount', shape=(), dtype='float64')
input_fc = [tf.feature_column.numeric_column(colname) for colname in inputs.keys()]
# transformations. only the Amount is transformed
transformed = inputs.copy()
transformed['Amount'] = tf.keras.layers.Lambda(
lambda x: tf.math.log(tf.math.maximum(x, 0.01)), name='log_amount')(inputs['Amount'])
input_layer = tf.keras.layers.DenseFeatures(input_fc, name='inputs')(transformed)
# Deep learning model
d1 = tf.keras.layers.Dense(16, activation='relu', name='d1')(input_layer)
d2 = tf.keras.layers.Dropout(0.25, name='d2')(d1)
d3 = tf.keras.layers.Dense(16, activation='relu', name='d3')(d2)
output = tf.keras.layers.Dense(1, activation='sigmoid', name='d4', bias_initializer=tf.keras.initializers.Constant(-6.36))(d3)  # output bias from the breakoff-point query above
model = tf.keras.Model(inputs, output)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=metrics)
tf.keras.utils.plot_model(model, rankdir='LR')
class_weight = {0: 0.5, 1: 289.4}
# Trains the model for a fixed number of epochs
history = TODO: _________(train_df, validation_data=eval_df, epochs=20, class_weight=class_weight)
import matplotlib.pyplot as plt
plt.plot(history.history['val_roc_auc']);
plt.xlabel('Epoch');
plt.ylabel('AUC');
Explanation: Create Keras model
End of explanation
BUCKET='<your-bucket>' # CHANGE TO SOMETHING THAT YOU OWN
model.save('gs://{}/bqexample/export'.format(BUCKET))
%%bigquery
CREATE OR REPLACE MODEL advdata.keras_fraud_detection
OPTIONS(model_type='tensorflow', model_path='gs://qwiklabs-gcp-03-5b2f0816822f/bqexample/export/*')
Explanation: Load TensorFlow model into BigQuery
Now that we have trained a TensorFlow model off BigQuery data ...
let's load the model into BigQuery and use it for batch prediction!
End of explanation
%%bigquery
SELECT d4, Class
FROM ML.PREDICT( MODEL advdata.keras_fraud_detection,
(SELECT * FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection` WHERE Time = 85285.0)
)
Explanation: Now predict with this model (the reason it's called 'd4' is because the output node of my Keras model was called 'd4').
To get probabilities, etc. we'd have to add the corresponding outputs to the Keras model.
End of explanation |
11,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Throughout this section we fix a constant_short_rate discounting object.
Step2: geometric_brownian_motion
To instantiate any kind of model class, you need a market_environment object containing a minimum set of data (depending on the specific model class).
Step3: For the geometric Browniam motion class, the minimum set is as follows with regard to the constant parameter values. Here, we simply make assumptions, in practice the single values would, for example be retrieved from a data service provider like Thomson Reuters or Bloomberg. The frequency parameter is according to the pandas frequency conventions (cf. <a href="http
Step4: Every model class needs a discounting object since this defines the risk-neutral drift of the risk factor.
Step5: The instantiation of a model class is then accomplished by providing a name as a string object and the respective market_environment object.
Step6: The generate_time_grid method generates an ndarray object of datetime objects given the specifications in the market environment. This represents the discretization of the time interval between the pricing_date and the final_date. This method does not need to be called actively.
Step7: The simulation itself is initiated by a call of the method get_instrument_values. It returns an ndarray object containing the simulated paths for the risk factor.
Step8: These can, for instance, be visualized easily. First some plotting parameter specifications we want to use throughout.
Step9: For easy plotting, we put the data with the time_grid information into a pandas DataFrame object.
Step10: The following visualizes the first 10 paths of the simulation.
Step11: jump_diffusion
The next model is the jump diffusion model from Merton (1976) adding a log-normally distributed jump component to the geometric Brownian motion. Three more parameter values are needed
Step12: The instantiation of the model class and usage then is the same as before.
Step13: stochastic_volatility
Another important financial model is the stochastic volatility model according to Heston (1993). Compared to the geometric Brownian motion, this model class needs four more parameter values.
Step14: Again, the instantiation and usage of this model class are essentially the same.
Step15: The following visualizes 10 simulated paths for the risk factor process.
Step16: This model class has a second set of simulated paths, namely for the variance process.
Step17: stoch_vol_jump_diffusion
The next model class, i.e. stoch_vol_jump_diffusion, combines stochastic volatility with a jump diffusion according to Bates (1996). Our market environment object me contains already all parameters needed for the instantiation of this model.
Step18: As with the stochastic_volatility class, this class generates simulated paths for both the risk factor and variance process.
Step19: square_root_diffusion
The square_root_diffusion model class is based on the square-root diffusion according to Cox-Ingersoll-Ross (1985). This class is often used to model stochastic short rates or a volatility process (eg like the VSTOXX volatility index). The model needs the following set of parameters
Step20: As before, the handling of the model class is the same, making it easy to simulate paths given the parameter specifications and visualize them.
Step21: square_root_jump_diffusion
Experimental Status
Building on the square-root diffusion, there is a square-root jump diffusion adding a log-normally distributed jump component. The following parameters might be for a volatility index, for example. In this case, the major risk might be a large positive jump in the index. The following model parameters are needed
Step22: Once the square_root_jump_diffusion class is instantiated, the handling of the resulting object is the same as with the other model classes.
Step23: square_root_jump_diffusion_plus
Experimental Status
This model class further enhances the square_root_jump_diffusion class by adding a deterministic shift approach according to Brigo-Mercurio (2001) to account for a market given term structure (e.g. in volatility, interest rates). Let us define a simple term structure as follows
Step24: The method generate_shift_base calibrates the square-root diffusion to the given term structure (varying the parameters kappa, theta and volatility).
Step25: The results are shift_base values, i.e. the difference between the model and market implied forward rates.
Step26: The method update_shift_values then calculates deterministic shift values for the relevant time grid by interpolation of the shift_base values.
Step27: When simulating the process, the model forward rates ...
Step28: ... are then shifted by the shift_values to better match the term structure.
Step29: The simulated paths then include the deterministic shift.
Step30: The effect might not be immediately visible in the paths plot, however, the mean of the simulated values in this case is higher by about 1 point compared to the square_root_jump_diffusion simulation without deterministic shift. | Python Code:
from dx import *
import seaborn as sns; sns.set()
np.set_printoptions(precision=3)
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Model Classes
The model classes represent the fundamental building blocks to model a financial market. They are used to represent the fundamental risk factors driving uncertainty (e.g. a stock, an equity index, an interest rate). The following models are available:
geometric_brownian_motion: Black-Scholes-Merton (1973) geometric Brownian motion
jump_diffusion: Merton (1976) jump diffusion
stochastic_volatility: Heston (1993) stochastic volatility model
stoch_vol_jump_diffusion: Bates (1996) stochastic volatility jump diffusion
square_root_diffusion: Cox-Ingersoll-Ross (1985) square-root diffusion
square_root_jump_diffusion: square-root jump diffusion (experimental)
square_root_jump_diffusion_plus: square-root jump diffusion plus term structure (experimental)
End of explanation
r = constant_short_rate('r', 0.06)
Explanation: Throughout this section we fix a constant_short_rate discounting object.
End of explanation
me = market_environment(name='me', pricing_date=dt.datetime(2015, 1, 1))
Explanation: geometric_brownian_motion
To instantiate any kind of model class, you need a market_environment object containing a minimum set of data (depending on the specific model class).
End of explanation
me.add_constant('initial_value', 36.)
me.add_constant('volatility', 0.2)
me.add_constant('final_date', dt.datetime(2015, 12, 31))
# time horizon for the simulation
me.add_constant('currency', 'EUR')
me.add_constant('frequency', 'M')
# monthly frequency; parameter according to pandas convention
me.add_constant('paths', 10000)
# number of paths for simulation
Explanation: For the geometric Brownian motion class, the minimum set is as follows with regard to the constant parameter values. Here, we simply make assumptions; in practice, the values would, for example, be retrieved from a data service provider like Thomson Reuters or Bloomberg. The frequency parameter is according to the pandas frequency conventions (cf. <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html" target="_blank">http://pandas.pydata.org/pandas-docs/stable/timeseries.html</a>).
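For reference, the risk-neutral dynamics usually associated with this class are the standard Black-Scholes-Merton SDE (a textbook statement, not quoted from the library documentation): $$dS_t = r S_t\,dt + \sigma S_t\,dZ_t,$$ where $S_0$ is the initial_value, $r$ the constant short rate from the discounting object and $\sigma$ the volatility.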
End of explanation
me.add_curve('discount_curve', r)
Explanation: Every model class needs a discounting object since this defines the risk-neutral drift of the risk factor.
End of explanation
gbm = geometric_brownian_motion('gbm', me)
Explanation: The instantiation of a model class is then accomplished by providing a name as a string object and the respective market_environment object.
End of explanation
gbm.generate_time_grid()
gbm.time_grid
Explanation: The generate_time_grid method generates an ndarray object of datetime objects given the specifications in the market environment. This represents the discretization of the time interval between the pricing_date and the final_date. This method does not need to be called actively.
End of explanation
paths = gbm.get_instrument_values()
paths[:, :2]
Explanation: The simulation itself is initiated by a call of the method get_instrument_values. It returns an ndarray object containing the simulated paths for the risk factor.
End of explanation
%matplotlib inline
colormap='RdYlBu_r'
lw=1.25
figsize=(10, 6)
legend=False
no_paths=10
Explanation: These can, for instance, be visualized easily. First some plotting parameter specifications we want to use throughout.
End of explanation
pdf = pd.DataFrame(paths, index=gbm.time_grid)
Explanation: For easy plotting, we put the data with the time_grid information into a pandas DataFrame object.
End of explanation
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
Explanation: The following visualizes the first 10 paths of the simulation.
End of explanation
me.add_constant('lambda', 0.7)
# probability for a jump p.a.
me.add_constant('mu', -0.8)
# expected relative jump size
me.add_constant('delta', 0.1)
# standard deviation of relative jump
Explanation: jump_diffusion
The next model is the jump diffusion model from Merton (1976) adding a log-normally distributed jump component to the geometric Brownian motion. Three more parameter values are needed:
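As orientation for the three new parameters (standard Merton (1976) form; an assumption about the class's internals, not a quote from the documentation): $$dS_t = (r - r_J)\,S_t\,dt + \sigma S_t\,dZ_t + J_t S_t\,dN_t,$$ where $N_t$ is a Poisson process with intensity lambda, the relative jump sizes are log-normally distributed with parameters mu and delta, and $r_J = \lambda\,(e^{\mu + \delta^2/2} - 1)$ is the drift correction that keeps the dynamics risk-neutral.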
End of explanation
jd = jump_diffusion('jd', me)
paths = jd.get_instrument_values()
pdf = pd.DataFrame(paths, index=jd.time_grid)
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
Explanation: The instantiation of the model class and usage then is the same as before.
End of explanation
me.add_constant('rho', -.5)
# correlation between risk factor process (eg index)
# and variance process
me.add_constant('kappa', 2.5)
# mean reversion factor
me.add_constant('theta', 0.1)
# long-term variance level
me.add_constant('vol_vol', 0.1)
# volatility factor for variance process
Explanation: stochastic_volatility
Another important financial model is the stochastic volatility model according to Heston (1993). Compared to the geometric Brownian motion, this model class needs four more parameter values.
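For orientation, the textbook Heston dynamics these parameters refer to (a standard form, not quoted from the library documentation): $$dS_t = r S_t\,dt + \sqrt{v_t}\,S_t\,dZ^1_t, \qquad dv_t = \kappa(\theta - v_t)\,dt + \sigma_v \sqrt{v_t}\,dZ^2_t,$$ with $dZ^1_t\,dZ^2_t = \rho\,dt$; here kappa is the mean-reversion factor, theta the long-term variance, vol_vol the volatility of variance $\sigma_v$ and rho the correlation between the two processes.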
End of explanation
sv = stochastic_volatility('sv', me)
Explanation: Again, the instantiation and usage of this model class are essentially the same.
End of explanation
paths = sv.get_instrument_values()
pdf = pd.DataFrame(paths, index=sv.time_grid)
# index level paths
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
Explanation: The following visualizes 10 simulated paths for the risk factor process.
End of explanation
vols = sv.get_volatility_values()
pdf = pd.DataFrame(vols, index=sv.time_grid)
# volatility paths
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
Explanation: This model class has a second set of simulated paths, namely for the variance process.
End of explanation
svjd = stoch_vol_jump_diffusion('svjd', me)
Explanation: stoch_vol_jump_diffusion
The next model class, i.e. stoch_vol_jump_diffusion, combines stochastic volatility with a jump diffusion according to Bates (1996). Our market environment object me contains already all parameters needed for the instantiation of this model.
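For orientation, the Bates (1996) model combines the two previous dynamics (again a standard textbook form, an assumption about the class's internals): the index follows $$dS_t = (r - r_J)\,S_t\,dt + \sqrt{v_t}\,S_t\,dZ^1_t + J_t S_t\,dN_t,$$ while the variance $v_t$ follows the same square-root process as in the Heston case.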
End of explanation
paths = svjd.get_instrument_values()
pdf = pd.DataFrame(paths, index=svjd.time_grid)
# index level paths
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
vols = svjd.get_volatility_values()
pdf = pd.DataFrame(vols, index=svjd.time_grid)
# volatility paths
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
Explanation: As with the stochastic_volatility class, this class generates simulated paths for both the risk factor and variance process.
End of explanation
# short rate like parameters
me.add_constant('initial_value', 0.05)
# starting value (e.g. initial short rate)
me.add_constant('volatility', 0.1)
# volatility factor
me.add_constant('kappa', 2.5)
# mean reversion factor
me.add_constant('theta', 0.01)
# long-term mean
Explanation: square_root_diffusion
The square_root_diffusion model class is based on the square-root diffusion according to Cox-Ingersoll-Ross (1985). This class is often used to model stochastic short rates or a volatility process (e.g. the VSTOXX volatility index). The model needs the following set of parameters:
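For orientation, the Cox-Ingersoll-Ross dynamics behind this class (standard form, not quoted from the library documentation): $$dx_t = \kappa(\theta - x_t)\,dt + \sigma\sqrt{x_t}\,dZ_t,$$ with initial_value $x_0$, mean-reversion factor kappa, long-term mean theta and volatility factor $\sigma$.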
End of explanation
srd = square_root_diffusion('srd', me)
paths = srd.get_instrument_values()
pdf = pd.DataFrame(paths, index=srd.time_grid)
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
Explanation: As before, the handling of the model class is the same, making it easy to simulate paths given the parameter specifications and visualize them.
End of explanation
# volatility index like parameters
me.add_constant('initial_value', 25.)
# starting values
me.add_constant('kappa', 2.)
# mean-reversion factor
me.add_constant('theta', 20.)
# long-term mean
me.add_constant('volatility', 1.)
# volatility of diffusion
me.add_constant('lambda', 0.3)
# probability for jump p.a.
me.add_constant('mu', 0.4)
# expected jump size
me.add_constant('delta', 0.2)
# standard deviation of jump
Explanation: square_root_jump_diffusion
Experimental Status
Building on the square-root diffusion, there is a square-root jump diffusion adding a log-normally distributed jump component. The following parameters might be for a volatility index, for example. In this case, the major risk might be a large positive jump in the index. The following model parameters are needed:
End of explanation
srjd = square_root_jump_diffusion('srjd', me)
paths = srjd.get_instrument_values()
pdf = pd.DataFrame(paths, index=srjd.time_grid)
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
paths[-1].mean()
Explanation: Once the square_root_jump_diffusion class is instantiated, the handling of the resulting object is the same as with the other model classes.
End of explanation
term_structure = np.array([(dt.datetime(2015, 1, 1), 25.),
(dt.datetime(2015, 3, 31), 24.),
(dt.datetime(2015, 6, 30), 27.),
(dt.datetime(2015, 9, 30), 28.),
(dt.datetime(2015, 12, 31), 30.)])
me.add_curve('term_structure', term_structure)
srjdp = square_root_jump_diffusion_plus('srjdp', me)
Explanation: square_root_jump_diffusion_plus
Experimental Status
This model class further enhances the square_root_jump_diffusion class by adding a deterministic shift approach according to Brigo-Mercurio (2001) to account for a market given term structure (e.g. in volatility, interest rates). Let us define a simple term structure as follows:
End of explanation
srjdp.generate_shift_base((2.0, 20., 0.1))
Explanation: The method generate_shift_base calibrates the square-root diffusion to the given term structure (varying the parameters kappa, theta and volatility).
End of explanation
srjdp.shift_base
# difference between market and model
# forward rates after calibration
Explanation: The results are shift_base values, i.e. the difference between the model and market implied forward rates.
End of explanation
srjdp.update_shift_values()
srjdp.shift_values
# shift values to apply to simulation scheme
# given the shift base values
Explanation: The method update_shift_values then calculates deterministic shift values for the relevant time grid by interpolation of the shift_base values.
End of explanation
srjdp.update_forward_rates()
srjdp.forward_rates
# model forward rates resulting from parameters
Explanation: When simulating the process, the model forward rates ...
End of explanation
srjdp.forward_rates[:, 1] + srjdp.shift_values[:, 1]
# shifted forward rates
Explanation: ... are then shifted by the shift_values to better match the term structure.
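(In other words, the simulated values are effectively $x_t + \varphi(t)$, where the deterministic shift $\varphi(t)$ interpolates the market-minus-model forward-rate differences computed above — a sketch of the Brigo-Mercurio deterministic-shift idea, not a statement taken from the library documentation.)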
End of explanation
paths = srjdp.get_instrument_values()
pdf = pd.DataFrame(paths, index=srjdp.time_grid)
pdf[pdf.columns[:no_paths]].plot(colormap=colormap, lw=lw,
figsize=figsize, legend=legend)
Explanation: The simulated paths then include the deterministic shift.
End of explanation
paths[-1].mean()
Explanation: The effect might not be immediately visible in the paths plot, however, the mean of the simulated values in this case is higher by about 1 point compared to the square_root_jump_diffusion simulation without deterministic shift.
End of explanation |
11,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: We can inspect the backend via
Step2: Defining the VGG Model
We begin by first generating our VGG network. VGG is a popular convolutional neural network with ~19 layers. Not only does this network perform well with fine-tuning, but it is also easy to define since the convolutional layers all have 3x3 filter sizes, and only differ in the number of feature maps.
We first define some common parameters used by all the convolution layers
Step3: Then, we can define our network as a list of layers
Step4: The last layer of VGG is an Affine layer with 1000 units, for the 1000 categories in the ImageNet dataset. However, since our dataset only has 10 classes, we will instead use 10 output units. We also give this layer a special name (class_layer) so we know not to load pre-trained weights for this layer.
Step5: Now we are ready to load the pre-trained weights into this model. First we generate a Model object to hold the VGG layers
Step6: Loading pre-trained weights
Next, we download the pre-trained VGG weights from our Model Zoo. Note
Step7: In neon, models are saved as python dictionaries. Below are some example calls to explore the model. You can examine the weights, the layer configuration, and more.
Step8: We encourage you to use the below blank code cell to explore the model dictionary!
We then iterate over the layers in our model, and load the weights from trained_vgg using each layer's layer.load_weights method. The final Affine layer is different between our model and the pre-trained model, since the number of classes has changed. Therefore, we break the loop when we encounter the final Affine layer, which has the name class_layer.
Step9: As a check, the above code should have printed out pairs of layer names, from our model and the pre-trained vgg models. The exact name may differ, but the type of layer and layer number should match between the two.
Fine-tuning VGG on the CIFAR-10 dataset
Now that we've modified the model for our new CIFAR-10 dataset, and loaded the model weights, let's give training a try!
Aeon Dataloader
The CIFAR-10 dataset is small enough to fit into memory, meaning that we would normally use an ArrayIterator to generate our dataset. However, the CIFAR-10 images are 32x32, and VGG was trained on ImageNet, which has images of 224x224 size. For this reason, we use our macrobatching dataloader aeon, which performs image scaling and cropping on-the-fly.
To prepare the data, we first invoke an ingestion script that downloads the data and creates the macrobatches. The script is located in your neon folder. Here we use some python language to extract the path to neon from the virtual environment.
Step10: Then we execute the ingestion script below
Step11: Aeon configuration
Aeon allows a diverse set of configurations to specify which transformations are applied on-the-fly during training. These configs are specified as Python dictionaries. For more detail, see the aeon documentation.
Step12: We then use this configuration to create our dataloader. The outputs from the dataloader are then sent through a series of transformations.
Step13: Optimizer configuration
For fine-tuning, we want the final Affine layer to be updated with a higher learning rate compared to the pre-trained weights throughout the rest of the network.
Step14: Finally, we set up callbacks so the model can report progress during training, and then run the model.fit function. Note that if you are on a CPU, this next section will take a long time to finish before you see results. | Python Code:
from neon.backends import gen_backend
be = gen_backend(batch_size=64, backend='cpu')
Explanation: Tutorial: Fine-tuning VGG on CIFAR-10
One of the most common questions we get is how to use neon to load a pre-trained model and fine-tune on a new dataset. In this tutorial, we show how to load a pre-trained convolutional neural network (VGG), which was trained on ImageNet, a large corpus of natural images with 1000 categories. We will then use this model to train on the CIFAR-10 dataset, a much smaller set of images with 10 categories.
We begin by first generating a computational backend with the gen_backend function from neon. If there is a GPU available (recommended), this function will generate a GPU backend. Otherwise, a CPU backend will be used.
Note: VGG will not fit on a Kepler GPU, so here we use CPU backend for instructional purposes. If you are running on a Maxwell+ GPU, switch the backend to gpu below.
End of explanation
print be
Explanation: We can inspect the backend via:
End of explanation
from neon.transforms import Rectlin
from neon.initializers import Constant, Xavier
relu = Rectlin()
conv_params = {'strides': 1,
'padding': 1,
'init': Xavier(local=True),
'bias': Constant(0),
'activation': relu}
Explanation: Defining the VGG Model
We begin by first generating our VGG network. VGG is a popular convolutional neural network with ~19 layers. Not only does this network perform well with fine-tuning, but it is also easy to define since the convolutional layers all have 3x3 filter sizes, and only differ in the number of feature maps.
We first define some common parameters used by all the convolution layers:
End of explanation
from neon.layers import Conv, Dropout, Pooling, GeneralizedCost, Affine
from neon.initializers import GlorotUniform
# Set up the model layers
vgg_layers = []
# set up 3x3 conv stacks with different number of filters
vgg_layers.append(Conv((3, 3, 64), **conv_params))
vgg_layers.append(Conv((3, 3, 64), **conv_params))
vgg_layers.append(Pooling(2, strides=2))
vgg_layers.append(Conv((3, 3, 128), **conv_params))
vgg_layers.append(Conv((3, 3, 128), **conv_params))
vgg_layers.append(Pooling(2, strides=2))
vgg_layers.append(Conv((3, 3, 256), **conv_params))
vgg_layers.append(Conv((3, 3, 256), **conv_params))
vgg_layers.append(Conv((3, 3, 256), **conv_params))
vgg_layers.append(Pooling(2, strides=2))
vgg_layers.append(Conv((3, 3, 512), **conv_params))
vgg_layers.append(Conv((3, 3, 512), **conv_params))
vgg_layers.append(Conv((3, 3, 512), **conv_params))
vgg_layers.append(Pooling(2, strides=2))
vgg_layers.append(Conv((3, 3, 512), **conv_params))
vgg_layers.append(Conv((3, 3, 512), **conv_params))
vgg_layers.append(Conv((3, 3, 512), **conv_params))
vgg_layers.append(Pooling(2, strides=2))
vgg_layers.append(Affine(nout=4096, init=GlorotUniform(), bias=Constant(0), activation=relu))
vgg_layers.append(Dropout(keep=0.5))
vgg_layers.append(Affine(nout=4096, init=GlorotUniform(), bias=Constant(0), activation=relu))
vgg_layers.append(Dropout(keep=0.5))
Explanation: Then, we can define our network as a list of layers:
End of explanation
from neon.transforms import Softmax
vgg_layers.append(Affine(nout=10, init=GlorotUniform(), bias=Constant(0), activation=Softmax(),
name="class_layer"))
Explanation: The last layer of VGG is an Affine layer with 1000 units, for the 1000 categories in the ImageNet dataset. However, since our dataset only has 10 classes, we will instead use 10 output units. We also give this layer a special name (class_layer) so we know not to load pre-trained weights for this layer.
End of explanation
from neon.models import Model
model = Model(layers=vgg_layers)
Explanation: Now we are ready to load the pre-trained weights into this model. First we generate a Model object to hold the VGG layers:
End of explanation
from neon.data.datasets import Dataset
from neon.util.persist import load_obj
import os
# location and size of the VGG weights file
url = 'https://s3-us-west-1.amazonaws.com/nervana-modelzoo/VGG/'
filename = 'VGG_D.p'
size = 554227541
# edit filepath below if you have the file elsewhere
_, filepath = Dataset._valid_path_append('data', '', filename)
if not os.path.exists(filepath):
Dataset.fetch_dataset(url, filename, filepath, size)
# load the weights param file
print("Loading VGG weights from {}...".format(filepath))
trained_vgg = load_obj(filepath)
print("Done!")
Explanation: Loading pre-trained weights
Next, we download the pre-trained VGG weights from our Model Zoo. Note: this file is quite large (~550MB). By default, the weights file is saved in your home directory. To change this, or if you have already downloaded the file somewhere else, please edit the filepath variable below.
End of explanation
print("The dictionary has the following keys: {}".format(trained_vgg.keys()))
layer0 = trained_vgg['model']['config']['layers'][0]
print("The first layer is of type: {}".format(layer0['type']))
# filter weights of the first layer
W = layer0['params']['W']
print("The first layer weights have average magnitude of {:.2}".format(abs(W).mean()))
Explanation: In neon, models are saved as python dictionaries. Below are some example calls to explore the model. You can examine the weights, the layer configuration, and more.
End of explanation
param_layers = [l for l in model.layers.layers]
param_dict_list = trained_vgg['model']['config']['layers']
for layer, params in zip(param_layers, param_dict_list):
if(layer.name == 'class_layer'):
break
# To be sure, we print the name of the layer in our model
# and the name in the vgg model.
print(layer.name + ", " + params['config']['name'])
layer.load_weights(params, load_states=True)
Explanation: We encourage you to use the below blank code cell to explore the model dictionary!
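For instance, a few illustrative exploration calls (using only the keys already shown in the cell above) might look like this:
# Illustrative only: list the first few layer entries of the pre-trained model dictionary
for i, layer in enumerate(trained_vgg['model']['config']['layers'][:5]):
    print(i, layer['type'], layer['config']['name'])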
We then iterate over the layers in our model, and load the weights from trained_vgg using each layer's layer.load_weights method. The final Affine layer is different between our model and the pre-trained model, since the number of classes has changed. Therefore, we break the loop when we encounter the final Affine layer, which has the name class_layer.
End of explanation
import neon
import os
neon_path = os.path.split(os.path.dirname(neon.__file__))[0]
print "Found path to neon as {}".format(neon_path)
Explanation: As a check, the above code should have printed out pairs of layer names, from our model and the pre-trained vgg models. The exact name may differ, but the type of layer and layer number should match between the two.
Fine-tuning VGG on the CIFAR-10 dataset
Now that we've modified the model for our new CIFAR-10 dataset, and loaded the model weights, let's give training a try!
Aeon Dataloader
The CIFAR-10 dataset is small enough to fit into memory, meaning that we would normally use an ArrayIterator to generate our dataset. However, the CIFAR-10 images are 32x32, and VGG was trained on ImageNet, which has images of 224x224 size. For this reason, we use our macrobatching dataloader aeon, which performs image scaling and cropping on-the-fly.
To prepare the data, we first invoke an ingestion script that downloads the data and creates the macrobatches. The script is located in your neon folder. Here we use some python language to extract the path to neon from the virtual environment.
End of explanation
%run $neon_path/examples/cifar10_msra/data.py --out_dir data/cifar10/
Explanation: Then we execute the ingestion script below
End of explanation
config = {
'manifest_filename': 'data/cifar10/train-index.csv', # CSV manifest of data
'manifest_root': 'data/cifar10', # root data directory
'image': {'height': 224, 'width': 224, # output image size
'scale': [0.875, 0.875], # random scaling of image before cropping
'flip_enable': True}, # randomly flip image
'type': 'image,label', # type of data
'minibatch_size': be.bsz # batch size
}
Explanation: Aeon configuration
Aeon allows a diverse set of configurations to specify which transformations are applied on-the-fly during training. These configs are specified as Python dictionaries. For more detail, see the aeon documentation.
End of explanation
import numpy as np
from neon.data.aeon_shim import AeonDataLoader
from neon.data.dataloader_transformers import OneHot, TypeCast, BGRMeanSubtract
train_set = AeonDataLoader(config, be)
train_set = OneHot(train_set, index=1, nclasses=10) # perform onehot on the labels
train_set = TypeCast(train_set, index=0, dtype=np.float32) # cast the image to float32
train_set = BGRMeanSubtract(train_set, index=0) # subtract image color means (based on default values)
Explanation: We then use this configuration to create our dataloader. The outputs from the dataloader are then sent through a series of transformations.
End of explanation
from neon.optimizers import GradientDescentMomentum, Schedule, MultiOptimizer
from neon.transforms import CrossEntropyMulti
# define different optimizers for the class_layer and the rest of the network
# we use a momentum coefficient of 0.9 and weight decay of 0.0005.
opt_vgg = GradientDescentMomentum(0.001, 0.9, wdecay=0.0005)
opt_class_layer = GradientDescentMomentum(0.01, 0.9, wdecay=0.0005)
# also define optimizers for the bias layers, which have a different learning rate
# and not weight decay.
opt_bias = GradientDescentMomentum(0.002, 0.9)
opt_bias_class = GradientDescentMomentum(0.02, 0.9)
# set up the mapping of layers to optimizers
opt = MultiOptimizer({'default': opt_vgg, 'Bias': opt_bias,
'class_layer': opt_class_layer, 'class_layer_bias': opt_bias_class})
# use cross-entropy cost to train the network
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
Explanation: Optimizer configuration
For fine-tuning, we want the final Affine layer to be updated with a higher learning rate compared to the pre-trained weights throughout the rest of the network.
End of explanation
from neon.callbacks.callbacks import Callbacks
callbacks = Callbacks(model)
model.fit(train_set, optimizer=opt, num_epochs=10, cost=cost, callbacks=callbacks)
Explanation: Finally, we set up callbacks so the model can report progress during training, and then run the model.fit function. Note that if you are on a CPU, this next section will take a long time to finish before you see results.
End of explanation |
11,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MODELED.netconf
github.com/modeled/modeled.netconf
Highly Pythonized NETCONF and YANG
Automagically create YANG
modules and data containers from
MODELED Python classes
>>>
Simply turn Python methods into
NETCONF/YANG RPC methods
using decorators
>>>
Directly start NETCONF servers from modeled YANG modules
>>>
TODO
Step7: To install in development mode
Step8: To check if the Turing Machine works, it needs an actual program.
I took it from the pyang tutorial again.
It's a very simple program for adding to numbers in unary notation,
separated by a 0.
It can easily be defined YAML.
If you haven't installed pyyaml yet
Step9: Instantiate the Turing Machine with the loaded program
Step10: And set the initial state for computing unary 1 + 2
Step11: The tape string gets automatically converted to a list,
because TuringMachine.tape is defined as a list member
Step12: Ready for turning on the Turing Machine
Step13: Final state is reached. Result is unary 3. Seems to work!
YANGifying the Turing Machine
Creating a YANG module from the modeled TuringMachine class
is now quite simple. Just import the modeled YANG module adapter class
Step14: And plug it to the TuringMachine.
This will create a new class which will be derived
from both the YANG module adapter and the TuringMachine class
Step15: It also has a class attribute referencing the original modeled class
Step16: BTW
Step17: But let's take look at the really useful features now.
The adapted class dynamically provides .to_...() methods
for every pyang output format plugin
which you could pass to the pyang command's -f flag.
Calling such a method will programmatically
create a pyang.statement.Statement tree
(which pyang does internally on loading an input file)
according to the typed members of the adapted modeled class.
Every .to_...() method takes optional
revision date and XML prefix and namespace arguments.
If no revision is given,
the current date will be used.
The adapted class will be mapped to a YANG module
and its main data container definition.
Module and container name will be generated from the name
of the adapted modeled class
by decapitalizing and joining its name parts with hyphens.
YANG leaf names will be generated from modeled member names
by replacing underscores with hyphens.
list and dict members will be mapped to YANG list definitions.
If members have other modeled classes as types,
sub-containers will be defined.
Type mapping is very simple in this early project stage.
Only int and str are supported
and no YANG typedefs are used.
All containers and their contents are defined configurable
(with write permissions).
That will change soon...
The result is a complete module definition text in the given format,
like default YANG
Step18: Or XMLified YIN
Step19: Since the modeled YANG module
is derived from the adapted TuringMachine class,
it can still be instantiated and used in the same way
Step20: Adding RPC methods
The above modeled YANG module is not very useful
without some RPC methods for controlling the Turing Machine via NETCONF.
MODELED.netconf offers a simple @rpc decorator
for defining them
Step23: The following RPC definitions are again designed
according to the pyang tutorial.
Since those RPC methods are NETCONF/YANG specific,
they are defined after the modeled YANG adaption.
The simplest way is to derive a new class for that purpose
Step24: Now the .to_yang() conversion also includes the rpc definitions,
with descriptions taken from the Python methods' __doc__ strings,
and rpc and input leaf names automatically
created from the Python method and argument names
by replacing underscores with hyphens again
Step25: Now is a good time to verify if that's really correct YANG.
Just write it to a file
Step26: And feed it to the pyang command.
Since the pyang tutorial also produces
a tree format output from its YANG Turing Machine,
I also do it here for comparison
(!... runs external programs in IPython)
Step27: No errors. Great!
From modeled YANG modules to a NETCONF service
Finally! Time to run a Turing Machine NETCONF server...
First create an instance of the final Turing Machine class
with RPC method definitions
Step28: Currently only serving NETCONF over
SSH is supported.
An SSH service needs a network port and user authentication credentials
Step29: And it needs an SSH key.
If you don't have any key lying around,
the UNIX tool ssh-keygen from
OpenSSH
(or Windows tools like
PuTTY)
can generate one for you.
Just name the file key
Step30: And that's it! The created server is an instance of Python
netconf project's
NetconfSSHServer class.
The server's internals run in a separate thread,
so it doesn't block the Python script.
We can just continue with creating a NETCONF client
which talks to the server.
Let's directly use NetconfSSHSession
from the netconf project for now.
The Pythonic client features of MODELED.netconf are not implemented yet,
but they will also be based on netconf.
Step31: Now the Turing Machine can be remotely initialized
with a NETCONF RPC call.
Let's compute unary 2 + 3 this time.
Normally this would also need the Turing Machine's XML namespace,
but namespace handling is not properly supported yet
by MODELED.netconf
Step32: The tape will be set accordingly
Step33: Now run the Turing Machine via RPC | Python Code:
import modeled.netconf
modeled.netconf.__requires__
Explanation: MODELED.netconf
github.com/modeled/modeled.netconf
Highly Pythonized NETCONF and YANG
Automagically create YANG
modules and data containers from
MODELED Python classes
>>>
Simply turn Python methods into
NETCONF/YANG RPC methods
using decorators
>>>
Directly start NETCONF servers from modeled YANG modules
>>>
TODO:
Proper RPC namespace handling
Handle default NETCONF RPC methods like <get> or <get-config>
Create NETCONF servers with multiple modeled YANG modules
Automagically create Pythonic NETCONF clients from YANG definitions
WARNING: This project is in early alpha state
and therefore not production ready.
ABOUT this README:
It contains setup instructions
and a tutorial.
It was automatically created from
IPython notebook README.ipynb.
You can view the notebook
online.
The internal links don't work on Bitbucket.
Setup
Just use pip
to install the latest release from PyPI:
pip install modeled.netconf
It automatically installs all runtime requirements:
End of explanation
import modeled
from modeled import member
class Input(modeled.object):
The input part of a Turing Machine program rule.
state = member[int]()
symbol = member[str]()
class Output(modeled.object):
The output part of a Turing Machine program rule.
state = member[int]()
symbol = member[str]()
head_move = member[str]['L', 'R']()
class Rule(modeled.object):
A Turing Machine program rule.
input = member[Input]()
output = member[Output]()
def __init__(self, input, output):
Expects both `input` and `output` as mappings.
self.input = Input(
# modeled.object.__init__ supports **kwargs
# for initializing modeled.member values
**dict(input))
self.output = Output(**dict(output))
class TuringMachine(modeled.object):
state = member[int]()
head_position = member[int]()
# the list of symbols on the input/output tape
tape = member.list[str](indexname='cell', itemname='symbol')
# the machine program as named rules
program = member.dict[str, Rule](keyname='name')
def __init__(self, program):
Create a Turing Machine with the given `program`.
program = dict(program)
for name, (input, output) in program.items():
self.program[name] = Rule(input, output)
def run(self):
Start the Turing Machine.
- Runs until no matching input part for current state and tape symbol
can be found in the program rules.
self.log = " %s %d\n" % (''.join(self.tape), self.state)
while True:
pos = self.head_position
if 0 <= pos < len(self.tape):
symbol = self.tape[pos]
else:
symbol = None
for name, rule in self.program.items():
if (self.state, symbol) == (rule.input.state, rule.input.symbol):
self.log += "%s^%s --> %s\n" % (
' ' * (pos + 1),
' ' * (len(self.tape) - pos),
name)
if rule.output.state is not None:
self.state = rule.output.state
if rule.output.symbol is not None:
self.tape[pos] = rule.output.symbol
self.head_position += {'L': -1, 'R': 1}[rule.output.head_move]
self.log += " %s %d\n" % (''.join(self.tape), self.state)
break
else:
break
Explanation: To install in development mode:
pip install -e /path/to/repository/
From a MODELED class to a YANG module
MODELED.netconf is based on my
MODELED framework,
which provides tools for defining Python classes
with typed members and methods,
quite similar to Django database models,
but with a more general approach.
Those modeled classes can then automagically be mapped
to data serialization formats, databases,
GUI frameworks, web frameworks, or whatever,
using the integrated modeled.Adapter system.
The MODELED framework is still in a late alpha stage,
needs some internal refactoring, and lacks documentation,
but I am actively working on this.
The basic principles should nevertheless become visible during the following example.
A MODELED Turing Machine
Since MODELED.netconf uses pyang
for auto-generating YANG definitions from modeled classes,
I decided to resemble the Turing Machine example from the
pyang tutorial...
a bit more simplified and with some little structural and naming changes...
however... below is a modeled Turing Machine implementation:
End of explanation
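For readers who have not used the MODELED framework before, a rough standard-library analogy may help when reading the classes above. The sketch below uses dataclasses only to mimic the idea of typed members; it is an illustration of the concept, not the modeled API, and all names in it are made up for this example.
# A loose stdlib analogy for "typed members" (illustration only, not the modeled API)
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RuleSketch:
    input_state: int
    input_symbol: str
    output_state: int
    output_symbol: str
    head_move: str

@dataclass
class MachineSketch:
    state: int = 0
    head_position: int = 0
    tape: List[str] = field(default_factory=list)
    program: Dict[str, RuleSketch] = field(default_factory=dict)
The real TuringMachine class goes further: modeled members perform runtime conversion (for example, a string assigned to the list-typed tape member becomes a list of symbols) and can be introspected by adapters such as the YANG mapping used later.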
%%file turing-machine-program.yaml
left summand:
- {state: 0, symbol: 1}
- {state: null, symbol: null, head_move: R}
separator:
- {state: 0, symbol: 0}
- {state: 1, symbol: 1, head_move: R}
right summand:
- {state: 1, symbol: 1}
- {state: null, symbol: null, head_move: R}
right end:
- {state: 1, symbol: null}
- {state: 2, symbol: null, head_move: L}
write separator:
- {state: 2, symbol: 1}
- {state: 3, symbol: 0, head_move: L}
go home:
- {state: 3, symbol: 1}
- {state: null, symbol: null, head_move: L}
final step:
- {state: 3, symbol: null}
- {state: 4, symbol: null, head_move: R}
import yaml
with open('turing-machine-program.yaml') as f:
    TM_PROGRAM = yaml.safe_load(f)  # safe_load avoids the explicit Loader that modern PyYAML requires for yaml.load
Explanation: To check if the Turing Machine works, it needs an actual program.
I took it from the pyang tutorial again.
It's a very simple program for adding two numbers in unary notation,
separated by a 0.
It can easily be defined in YAML.
If you haven't installed pyyaml yet:
pip install pyyaml
(%%... are IPython magic functions):
End of explanation
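As a quick sanity check (an extra step, not part of the original tutorial), the loaded program can be inspected to confirm that each named rule is a pair of an input mapping and an output mapping, which is exactly what the Rule class above expects:
# Each rule name maps to a two-item list: [input mapping, output mapping]
for name, (rule_input, rule_output) in TM_PROGRAM.items():
    print(name, ':', rule_input, '->', rule_output)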
tm = TuringMachine(TM_PROGRAM)
Explanation: Instantiate the Turing Machine with the loaded program:
End of explanation
tm.state = 0
tm.head_position = 0
tm.tape = '1011'
Explanation: And set the initial state for computing unary 1 + 2:
End of explanation
tm.tape
Explanation: The tape string gets automatically converted to a list,
because TuringMachine.tape is defined as a list member:
End of explanation
tm.run()
print(tm.log)
Explanation: Ready for turning on the Turing Machine:
End of explanation
from modeled.netconf import YANG
Explanation: Final state is reached. Result is unary 3. Seems to work!
YANGifying the Turing Machine
Creating a YANG module from the modeled TuringMachine class
is now quite simple. Just import the modeled YANG module adapter class:
End of explanation
YANG[TuringMachine].mro()
Explanation: And plug it to the TuringMachine.
This will create a new class which will be derived
from both the YANG module adapter and the TuringMachine class:
End of explanation
YANG[TuringMachine].mclass
Explanation: It also has a class attribute referencing the original modeled class:
End of explanation
YANG[TuringMachine] is YANG[TuringMachine]
Explanation: BTW: the class adaption will be cached,
so every YANG[TuringMachine] operation
will return the same class object:
End of explanation
print(YANG[TuringMachine].to_yang(
prefix='tm', namespace='http://modeled.netconf/turing-machine'))
Explanation: But let's take a look at the really useful features now.
The adapted class dynamically provides .to_...() methods
for every pyang output format plugin
which you could pass to the pyang command's -f flag.
Calling such a method will programmatically
create a pyang.statement.Statement tree
(which pyang does internally on loading an input file)
according to the typed members of the adapted modeled class.
Every .to_...() method takes optional
revision date and XML prefix and namespace arguments.
If no revision is given,
the current date will be used.
The adapted class will be mapped to a YANG module
and its main data container definition.
Module and container name will be generated from the name
of the adapted modeled class
by decapitalizing and joining its name parts with hyphens.
YANG leaf names will be generated from modeled member names
by replacing underscores with hyphens.
list and dict members will be mapped to YANG list definitions.
If members have other modeled classes as types,
sub-containers will be defined.
Type mapping is very simple in this early project stage.
Only int and str are supported
and no YANG typedefs are used.
All containers and their contents are defined configurable
(with write permissions).
That will change soon...
The result is a complete module definition text in the given format,
like default YANG:
End of explanation
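The naming rules just described are easy to reproduce by hand. The helper below only illustrates the convention (split the class name on capital letters, lower-case the parts and join them with hyphens; replace underscores in member names with hyphens); it is a standalone sketch, not the code MODELED.netconf uses internally.
import re

def yang_name_from_class(name):
    # 'TuringMachine' -> 'turing-machine'
    parts = re.findall(r'[A-Z][a-z0-9]*', name)
    return '-'.join(part.lower() for part in parts)

def yang_name_from_member(name):
    # 'head_position' -> 'head-position'
    return name.replace('_', '-')

print(yang_name_from_class('TuringMachine'))
print(yang_name_from_member('head_position'))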
print(YANG[TuringMachine].to_yin(
prefix='tm', namespace='http://modeled.netconf/turing-machine'))
Explanation: Or XMLified YIN:
End of explanation
tm = YANG[TuringMachine](TM_PROGRAM)
tm.state = 0
tm.head_position = 0
tm.tape = '1011'
tm.run()
tm.state, tm.tape
Explanation: Since the modeled YANG module
is derived from the adapted TuringMachine class,
it can still be instantiated and used in the same way:
End of explanation
from modeled.netconf import rpc
Explanation: Adding RPC methods
The above modeled YANG module is not very useful
without some RPC methods for controlling the Turing Machine via NETCONF.
MODELED.netconf offers a simple @rpc decorator
for defining them:
End of explanation
class TM(YANG[TuringMachine]):
@rpc(argtypes={'tape_content': str})
# in Python 3 you can also use function annotations
# and write (self, tape_content: str) below
# instead of argtypes= above
def initialize(self, tape_content):
Initialize the Turing Machine.
self.state = 0
self.head_position = 0
self.tape = tape_content
@rpc(argtypes={})
def run(self):
Start the Turing Machine operation.
TuringMachine.run(self)
Explanation: The following RPC definitions are again designed
according to the pyang tutorial.
Since those RPC methods are NETCONF/YANG specific,
they are defined after the modeled YANG adaption.
The simplest way is to derive a new class for that purpose:
End of explanation
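Because the @rpc decorator wraps ordinary Python methods, the new class can also be exercised locally before any NETCONF transport is involved. The short check below assumes the decorated methods remain directly callable as plain methods, which the later server code suggests but which is not spelled out in this README:
# Local smoke test of the RPC-decorated methods
# (assumes they stay callable as ordinary Python methods)
tm_check = TM(TM_PROGRAM)
tm_check.initialize('1011')
tm_check.run()
print(tm_check.state, tm_check.tape)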
TM_YANG = TM.to_yang(
prefix='tm', namespace='http://modeled.netconf/turing-machine')
print(TM_YANG)
Explanation: Now the .to_yang() conversion also includes the rpc definitions,
with descriptions taken from the Python methods' __doc__ strings,
and rpc and input leaf names automatically
created from the Python method and argument names
by replacing underscores with hyphens again:
End of explanation
with open('turing-machine.yang', 'w') as f:
f.write(TM_YANG)
Explanation: Now is a good time to verify if that's really correct YANG.
Just write it to a file:
End of explanation
!pyang -f tree turing-machine.yang
Explanation: And feed it to the pyang command.
Since the pyang tutorial also produces
a tree format output from its YANG Turing Machine,
I also do it here for comparison
(!... runs external programs in IPython):
End of explanation
tm = TM(TM_PROGRAM)
Explanation: No errors. Great!
From modeled YANG modules to a NETCONF service
Finally! Time to run a Turing Machine NETCONF server...
First create an instance of the final Turing Machine class
with RPC method definitions:
End of explanation
PORT = 12345
USERNAME = 'user'
PASSWORD = 'password'
Explanation: Currently only serving NETCONF over
SSH is supported.
An SSH service needs a network port and user authentication credentials:
End of explanation
server = tm.serve_netconf_ssh(
port=PORT, host_key='key', username=USERNAME, password=PASSWORD)
Explanation: And it needs an SSH key.
If you don't have any key lying around,
the UNIX tool ssh-keygen from
OpenSSH
(or Windows tools like
PuTTY)
can generate one for you.
Just name the file key:
ssh-keygen -f key
End of explanation
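If ssh-keygen is not at hand, the host key can also be generated from Python. This is an optional convenience and assumes the paramiko package is installed (it usually is in environments that already do SSH work); the generated file is a standard PEM-encoded RSA private key.
# Optional alternative to ssh-keygen: create the 'key' file programmatically
import os
import paramiko

if not os.path.exists('key'):
    paramiko.RSAKey.generate(2048).write_private_key_file('key')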
from netconf.client import NetconfSSHSession
client = NetconfSSHSession(
'localhost', port=PORT, username=USERNAME, password=PASSWORD)
Explanation: And that's it! The created server is an instance of Python
netconf project's
NetconfSSHServer class.
The server's internals run in a separate thread,
so it doesn't block the Python script.
We can just continue with creating a NETCONF client
which talks to the server.
Let's directly use NetconfSSHSession
from the netconf project for now.
The Pythonic client features of MODELED.netconf are not implemented yet,
but they will also be based on netconf.
End of explanation
reply = client.send_rpc(
'<initialize><tape-content>110111</tape-content></initialize>')
Explanation: Now the Turing Machine can be remotely initialized
with a NETCONF RPC call.
Let's compute unary 2 + 3 this time.
Normally this would also need the Turing Machine's XML namespace,
but namespace handling is not properly supported yet
by MODELED.netconf:
End of explanation
tm.tape
Explanation: The tape will be set accordingly:
End of explanation
reply = client.send_rpc('<run/>')
tm.state, tm.tape
Explanation: Now run the Turing Machine via RPC:
End of explanation |
11,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train a basic TensorFlow Lite for Microcontrollers model
This notebook demonstrates the process of training a 2.5 kB model using TensorFlow and converting it for use with TensorFlow Lite for Microcontrollers.
Deep learning networks learn to model patterns in underlying data. Here, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
The model created in this notebook is used in the hello_world example for TensorFlow Lite for MicroControllers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Setup Environment
Install Dependencies
Step2: Set Seed for Repeatable Results
Step3: Import Dependencies
Step4: Dataset
1. Generate Data
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph.
Step5: 2. Add Noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph
Step6: 3. Split the Data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.
The data is split as follows
Step7: Training
1. Design the Model
We're going to build a simple neural network model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 8 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note
Step8: 2. Train the Model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
The code in the following cell uses the x and y values from our training data to train the model. It runs for 500 epochs, with 64 pieces of data in each batch. We also pass in some data for validation. As you will see when you run the cell, training can take a while to complete
Step9: 3. Plot Metrics
1. Mean Squared Error
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form
Step10: The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
To make the flatter part of the graph more readable, let's skip the first 50 epochs
Step11: From the plot, we can see that loss continues to reduce until around 200 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 200 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and are sometimes even higher.
2. Mean Absolute Error
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers
Step12: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.
3. Actual vs Predicted Outputs
To get more insight into what is happening, let's check its predictions against the test dataset we set aside earlier
Step13: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Training a Larger Model
1. Design the Model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with 16 neurons in the first layer and an additional layer of 16 neurons in the middle
Step14: 2. Train the Model
We'll now train the new model.
Step15: 3. Plot Metrics
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ)
Step16: Great results! From these graphs, we can see several exciting things
Step17: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.
However, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Generate a TensorFlow Lite Model
1. Generate Models with or without Quantization
We now have an acceptably accurate model. We'll use the TensorFlow Lite Converter to convert the model into a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called quantization while converting the model. It reduces the precision of the model's weights, and possibly the activations (output of each layer) as well, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
Note
Step18: 2. Compare Model Sizes
Step19: Our quantized model is only 224 bytes smaller than the original version, which only a tiny reduction in size! At around 2.5 kilobytes, this model is already so small that the weights make up only a small fraction of the overall size, meaning quantization has little effect.
More complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for most sophisticated models.
Regardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller!
3. Test the Models
To prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results
Step20: We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be indistinguishable. This means that our quantized model is ready to use!
Generate a TensorFlow Lite for Microcontrollers Model
Convert the TensorFlow Lite quantized model into a C source file that can be loaded by TensorFlow Lite for Microcontrollers.
Step21: Deploy to a Microcontroller
Follow the instructions in the hello_world README.md for TensorFlow Lite for MicroControllers to deploy this model on a specific microcontroller.
Reference Model | Python Code:
# Define paths to model files
import os
MODELS_DIR = 'models/'
if not os.path.exists(MODELS_DIR):
os.mkdir(MODELS_DIR)
MODEL_TF = MODELS_DIR + 'model.pb'
MODEL_NO_QUANT_TFLITE = MODELS_DIR + 'model_no_quant.tflite'
MODEL_TFLITE = MODELS_DIR + 'model.tflite'
MODEL_TFLITE_MICRO = MODELS_DIR + 'model.cc'
Explanation: Train a basic TensorFlow Lite for Microcontrollers model
This notebook demonstrates the process of training a 2.5 kB model using TensorFlow and converting it for use with TensorFlow Lite for Microcontrollers.
Deep learning networks learn to model patterns in underlying data. Here, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
The model created in this notebook is used in the hello_world example for TensorFlow Lite for MicroControllers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Training is much faster using GPU acceleration. Before you proceed, ensure you are using a GPU runtime by going to Runtime -> Change runtime type and set Hardware accelerator: GPU.
Configure Defaults
End of explanation
! pip2 install gast==0.3.3
! pip install -q tensorflow==2
Explanation: Setup Environment
Install Dependencies
End of explanation
# Set a "seed" value, so we get the same random numbers each time we run this
# notebook for reproducible results.
# Numpy is a math library
import numpy as np
np.random.seed(1) # numpy seed
# TensorFlow is an open source machine learning library
import tensorflow as tf
tf.random.set_seed(1) # tensorflow global random seed
Explanation: Set Seed for Repeatable Results
End of explanation
# Keras is TensorFlow's high-level API for deep learning
from tensorflow import keras
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# Math is Python's math library
import math
Explanation: Import Dependencies
End of explanation
# Number of sample datapoints
SAMPLES = 1000
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(
low=0, high=2*math.pi, size=SAMPLES).astype(np.float32)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values).astype(np.float32)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
Explanation: Dataset
1. Generate Data
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph.
End of explanation
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
Explanation: 2. Add Noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph:
End of explanation
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
Explanation: 3. Split the Data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.
The data is split as follows:
1. Training: 60%
2. Validation: 20%
3. Testing: 20%
The following code will split our data and then plots each set as a different color:
End of explanation
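If the two-index form of np.split used above is unfamiliar, here is a tiny standalone illustration (a side note, separate from the training pipeline) of how two indices cut an array into three chunks:
# np.split with two indices produces three pieces
demo = np.arange(10)
first, second, third = np.split(demo, [6, 8])
print(first)   # elements 0..5
print(second)  # elements 6..7
print(third)   # elements 8..9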
# We'll use Keras to create a simple model architecture
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 8 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(keras.layers.Dense(8, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(keras.layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_1.compile(optimizer='adam', loss='mse', metrics=['mae'])
Explanation: Training
1. Design the Model
We're going to build a simple neural network model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 8 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.
The code in the following cell defines our model using Keras, TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we compile it, specifying parameters that determine how it will be trained:
End of explanation
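Before training, it can be reassuring to confirm just how small this network is. Keras can print a layer-by-layer parameter count (the exact numbers are left to your own run rather than quoted here):
# Show the architecture and parameter count of the small model
model_1.summary()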
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
Explanation: 2. Train the Model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
The code in the following cell uses the x and y values from our training data to train the model. It runs for 500 epochs, with 64 pieces of data in each batch. We also pass in some data for validation. As you will see when you run the cell, training can take a while to complete:
End of explanation
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: 3. Plot Metrics
1. Mean Squared Error
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form:
End of explanation
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
To make the flatter part of the graph more readable, let's skip the first 50 epochs:
End of explanation
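Since the stated goal is to stop training once the model stops improving, one optional refinement (not used in this notebook's own runs, which keep a fixed 500 epochs) is a Keras EarlyStopping callback; a sketch:
# Optional: stop automatically when validation loss stops improving
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=25, restore_best_weights=True)
# history = model_1.fit(x_train, y_train, epochs=500, batch_size=64,
#                       validation_data=(x_validate, y_validate),
#                       callbacks=[early_stop])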
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
Explanation: From the plot, we can see that loss continues to reduce until around 200 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 200 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and are sometimes even higher.
2. Mean Absolute Error
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers:
End of explanation
# Calculate and print the loss on our test dataset
loss = model_1.evaluate(x_test, y_test)
# Make predictions based on our test dataset
predictions = model_1.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
Explanation: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.
3. Actual vs Predicted Outputs
To get more insight into what is happening, let's check its predictions against the test dataset we set aside earlier:
End of explanation
model_2 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_2.add(keras.layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second layer may help the network learn more complex representations
model_2.add(keras.layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model_2.add(keras.layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_2.compile(optimizer='adam', loss='mse', metrics=['mae'])
Explanation: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Training a Larger Model
1. Design the Model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with 16 neurons in the first layer and an additional layer of 16 neurons in the middle:
End of explanation
history_2 = model_2.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
Explanation: 2. Train the Model
We'll now train the new model.
End of explanation
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_2.history['loss']
val_loss = history_2.history['val_loss']
epochs = range(1, len(loss) + 1)
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_2.history['mae']
val_mae = history_2.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.tight_layout()
Explanation: 3. Plot Metrics
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ):
Epoch 500/500
600/600 [==============================] - 0s 51us/sample - loss: 0.0118 - mae: 0.0873 - val_loss: 0.0105 - val_mae: 0.0832
You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.01, and validation MAE has dropped from 0.33 to 0.08.
The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
End of explanation
# Calculate and print the loss on our test dataset
loss = model_2.evaluate(x_test, y_test)
# Make predictions based on our test dataset
predictions = model_2.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
Explanation: Great results! From these graphs, we can see several exciting things:
The overall loss and MAE are much better than our previous network
Metrics are better for validation than training, which means the network is not overfitting
The reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.
This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
End of explanation
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
model_no_quant_tflite = converter.convert()
# Save the model to disk
open(MODEL_NO_QUANT_TFLITE, "wb").write(model_no_quant_tflite)
# Convert the model to the TensorFlow Lite format with quantization
def representative_dataset():
for i in range(500):
yield([x_train[i].reshape(1, 1)])
# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce full-int8 quantization (except inputs/outputs which are always float)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset
model_tflite = converter.convert()
# Save the model to disk
open(MODEL_TFLITE, "wb").write(model_tflite)
Explanation: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.
However, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Generate a TensorFlow Lite Model
1. Generate Models with or without Quantization
We now have an acceptably accurate model. We'll use the TensorFlow Lite Converter to convert the model into a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called quantization while converting the model. It reduces the precision of the model's weights, and possibly the activations (output of each layer) as well, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
Note: Currently, TFLite Converter produces TFlite models with float interfaces (input and output ops are always float). This is a blocker for users who require TFlite models with pure int8 or uint8 inputs/outputs. Refer to https://github.com/tensorflow/tensorflow/issues/38285
In the following cell, we'll convert the model twice: once with quantization, once without.
End of explanation
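The note above states that the converted models keep float input and output ops. That can be verified directly with the TensorFlow Lite interpreter (the same API used for testing further below); a quick check:
# Confirm the quantized model still exposes float32 input/output tensors
check = tf.lite.Interpreter(model_path=MODEL_TFLITE)
check.allocate_tensors()
print(check.get_input_details()[0]['dtype'])
print(check.get_output_details()[0]['dtype'])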
import os
model_no_quant_size = os.path.getsize(MODEL_NO_QUANT_TFLITE)
print("Model is %d bytes" % model_no_quant_size)
model_size = os.path.getsize(MODEL_TFLITE)
print("Quantized model is %d bytes" % model_size)
difference = model_no_quant_size - model_size
print("Difference is %d bytes" % difference)
Explanation: 2. Compare Model Sizes
End of explanation
# Instantiate an interpreter for each model
model_no_quant = tf.lite.Interpreter(MODEL_NO_QUANT_TFLITE)
model = tf.lite.Interpreter(MODEL_TFLITE)
# Allocate memory for each model
model_no_quant.allocate_tensors()
model.allocate_tensors()
# Get the input and output tensors so we can feed in values and get the results
model_no_quant_input = model_no_quant.tensor(model_no_quant.get_input_details()[0]["index"])
model_no_quant_output = model_no_quant.tensor(model_no_quant.get_output_details()[0]["index"])
model_input = model.tensor(model.get_input_details()[0]["index"])
model_output = model.tensor(model.get_output_details()[0]["index"])
# Create arrays to store the results
model_no_quant_predictions = np.empty(x_test.size)
model_predictions = np.empty(x_test.size)
# Run each model's interpreter for each value and store the results in arrays
for i in range(x_test.size):
model_no_quant_input().fill(x_test[i])
model_no_quant.invoke()
model_no_quant_predictions[i] = model_no_quant_output()[0]
model_input().fill(x_test[i])
model.invoke()
model_predictions[i] = model_output()[0]
# See how they line up with the data
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual values')
plt.plot(x_test, predictions, 'ro', label='Original predictions')
plt.plot(x_test, model_no_quant_predictions, 'bx', label='Lite predictions')
plt.plot(x_test, model_predictions, 'gx', label='Lite quantized predictions')
plt.legend()
plt.show()
Explanation: Our quantized model is only 224 bytes smaller than the original version, which is only a tiny reduction in size! At around 2.5 kilobytes, this model is already so small that the weights make up only a small fraction of the overall size, meaning quantization has little effect.
More complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for most sophisticated models.
Regardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller!
3. Test the Models
To prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results:
End of explanation
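Beyond the visual comparison, the agreement can also be summarized numerically using the prediction arrays that were just computed; a small follow-up:
# Quantify how far each model strays from the test data
print('Keras model MAE:    ', np.mean(np.abs(predictions.flatten() - y_test)))
print('TFLite model MAE:   ', np.mean(np.abs(model_no_quant_predictions - y_test)))
print('Quantized model MAE:', np.mean(np.abs(model_predictions - y_test)))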
# Install xxd if it is not available
!apt-get update && apt-get -qq install xxd
# Convert to a C source file
!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}
# Update variable names
REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')
!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}
Explanation: We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be indistinguishable. This means that our quantized model is ready to use!
Generate a TensorFlow Lite for Microcontrollers Model
Convert the TensorFlow Lite quantized model into a C source file that can be loaded by TensorFlow Lite for Microcontrollers.
End of explanation
# Print the C source file
!cat {MODEL_TFLITE_MICRO}
Explanation: Deploy to a Microcontroller
Follow the instructions in the hello_world README.md for TensorFlow Lite for MicroControllers to deploy this model on a specific microcontroller.
Reference Model: If you have not modified this notebook, you can follow the instructions as is, to deploy the model. Refer to the hello_world/train/models directory to access the models generated in this notebook.
New Model: If you have generated a new model, then update the values assigned to the variables defined in hello_world/model.cc with values displayed after running the following cell.
End of explanation |
11,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 0. Before start
OK, to begin we need to import some standart Python modules
Step2: 1. Setup
First, let us setup the working area.
Step3: Let's show our all-zero image
Step4: 2. Main class definition
What we are now going to do is create a class named Hyperbola
Each object of this class is capable of computing traveltimes to a certain subsurface point (diffractor) and plotting this point response (hyperbola) on a grid
How? to more clearly define a class? probably change to a function?
Step5: For testing purposes, let's create an object named Hyp_test and view its parameters
Step6: 3. Creating the model and 'forward modelling'
OK, now let's define a subsurface model. For the sake of simplicity, the model will consist of two types of objects
Step7: Next step is computing traveltimes for these subsurface diffractors. This is done by creating an instance of Hyperbola class for every diffractor.
Step8: ~~Next step is computing Green's functions for these subsurface diffractors. To do this, we need to setup a wavelet.~~
Of course, we are going to create an extremely simple wavelet.
Step9: Define a Line class
Step10: Create a line and add it to image
Step11: Excellent. The image now is pretty messy, so we need to migrate it and see what we can achieve
4. Migration definition
Step12: 5. Migration application | Python Code:
# -*- coding: utf-8 -*-
Created on Fri Feb 12 13:21:45 2016
@author: GrinevskiyAS
from __future__ import division
import numpy as np
from numpy import sin,cos,tan,pi,sqrt
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
%matplotlib inline
font = {'family': 'Arial', 'weight': 'normal', 'size':14}
mpl.rc('font', **font)
mpl.rc('figure', figsize=(9, 7))
#mpl.rc({'axes.facecolor':[0.97,0.97,0.98],"axes.edgecolor":[0.7,0.7,0.7],"grid.linewidth": 1,
# "axes.titlesize":16,"xtick.labelsize":20,"ytick.labelsize":14})
Explanation: 0. Before start
OK, to begin we need to import some standart Python modules
End of explanation
#This would be the size of each grid cell (X is the spatial coordinate, T is two-way time)
xstep=5
tstep=5
#size of the whole grid
xmax = 320
tmax = 220
#that's the arrays of x and t
xarray = np.arange(0, xmax, xstep).astype(float)
tarray = np.arange(0, tmax, tstep).astype(float)
#now fimally we created a 2D array img, which is now all zeros, but later we will add some amplitudes there
img=np.zeros((len(xarray), len(tarray)))
Explanation: 1. Setup
First, let us setup the working area.
End of explanation
plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
Explanation: Let's show our all-zero image
End of explanation
class Hyperbola:
def __init__(self, xarray, tarray, x0, t0, v=2):
        ###input parameters define a diffractor's position (x0,t0), P-wave velocity of homogeneous subsurface, and x- and t-arrays to compute traveltimes on.
###
self.x=xarray
self.x0=x0
self.t0=t0
self.v=v
#compute traveltimes
self.t=sqrt(t0**2 + (2*(xarray-x0)/v)**2)
#obtain some grid parameters
xstep=xarray[1]-xarray[0]
tbegin=tarray[0]
tend=tarray[-1]
tstep=tarray[1]-tarray[0]
#delete t's and x's for samples where t exceeds maxt
self.x=self.x[ (self.t>=tbegin) & (self.t <= tend) ]
self.t=self.t[ (self.t>=tbegin) & (self.t <= tend) ]
self.imgind=((self.x-xarray[0])/xstep).astype(int)
#compute amplitudes' fading according to geometrical spreading
self.amp = 1/(self.t/self.t0)
self.grid_resample(xarray, tarray)
def grid_resample(self, xarray, tarray):
# that's a function that computes at which 'cells' of image should we place the hyperbola
tend=tarray[-1]
tstep=tarray[1]-tarray[0]
self.xind=((self.x-xarray[0])/xstep).astype(int) #X cells numbers
self.tind=np.round((self.t-tarray[0])/tstep).astype(int) #T cells numbers
self.tind=self.tind[self.tind*tstep<=tarray[-1]] #delete T's exceeding max.T
self.tgrid=tarray[self.tind] # get 'gridded' T-values
self.coord=np.vstack((self.xind,tarray[self.tind]))
def add_to_img(self, img, wavelet):
# puts the hyperbola into the right cells of image with a given wavelet
maxind=np.size(img,1)
wavlen=np.floor(len(wavelet)/2).astype(int)
self.imgind=self.imgind[self.tind < maxind-wavlen-1]
self.amp = self.amp[self.tind < maxind-wavlen-1]
self.tind=self.tind[self.tind < maxind-wavlen-1]
ind_begin=self.tind-wavlen
for i,sample in enumerate(wavelet):
img[self.imgind,ind_begin+i]=img[self.imgind,ind_begin+i]+sample*self.amp
return img
Explanation: 2. Main class definition
What we are now going to do is create a class named Hyperbola
Each object of this class is capable of computing traveltimes to a certain subsurface point (diffractor) and plotting this point response (hyperbola) on a grid
How? to more clearly define a class? probably change to a function?
End of explanation
Hyp_test = Hyperbola(xarray, tarray, x0 = 100, t0 = 30, v = 2)
#Create a fugure and add axes to it
fgr_test1 = plt.figure(figsize=(7,5), facecolor='w')
ax_test1 = fgr_test1.add_subplot(111)
#Now plot Hyp_test's parameters: X vs T
ax_test1.plot(Hyp_test.x, Hyp_test.t, 'r', lw = 2)
#and their 'gridded' equivalents
ax_test1.plot(Hyp_test.x, Hyp_test.tgrid, ls='none', marker='o', ms=6, mfc=[0,0.5,1],mec='none')
#Some commands to add gridlines, change the directon of T axis and move x axis to top
ax_test1.set_ylim(tarray[-1],tarray[0])
ax_test1.xaxis.set_ticks_position('top')
ax_test1.grid(True, alpha = 0.1, ls='-',lw=.5)
ax_test1.set_xlabel('X, m')
ax_test1.set_ylabel('T, ms')
ax_test1.xaxis.set_label_position('top')
plt.show()
Explanation: For testing purposes, let's create an object named Hyp_test and view its parameters
End of explanation
point_diff_x0 = [100, 150, 210]
point_diff_t0 = [100, 50, 70]
plt.scatter(point_diff_x0,point_diff_t0, c='r',s=70)
plt.xlim(0, xmax)
plt.ylim(tmax, 0)
plt.gca().set_xlabel('X, m')
plt.gca().set_ylabel('T, ms')
plt.gca().xaxis.set_ticks_position('top')
plt.gca().xaxis.set_label_position('top')
plt.gca().grid(True, alpha = 0.1, ls='-',lw=.5)
Explanation: 3. Creating the model and 'forward modelling'
OK, now let's define a subsurface model. For the sake of simplicity, the model will consist of two types of objects:
1. Point diffractor in a homogeneous medium
* defined by their coordinates $(x_0, t_0)$ in data domain.
2. Plane reflecting surface
* defined by their end points $(x_1, t_1)$ and $(x_2, t_2)$, also in data domain.
We will be able to add any number of these objects to image.
Let's start by adding three point diffractors:
End of explanation
hyps=[]
for x0,t0 in zip(point_diff_x0,point_diff_t0):
hyp_i = Hyperbola(xarray, tarray, x0, t0, v=2)
hyps.append(hyp_i)
Explanation: Next step is computing traveltimes for these subsurface diffractors. This is done by creating an instance of Hyperbola class for every diffractor.
End of explanation
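The curve computed inside the Hyperbola class is the classical diffraction hyperbola; written out explicitly (this matches the expression used in __init__ above, with the factor 2 accounting for two-way time):
$$ t(x) = \sqrt{t_0^2 + \left(\frac{2\,(x - x_0)}{v}\right)^2} $$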
wav1 = np.array([-1,2,-1])
with plt.xkcd():
plt.axhline(0,c='k')
plt.stem((np.arange(len(wav1))-np.floor(len(wav1)/2)).astype(int) ,wav1)
plt.gca().set_xlim(-2*len(wav1), 2*len(wav1))
plt.gca().set_ylim(np.min(wav1)-1, np.max(wav1)+1)
for hyp_i in hyps:
hyp_i.add_to_img(img,wav1)
plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
plt.gca().xaxis.set_ticks_position('top')
plt.gca().grid(ls=':', alpha=0.25, lw=1, c='w' )
Explanation: ~~Next step is computing Green's functions for these subsurface diffractors. To do this, we need to setup a wavelet.~~
Of course, we are going to create an extremely simple wavelet.
End of explanation
class Line:
def __init__(self, xmin, xmax, tmin, tmax, xarray, tarray):
self.xmin=xmin
self.xmax=xmax
self.tmin=tmin
self.tmax=tmax
xstep=xarray[1]-xarray[0]
tstep=tarray[1]-tarray[0]
xmin=xmin-np.mod(xmin,xstep)
xmax=xmax-np.mod(xmax,xstep)
tmin=tmin-np.mod(tmin,tstep)
tmax=tmax-np.mod(tmax,tstep)
self.x = np.arange(xmin,xmax+xstep,xstep)
self.t = tmin+(tmax-tmin)*(self.x-xmin)/(xmax-xmin)
self.imgind=((self.x-xarray[0])/xstep).astype(int)
self.tind=((self.t-tarray[0])/tstep).astype(int)
def add_to_img(self, img, wavelet):
maxind=np.size(img,1)
wavlen=np.floor(len(wavelet)/2).astype(int)
self.imgind=self.imgind[self.tind < maxind-1]
self.tind=self.tind[self.tind < maxind-1]
ind_begin=self.tind-wavlen
for i,sample in enumerate(wavelet):
img[self.imgind,ind_begin+i]=img[self.imgind,ind_begin+i]+sample
return img
Explanation: Define a Line class
End of explanation
line1=Line(100,250,50,150,xarray,tarray)
img=line1.add_to_img(img, [-1,2,-1])
line2=Line(40,270,175,100,xarray,tarray)
img=line2.add_to_img(img, [-1,2,-1])
plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
plt.gca().xaxis.set_ticks_position('top')
plt.gca().grid(ls=':', alpha=0.25, lw=1, c='w' )
Explanation: Create a line and add it to image
End of explanation
def migrate(img,v,aper,xarray,tarray):
imgmig=np.zeros_like(img)
xstep=xarray[1]-xarray[0]
    # print 'starting migration'
    # print 'aperture {0}, velocity {1},'.format(aper, v)
    # print '\n xarray: from {0} to {1} with step {2},'.format(xarray[0], xarray[-1], xstep)
    # print '\n tarray: from {0} to {1} with step {2},'.format(tarray[0], tarray[-1], tarray[1]-tarray[0])
# for x0 in xarray[(xarray>xarray[0]+aper) & (xarray<xarray[-1]-aper)]:
for x0 in xarray:
for t0 in tarray[1:-1]:
# print "t0 = {0}, x0 = {1}".format(t0,x0)
xmig=xarray[(x0-aper<=xarray) & (xarray<=x0+aper)]
            # print 'xmig = from', xmig[0], ' to ', xmig[-1], ' samples ', len(xmig)
hi=Hyperbola(xmig,tarray,x0,t0,v)
            # print 'hi.x: from ', hi.x[0], ' to ', hi.x[-1]
migind_start = hi.x[0]/xstep
migind_stop = (hi.x[-1]+xstep)/xstep
hi.imgind=np.arange(migind_start, migind_stop).astype(int)
# si=np.sum(img[hi.imgind,hi.tind])
si=np.mean(img[hi.imgind,hi.tind]*hi.amp)
# si=np.mean(img[hi.imgind,hi.tind])
imgmig[(x0/xstep).astype(int),(t0/tstep).astype(int)]=si
# if ( (t0==3 and x0==10) or (t0==7 and x0==17) or (t0==11 and x0==12) ):
# if ( (t0==8 and x0==20)):
# ax_data.plot(hi.x,hi.t,c='m',lw=3,alpha=0.8)
# ax_data.plot(hi.x0,hi.t0,marker='H', mfc='r', mec='m',ms=5)
# for xi in xmig:
# ax_data.plot([xi,hi.x0],[0,hi.t0],c='#AFFF94',lw=1.5,alpha=1)
#
return imgmig
Explanation: Excellent. The image now is pretty messy, so we need to migrate it and see what we can achieve
4. Migration definition
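The migrate routine above is a plain diffraction stack: for every candidate image point $(x_0, t_0)$ it averages the input section along the corresponding diffraction curve, restricted to the aperture and weighted by the amplitude factor carried by the Hyperbola object,
$$ M(x_0, t_0) = \frac{1}{N}\sum_{|x - x_0| \le a} A(x)\, d\big(x,\ t_{x_0, t_0}(x)\big), $$
where $d$ is the input image, $t_{x_0,t_0}(x)$ the traveltime curve, $a$ the aperture and $N$ the number of traces inside it.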
End of explanation
vmig = 2
aper = 200
res = migrate(img, vmig, aper, xarray, tarray)
plt.imshow(res.T,interpolation='none',vmin=-2,vmax=2,cmap=cm.Greys, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
#plt.imshow(res.T,cmap=cm.Greys,vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
#f_migv = plt.figure()
def migshow(vmig_i, aper_i, gain_i, interp):
res_i = migrate(img, vmig_i, aper_i, xarray, tarray)
if interp:
interp_style = 'bilinear'
else:
interp_style = 'none'
plt.imshow(res_i.T,interpolation=interp_style,vmin=-gain_i,vmax=gain_i,cmap=cm.Greys, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
plt.title('Vmig = '+str(vmig_i))
plt.show()
interact(migshow, vmig_i = widgets.FloatSlider(min = 1.0,max = 3.0, step = 0.01, value=2.0,continuous_update=False,description='Migration velocity: '),
aper_i = widgets.IntSlider(min = 10,max = 500, step = 1, value=200,continuous_update=False,description='Migration aperture: '),
gain_i = widgets.FloatSlider(min = 0.0,max = 5.0, step = 0.1, value=2.0,continuous_update=False,description='Gain: '),
interp = widgets.Checkbox(value=True, description='interpolate'))
#interact(migrate, img=fixed(img), v = widgets.IntSlider(min = 1.0,max = 3.0, step = 0.1, value=2), aper=fixed(aper), xarray=fixed(xarray), tarray=fixed(tarray))
Explanation: 5. Migration application
End of explanation |
11,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Spark and Python
Let's learn how to use Spark with Python by using the pyspark library! Make sure to view the video lecture explaining Spark and RDDs before continuing on with this code.
This notebook will serve as reference code for the Big Data section of the course involving Amazon Web Services. The video will provide fuller explanations for what the code is doing.
Creating a SparkContext
First we need to create a SparkContext. We will import this from pyspark
Step1: Now create the SparkContext. A SparkContext represents the connection to a Spark cluster, and can be used to create an RDD and broadcast variables on that cluster.
Note! You can only have one SparkContext at a time the way we are running things here.
Step2: Basic Operations
We're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file.
Let's write an example text file to read, we'll use some special jupyter notebook commands for this, but feel free to use any .txt file
Step3: Creating the RDD
Now we can take in the textfile using the textFile method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on all
nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
Step4: Spark's primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs.
Actions
We have just created an RDD using the textFile method and can perform operations on this object, such as counting the rows.
RDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let's start with a few actions
Step5: Transformations
Now we can use transformations, for example the filter transformation will return a new RDD with a subset of items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) will only return elements that satisfy the condition. Let's try looking for lines that contain the word 'second'. In which case, there should only be one line that has that. | Python Code:
from pyspark import SparkContext
Explanation: Introduction to Spark and Python
Let's learn how to use Spark with Python by using the pyspark library! Make sure to view the video lecture explaining Spark and RDDs before continuing on with this code.
This notebook will serve as reference code for the Big Data section of the course involving Amazon Web Services. The video will provide fuller explanations for what the code is doing.
Creating a SparkContext
First we need to create a SparkContext. We will import this from pyspark:
End of explanation
sc = SparkContext()
Explanation: Now create the SparkContext. A SparkContext represents the connection to a Spark cluster, and can be used to create an RDD and broadcast variables on that cluster.
Note! You can only have one SparkContext at a time the way we are running things here.
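Since broadcast variables are mentioned above, here is a minimal hedged illustration (not part of the original notebook) using the sc created in the cell above; the lookup dictionary is made up purely for demonstration:
# ship a small read-only lookup table to every worker once
lookup = sc.broadcast({'spark': 1, 'python': 2})
# both the driver and the workers read it back through .value
lookup.value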
End of explanation
%%writefile example.txt
first line
second line
third line
fourth line
Explanation: Basic Operations
We're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file.
Let's write an example text file to read, we'll use some special jupyter notebook commands for this, but feel free to use any .txt file:
End of explanation
textFile = sc.textFile('example.txt')
Explanation: Creating the RDD
Now we can take in the textfile using the textFile method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on all
nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
End of explanation
textFile.count()
textFile.first()
Explanation: Spark's primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs.
Actions
We have just created an RDD using the textFile method and can perform operations on this object, such as counting the rows.
RDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let's start with a few actions:
End of explanation
secfind = textFile.filter(lambda line: 'second' in line)
# RDD
secfind
# Perform action on transformation
secfind.collect()
# Perform action on transformation
secfind.count()
Explanation: Transformations
Now we can use transformations, for example the filter transformation will return a new RDD with a subset of items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) will only return elements that satisfy the condition. Let's try looking for lines that contain the word 'second'. In which case, there should only be one line that has that.
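As an extra sketch that goes one step beyond the original notebook (standard PySpark RDD methods, shown here purely for illustration), transformations can also be chained; a classic word count over the same small text file looks roughly like this:
words = textFile.flatMap(lambda line: line.split())
word_counts = words.map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
# collect() is the action that actually triggers the computation
word_counts.collect()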
End of explanation |
11,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import needed libraries
Step1: We used the "requests" library last time in getting Twitter data (REST-ful). We are introducing the new "lxml" library for analyzing & extracting HTML elements and attributes here.
Use Requests to get HackerNews content
HackerNews is a community contributed news website with an emphasis on technology related content. Let's grab the set of articles that are at the top of the HN list.
Step2: We will now use lxml to create programmatic access to the content from HackerNews.
Analyzing HTML Content
Step3: CSS Selectors
For those of you who are web designers, you are likely very familiar with Cascading Style Sheets (CSS). Here is an example of how to use a CSS selector to find specific HTML elements
Step4: Details of how to use CSS selectors can be found in the w3 schools site
Step5: We are only interested in those "td" tags that contain an anchor link to the referred article.
Step6: So, only half of those "td" tags with "title" contain posts that we are interested in. Let's take a look at the first such post.
Step7: There is a lot of "content" in the td tag's attributes. | Python Code:
import requests
from lxml import html
Explanation: Import needed libraries
End of explanation
response = requests.get('http://news.ycombinator.com/')
response
response.content
Explanation: We used the "requests" library last time in getting Twitter data (REST-ful). We are introducing the new "lxml" library for analyzing & extracting HTML elements and attributes here.
Use Requests to get HackerNews content
HackerNews is a community contributed news website with an emphasis on technology related content. Let's grab the set of articles that are at the top of the HN list.
End of explanation
page = html.fromstring(response.content)
page
Explanation: We will now use lxml to create programmatic access to the content from HackerNews.
Analyzing HTML Content
End of explanation
posts = page.cssselect('.title')
len(posts)
Explanation: CSS Selectors
For those of you who are web designers, you are likely very familiar with Cascading Style Sheets (CSS). Here is an example of how to use a CSS selector to find specific HTML elements:
End of explanation
posts = page.xpath('//td[contains(@class, "title")]')
len(posts)
Explanation: Details of how to use CSS selectors can be found in the w3 schools site:
http://www.w3schools.com/cssref/css_selectors.asp
XPath
Alternatively, we can use a standard called "XPath" to find specific content in the HTML.
End of explanation
posts = page.xpath('//td[contains(@class, "title")]/a')
len(posts)
Explanation: We are only interested in those "td" tags that contain an anchor link to the referred article.
End of explanation
first_post = posts[0]
first_post.text
Explanation: So, only half of those "td" tags with "title" contain posts that we are interested in. Let's take a look at the first such post.
End of explanation
first_post.attrib
first_post.attrib["href"]
all_links = []
for p in posts:
all_links.append((p.text, p.attrib["href"]))
all_links
Explanation: There is a lot of "content" in the td tag's attributes.
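As a small follow-up that is not part of the original notebook, the (title, href) pairs collected above can be loaded into a pandas DataFrame for easier inspection; note that many HackerNews hrefs are relative links such as 'item?id=...':
import pandas as pd
links_df = pd.DataFrame(all_links, columns=['title', 'href'])
links_df.head()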
End of explanation |
11,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Chi-Square-Feature-Selection" data-toc-modified-id="Chi-Square-Feature-Selection-1"><span class="toc-item-num">1 </span>Chi-Square Feature Selection</a></span><ul class="toc-item"><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.1"><span class="toc-item-num">1.1 </span>Implementation</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Chi-Square Feature Selection
Feature selection is a process where you automatically select those features in your data that contribute most to the prediction variable or output in which you are interested. The benefits of performing feature selection before modeling your data are
Step2: One common feature selection method that is used with text data is the Chi-Square feature selection. The $\chi^2$ test is used in statistics to test the independence of two events. More specifically in feature selection we use it to test whether the occurrence of a specific term and the occurrence of a specific class are independent. More formally, given a document $D$, we estimate the following quantity for each term and rank them by their score
Step3: e.g. the second row of the observed array refers to the total count of the terms that belong to class 1. Then we compute the expected frequencies of each term for each class.
Step4: We can confirm our result with the scikit-learn library using the chi2 function. The following code chunk computes the chi-square value for each feature. In the returned tuple, the first element holds the chi-square scores (higher is better) and the second element holds the p-values (lower is better).
Step5: Scikit-learn provides a SelectKBest class that can be used with a suite of different statistical tests. It will rank the features with the statistical test that we've specified and select the top k performing ones (meaning that these terms is considered to be more relevant to the task at hand than the others), where k is also a number that we can tweak. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style = False)
os.chdir(path)
import numpy as np
import pandas as pd
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
from sklearn.preprocessing import LabelBinarizer
from sklearn.feature_selection import chi2, SelectKBest
from sklearn.feature_extraction.text import CountVectorizer
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Chi-Square-Feature-Selection" data-toc-modified-id="Chi-Square-Feature-Selection-1"><span class="toc-item-num">1 </span>Chi-Square Feature Selection</a></span><ul class="toc-item"><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.1"><span class="toc-item-num">1.1 </span>Implementation</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
# suppose we have the following toy text data
X = np.array(['call you tonight', 'Call me a cab', 'please call me... PLEASE!', 'he will call me'])
y = [1, 1, 2, 0]
# we'll convert it to a dense document-term matrix,
# so we can print a more readable output
vect = CountVectorizer()
X_dtm = vect.fit_transform(X)
X_dtm = X_dtm.toarray()
pd.DataFrame(X_dtm, columns = vect.get_feature_names())
Explanation: Chi-Square Feature Selection
Feature selection is a process where you automatically select those features in your data that contribute most to the prediction variable or output in which you are interested. The benefits of performing feature selection before modeling your data are:
Avoid Overfitting: Less redundant data gives performance boost to the model and results in less opportunity to make decisions based on noise
Reduces Training Time: Less data means that algorithms train faster
End of explanation
# binarize the output column,
# this makes computing the observed value a
# simple dot product
y_binarized = LabelBinarizer().fit_transform(y)
print(y_binarized)
print()
# our observed count for each class (the row)
# and each feature (the column)
observed = np.dot(y_binarized.T, X_dtm)
print(observed)
Explanation: One common feature selection method that is used with text data is the Chi-Square feature selection. The $\chi^2$ test is used in statistics to test the independence of two events. More specifically in feature selection we use it to test whether the occurrence of a specific term and the occurrence of a specific class are independent. More formally, given a document $D$, we estimate the following quantity for each term and rank them by their score:
$$
\chi^2(D, t, c) = \sum_{e_t \in \{0, 1\}} \sum_{e_c \in \{0, 1\}}
\frac{ ( N_{e_t e_c} - E_{e_t e_c} )^2 }{ E_{e_t e_c} }$$
Where
$N$ is the observed frequency in $D$ and $E$ the expected frequency
$e_t$ takes the value 1 if the document contains term $t$ and 0 otherwise
$e_c$ takes the value 1 if the document is in class $c$ and 0 otherwise
For each feature (term), a corresponding high $\chi^2$ score indicates that the null hypothesis $H_0$ of independence (meaning the document class has no influence over the term's frequency) should be rejected and the occurrence of the term and class are dependent. In this case, we should select the feature for the text classification.
Implementation
We first compute the observed count for each class. This is done by building a contingency table from an input $X$ (feature values) and $y$ (class labels). Each entry $i$, $j$ corresponds to some feature $i$ and some class $j$, and holds the sum of the $i_{th}$ feature's values across all samples belonging to the class $j$.
Note that although the feature values here are represented as frequencies, this method also works quite well in practice when the values are tf-idf values, since those are just weighted/scaled frequencies.
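To spell out the expected counts that the next code cell computes (this is only a restatement of that code): with $N_c$ the number of documents in class $c$, $N$ the total number of documents and $x_{d,t}$ the count of term $t$ in document $d$, the expected frequency under independence is
$$ E_{c,t} = \frac{N_c}{N} \sum_{d} x_{d,t}, $$
i.e. the class proportions times the total term counts, which is exactly the dot product of class_prob and feature_count below.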
End of explanation
# compute the probability of each class and the feature count;
# keep both as a 2 dimension array using reshape
class_prob = y_binarized.mean(axis = 0).reshape(1, -1)
feature_count = X_dtm.sum(axis = 0).reshape(1, -1)
expected = np.dot(class_prob.T, feature_count)
print(expected)
chisq = (observed - expected) ** 2 / expected
chisq_score = chisq.sum(axis = 0)
print(chisq_score)
Explanation: e.g. the second row of the observed array refers to the total count of the terms that belong to class 1. Then we compute the expected frequencies of each term for each class.
End of explanation
chi2score = chi2(X_dtm, y)
chi2score
Explanation: We can confirm our result with the scikit-learn library using the chi2 function. The following code chunk computes the chi-square value for each feature. In the returned tuple, the first element holds the chi-square scores (higher is better) and the second element holds the p-values (lower is better).
End of explanation
kbest = SelectKBest(score_func = chi2, k = 4)
X_dtm_kbest = kbest.fit_transform(X_dtm, y)
X_dtm_kbest
Explanation: Scikit-learn provides a SelectKBest class that can be used with a suite of different statistical tests. It will rank the features with the statistical test that we've specified and select the top k performing ones (meaning that these terms are considered to be more relevant to the task at hand than the others), where k is also a number that we can tweak.
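As a hedged extra step that is not in the original notebook, the selected columns can be mapped back to the actual terms; get_support on the fitted selector and get_feature_names on the vectorizer are the assumed helpers here (both standard scikit-learn methods at the time of writing):
# boolean mask of the k selected features
selected_mask = kbest.get_support()
selected_terms = np.array(vect.get_feature_names())[selected_mask]
print(selected_terms)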
End of explanation |
11,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
H2O Tutorial
Step1: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows
Step2: Download EEG Data
The following code downloads a copy of the EEG Eye State dataset. All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.
We can import the data directly into H2O using the import_file method in the Python API. The import path can be a URL, a local path, a path to an HDFS file, or a file on Amazon S3.
Step3: Explore Data
Once we have loaded the data, let's take a quick look. First the dimension of the frame
Step4: Now let's take a look at the top of the frame
Step5: The first 14 columns are numeric values that represent EEG measurements from the headset. The "eyeDetection" column is the response. There is an additional column called "split" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the same splits). I randomly divided the dataset into three partitions
Step6: To select a subset of the columns to look at, typical Pandas indexing applies
Step7: Now let's select a single column, for example -- the response column, and look at the data more closely
Step8: It looks like a binary response, but let's validate that assumption
Step9: If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default.
Therefore, we should convert the response column to a more efficient "enum" representation -- in this case it is a categorical variable with two levels, 0 and 1. If the only column in my data that is categorical is the response, I typically don't bother specifying the column type during the parse, and instead use this one-liner to convert it afterwards
Step10: Now we can check that there are two levels in our response column
Step11: We can query the categorical "levels" as well ('0' and '1' stand for "eye open" and "eye closed") to see what they are
Step12: We may want to check if there are any missing values, so let's look for NAs in our dataset. For tree-based methods like GBM and RF, H2O handles missing feature values automatically, so it's not a problem if we are missing certain feature values. However, it is always a good idea to check to make sure that you are not missing any of the training labels.
To figure out which, if any, values are missing, we can use the isna method on the response column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column.
Step13: The isna method doesn't directly answer the question, "Does the response column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look
Step14: Great, no missing labels.
Step15: The sum is still zero, so there are no missing values in any of the cells.
The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution
Step16: Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Let's calculate the percentage that each class represents
Step17: Split H2O Frame into a train and test set
So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts
Step18: Machine Learning in H2O
We will do a quick demo of the H2O software using a Gradient Boosting Machine (GBM). The goal of this problem is to train a model to predict eye state (open vs closed) from EEG data.
Train and Test a GBM model
Step19: We first create a model object of class, "H2OGradientBoostingEstimator". This does not actually do any training, it just sets the model up for training by specifying model parameters.
Step20: Specify the predictor set and response
The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables.
The x argument should be a list of predictor names in the training frame, and y specifies the response column. We have already set y = "eyeDetection" above, but we still need to specify x.
Step21: Now that we have specified x and y, we can train the model
Step22: Inspect Model
The type of results shown when you print a model is determined by the following
Step23: Model Performance on a Test Set
Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we just ran the model once, so our validation set (passed as validation_frame), could have also served as a "test set." We technically have already created test set predictions and evaluated test set performance.
However, when performing model selection over a variety of model parameters, it is common for users to train a variety of models (using different parameters) using the training set, train, and a validation set, valid. Once the user selects the best model (based on validation set performance), the true test of model performance is performed by making a final set of predictions on the held-out (never been used before) test set, test.
You can use the model_performance method to generate predictions on a new dataset. The results are stored in an object of class, "H2OBinomialModelMetrics".
Step24: Individual model performance metrics can be extracted using methods like r2, auc and mse. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC).
Step25: Cross-validated Performance
To perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row.
Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the nfolds argument.
When performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which is called data.
Step26: This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the auc method again, and you can specify train or xval as True to get the correct metric.
Step27: Grid Search
One way of evaluating models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over
Step28: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters
Step29: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid.
Step30: Compare Models
Step31: The "best" model in terms of validation set AUC is listed first in auc_table.
Step32: The last thing we may want to do is generate predictions on the test set using the "best" model, and evaluate the test set AUC. | Python Code:
import h2o
# Start an H2O Cluster on your local machine
h2o.init()
Explanation: H2O Tutorial: EEG Eye State Classification
Author: Erin LeDell
Contact: [email protected]
This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python.
Most of the functionality for a Pandas DataFrame is exactly the same syntax for an H2OFrame, so if you are comfortable with Pandas, data frame manipulation will come naturally to you in H2O. The modeling syntax in the H2O Python API may also remind you of scikit-learn.
References: H2O Python API documentation and H2O general documentation
Install H2O in Python
Prerequisites
This tutorial assumes you have Python 2.7 installed. The h2o Python package has a few dependencies which can be installed using pip. The packages that are required are (which also have their own dependencies):
bash
pip install requests
pip install tabulate
pip install scikit-learn
If you have any problems (for example, installing the scikit-learn package), check out this page for tips.
Install h2o
Once the dependencies are installed, you can install H2O. We will use the latest stable version of the h2o package, which is currently "Tibshirani-8." The installation instructions are on the "Install in Python" tab on this page.
```bash
The following command removes the H2O module for Python (if it already exists).
pip uninstall h2o
Next, use pip to install this version of the H2O Python module.
pip install http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/8/Python/h2o-3.6.0.8-py2.py3-none-any.whl
```
For reference, the Python documentation for the latest stable release of H2O is here.
Start up an H2O cluster
In a Python terminal, we can import the h2o package and start up an H2O cluster.
End of explanation
# This will not actually do anything since it's a fake IP address
# h2o.init(ip="123.45.67.89", port=54321)
Explanation: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows:
End of explanation
#csv_url = "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
csv_url = "https://h2o-public-test-data.s3.amazonaws.com/eeg_eyestate_splits.csv"
data = h2o.import_file(csv_url)
Explanation: Download EEG Data
The following code downloads a copy of the EEG Eye State dataset. All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.
We can import the data directly into H2O using the import_file method in the Python API. The import path can be a URL, a local path, a path to an HDFS file, or a file on Amazon S3.
End of explanation
data.shape
Explanation: Explore Data
Once we have loaded the data, let's take a quick look. First the dimension of the frame:
End of explanation
data.head()
Explanation: Now let's take a look at the top of the frame:
End of explanation
data.columns
Explanation: The first 14 columns are numeric values that represent EEG measurements from the headset. The "eyeDetection" column is the response. There is an additional column called "split" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the same splits). I randomly divided the dataset into three partitions: train (60%), valid (20%) and test (20%) and marked which split each row belongs to in the "split" column.
Let's take a look at the column names.
End of explanation
columns = ['AF3', 'eyeDetection', 'split']
data[columns].head()
Explanation: To select a subset of the columns to look at, typical Pandas indexing applies:
End of explanation
y = 'eyeDetection'
data[y]
Explanation: Now let's select a single column, for example -- the response column, and look at the data more closely:
End of explanation
data[y].unique()
Explanation: It looks like a binary response, but let's validate that assumption:
End of explanation
data[y] = data[y].asfactor()
Explanation: If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default.
Therefore, we should convert the response column to a more efficient "enum" representation -- in this case it is a categorical variable with two levels, 0 and 1. If the only column in my data that is categorical is the response, I typically don't bother specifying the column type during the parse, and instead use this one-liner to convert it afterwards:
End of explanation
data[y].nlevels()
Explanation: Now we can check that there are two levels in our response column:
End of explanation
data[y].levels()
Explanation: We can query the categorical "levels" as well ('0' and '1' stand for "eye open" and "eye closed") to see what they are:
End of explanation
data.isna()
data[y].isna()
Explanation: We may want to check if there are any missing values, so let's look for NAs in our dataset. For tree-based methods like GBM and RF, H2O handles missing feature values automatically, so it's not a problem if we are missing certain feature values. However, it is always a good idea to check to make sure that you are not missing any of the training labels.
To figure out which, if any, values are missing, we can use the isna method on the response column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column.
End of explanation
data[y].isna().sum()
Explanation: The isna method doesn't directly answer the question, "Does the response column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look:
End of explanation
data.isna().sum()
Explanation: Great, no missing labels. :-)
Out of curiosity, let's see if there is any missing data in this frame:
End of explanation
data[y].table()
Explanation: The sum is still zero, so there are no missing values in any of the cells.
The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution:
End of explanation
n = data.shape[0] # Total number of training samples
data[y].table()['Count']/n
Explanation: Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Let's calculate the percentage that each class represents:
End of explanation
train = data[data['split']=="train"]
train.shape
valid = data[data['split']=="valid"]
valid.shape
test = data[data['split']=="test"]
test.shape
Explanation: Split H2O Frame into a train and test set
So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, validation set and a test set.
If you want H2O to do the splitting for you, you can use the split_frame method. However, we have explicit splits that we want (for reproducibility reasons), so we can just subset the Frame to get the partitions we want.
Subset the data H2O Frame on the "split" column:
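Purely for illustration (not part of the original tutorial, and the exact signature may vary across h2o versions), the split_frame alternative mentioned above would look roughly like this; it produces random rather than fixed splits:
train2, valid2, test2 = data.split_frame(ratios=[0.6, 0.2], seed=1)  # 60/20/20 random split
train2.shape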
End of explanation
# Import H2O GBM:
from h2o.estimators.gbm import H2OGradientBoostingEstimator
Explanation: Machine Learning in H2O
We will do a quick demo of the H2O software using a Gradient Boosting Machine (GBM). The goal of this problem is to train a model to predict eye state (open vs closed) from EEG data.
Train and Test a GBM model
End of explanation
model = H2OGradientBoostingEstimator(distribution='bernoulli',
ntrees=100,
max_depth=4,
learn_rate=0.1)
Explanation: We first create a model object of class, "H2OGradientBoostingEstimator". This does not actually do any training, it just sets the model up for training by specifying model parameters.
End of explanation
x = list(train.columns)
x
# Remove the response ('eyeDetection') and the 'split' indicator from the predictor list by name
x.remove(y)
x.remove('split')
x
Explanation: Specify the predictor set and response
The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables.
The x argument should be a list of predictor names in the training frame, and y specifies the response column. We have already set y = "eyeDetection" above, but we still need to specify x.
End of explanation
model.train(x=x, y=y, training_frame=train, validation_frame=valid)
Explanation: Now that we have specified x and y, we can train the model:
End of explanation
print(model)
Explanation: Inspect Model
The type of results shown when you print a model is determined by the following:
- Model class of the estimator (e.g. GBM, RF, GLM, DL)
- The type of machine learning problem (e.g. binary classification, multiclass classification, regression)
- The data you specify (e.g. training_frame only, training_frame and validation_frame, or training_frame and nfolds)
Below, we see a GBM Model Summary, as well as training and validation metrics since we supplied a validation_frame. Since this is a binary classification task, we are shown the relevant performance metrics, which include: MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score.
The scoring history is also printed, which shows the performance metrics over some increment such as "number of trees" in the case of GBM and RF.
Lastly, for tree-based methods (GBM and RF), we also print variable importance.
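One small addition that is not in the original tutorial: if your version of h2o exposes it, the variable importance table mentioned above can also be retrieved directly instead of being read off the printout; varimp is the assumed accessor here:
# returns the variable importance table (variable, relative/scaled importance, percentage)
model.varimp()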
End of explanation
perf = model.model_performance(test)
print(perf.__class__)
Explanation: Model Performance on a Test Set
Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we just ran the model once, so our validation set (passed as validation_frame), could have also served as a "test set." We technically have already created test set predictions and evaluated test set performance.
However, when performing model selection over a variety of model parameters, it is common for users to train a variety of models (using different parameters) using the training set, train, and a validation set, valid. Once the user selects the best model (based on validation set performance), the true test of model performance is performed by making a final set of predictions on the held-out (never been used before) test set, test.
You can use the model_performance method to generate predictions on a new dataset. The results are stored in an object of class, "H2OBinomialModelMetrics".
End of explanation
perf.r2()
perf.auc()
perf.mse()
Explanation: Individual model performance metrics can be extracted using methods like r2, auc and mse. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC).
End of explanation
cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli',
ntrees=100,
max_depth=4,
learn_rate=0.1,
nfolds=5)
cvmodel.train(x=x, y=y, training_frame=data)
Explanation: Cross-validated Performance
To perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row.
Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the nfolds argument.
When performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which is called data.
End of explanation
print(cvmodel.auc(train=True))
print(cvmodel.auc(xval=True))
Explanation: This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the auc method again, and you can specify train or xval as True to get the correct metric.
End of explanation
ntrees_opt = [5,50,100]
max_depth_opt = [2,3,5]
learn_rate_opt = [0.1,0.2]
hyper_params = {'ntrees': ntrees_opt,
'max_depth': max_depth_opt,
'learn_rate': learn_rate_opt}
Explanation: Grid Search
One way of evaluating models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over:
- ntrees: Number of trees
- max_depth: Maximum depth of a tree
- learn_rate: Learning rate in the GBM
We will define a grid as follows:
End of explanation
from h2o.grid.grid_search import H2OGridSearch
gs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params = hyper_params)
Explanation: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters:
End of explanation
gs.train(x=x, y=y, training_frame=train, validation_frame=valid)
Explanation: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid.
End of explanation
print(gs)
# print out the auc for all of the models
auc_table = gs.sort_by('auc(valid=True)',increasing=False)
print(auc_table)
Explanation: Compare Models
End of explanation
best_model = h2o.get_model(auc_table['Model Id'][0])
best_model.auc()
Explanation: The "best" model in terms of validation set AUC is listed first in auc_table.
End of explanation
best_perf = best_model.model_performance(test)
best_perf.auc()
Explanation: The last thing we may want to do is generate predictions on the test set using the "best" model, and evaluate the test set AUC.
End of explanation |
11,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generalized Linear Models
Step1: GLM
Step2: Load the data and add a constant to the exogenous (independent) variables
Step3: The dependent variable is N by 2 (Success
Step4: The independent variables include all the other variables described above, as
well as the interaction terms
Step5: Fit and summary
Step6: Quantities of interest
Step7: First differences
Step8: The interquartile first difference for the percentage of low income households in a school district is
Step9: Plots
We extract information that will be used to draw some interesting plots
Step10: Plot yhat vs y
Step11: Plot yhat vs. Pearson residuals
Step12: Histogram of standardized deviance residuals
Step13: QQ Plot of Deviance Residuals
Step14: GLM
Step15: Load the data and add a constant to the exogenous variables
Step16: Model Fit and summary
Step17: GLM
Step18: Fit and summary (artificial data) | Python Code:
%matplotlib inline
import numpy as np
import statsmodels.api as sm
from scipy import stats
from matplotlib import pyplot as plt
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
Explanation: Generalized Linear Models
End of explanation
print(sm.datasets.star98.NOTE)
Explanation: GLM: Binomial response data
Load Star98 data
In this example, we use the Star98 dataset which was taken with permission
from Jeff Gill (2000) Generalized linear models: A unified approach. Codebook
information can be obtained by typing:
End of explanation
data = sm.datasets.star98.load()
data.exog = sm.add_constant(data.exog, prepend=False)
Explanation: Load the data and add a constant to the exogenous (independent) variables:
End of explanation
print(data.endog.head())
Explanation: The dependent variable is N by 2 (Success: NABOVE, Failure: NBELOW):
End of explanation
print(data.exog.head())
Explanation: The independent variables include all the other variables described above, as
well as the interaction terms:
End of explanation
glm_binom = sm.GLM(data.endog, data.exog, family=sm.families.Binomial())
res = glm_binom.fit()
print(res.summary())
Explanation: Fit and summary
End of explanation
print('Total number of trials:', data.endog.iloc[:, 0].sum())
print('Parameters: ', res.params)
print('T-values: ', res.tvalues)
Explanation: Quantities of interest
End of explanation
means = data.exog.mean(axis=0)
means25 = means.copy()
means25.iloc[0] = stats.scoreatpercentile(data.exog.iloc[:,0], 25)
means75 = means.copy()
means75.iloc[0] = lowinc_75per = stats.scoreatpercentile(data.exog.iloc[:,0], 75)
resp_25 = res.predict(means25)
resp_75 = res.predict(means75)
diff = resp_75 - resp_25
Explanation: First differences: We hold all explanatory variables constant at their means and manipulate the percentage of low income households to assess its impact on the response variables:
End of explanation
print("%2.4f%%" % (diff*100))
Explanation: The interquartile first difference for the percentage of low income households in a school district is:
End of explanation
nobs = res.nobs
y = data.endog.iloc[:,0]/data.endog.sum(1)
yhat = res.mu
Explanation: Plots
We extract information that will be used to draw some interesting plots:
End of explanation
from statsmodels.graphics.api import abline_plot
fig, ax = plt.subplots()
ax.scatter(yhat, y)
line_fit = sm.OLS(y, sm.add_constant(yhat, prepend=True)).fit()
abline_plot(model_results=line_fit, ax=ax)
ax.set_title('Model Fit Plot')
ax.set_ylabel('Observed values')
ax.set_xlabel('Fitted values');
Explanation: Plot yhat vs y:
End of explanation
fig, ax = plt.subplots()
ax.scatter(yhat, res.resid_pearson)
ax.hlines(0, 0, 1)
ax.set_xlim(0, 1)
ax.set_title('Residual Dependence Plot')
ax.set_ylabel('Pearson Residuals')
ax.set_xlabel('Fitted values')
Explanation: Plot yhat vs. Pearson residuals:
End of explanation
from scipy import stats
fig, ax = plt.subplots()
resid = res.resid_deviance.copy()
resid_std = stats.zscore(resid)
ax.hist(resid_std, bins=25)
ax.set_title('Histogram of standardized deviance residuals');
Explanation: Histogram of standardized deviance residuals:
End of explanation
from statsmodels import graphics
graphics.gofplots.qqplot(resid, line='r')
Explanation: QQ Plot of Deviance Residuals:
End of explanation
print(sm.datasets.scotland.DESCRLONG)
Explanation: GLM: Gamma for proportional count response
Load Scottish Parliament Voting data
In the example above, we printed the NOTE attribute to learn about the
Star98 dataset. statsmodels datasets ships with other useful information. For
example:
End of explanation
data2 = sm.datasets.scotland.load()
data2.exog = sm.add_constant(data2.exog, prepend=False)
print(data2.exog.head())
print(data2.endog.head())
Explanation: Load the data and add a constant to the exogenous variables:
End of explanation
glm_gamma = sm.GLM(data2.endog, data2.exog, family=sm.families.Gamma(sm.families.links.log()))
glm_results = glm_gamma.fit()
print(glm_results.summary())
Explanation: Model Fit and summary
End of explanation
nobs2 = 100
x = np.arange(nobs2)
np.random.seed(54321)
X = np.column_stack((x,x**2))
X = sm.add_constant(X, prepend=False)
lny = np.exp(-(.03*x + .0001*x**2 - 1.0)) + .001 * np.random.rand(nobs2)
Explanation: GLM: Gaussian distribution with a noncanonical link
Artificial data
End of explanation
gauss_log = sm.GLM(lny, X, family=sm.families.Gaussian(sm.families.links.log()))
gauss_log_results = gauss_log.fit()
print(gauss_log_results.summary())
Explanation: Fit and summary (artificial data)
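As an extra check that is not part of the original example, the fitted mean of the log-link Gaussian GLM can be plotted against the noisy data; fittedvalues is the standard results attribute assumed here:
fig, ax = plt.subplots()
ax.plot(x, lny, 'o', label='observed')
ax.plot(x, gauss_log_results.fittedvalues, '-', label='fitted (log link)')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend();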
End of explanation |
11,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Volshow
A simple volume rendering example
Using the pylab API
Step1: Visualizing a scan of a male head
Included in ipyvolume, is a visualuzation of a scan of a human head, see the sourcecode for more details. | Python Code:
import numpy as np
import ipyvolume as ipv
V = np.zeros((128,128,128)) # our 3d array
# outer box
V[30:-30,30:-30,30:-30] = 0.75
V[35:-35,35:-35,35:-35] = 0.0
# inner box
V[50:-50,50:-50,50:-50] = 0.25
V[55:-55,55:-55,55:-55] = 0.0
ipv.figure()
ipv.volshow(V, level=[0.25, 0.75], opacity=0.03, level_width=0.1, data_min=0, data_max=1)
ipv.view(-30, 40)
ipv.show()
Explanation: Volshow
A simple volume rendering example
Using the pylab API
End of explanation
import ipyvolume as ipv
fig = ipv.figure()
vol_head = ipv.examples.head(max_shape=128);
vol_head.ray_steps = 400
ipv.view(90, 0)
Explanation: Visualizing a scan of a male head
Included in ipyvolume is a visualization of a scan of a human head; see the source code for more details.
End of explanation |
11,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compare the fit of a mixture model where the null distribution is estimated either with or without a pre-threshold
In this notebook, I made a first attempt to see if we can apply the threshold-free peak distribution in our method to estimate the alternative distribution on a simulated dataset.
Import packages and set working directory
Step1: Define peak density function
Step2: Simulate and export data from 10 subjects
Step3: Perform group analysis and extract peaks from Tstat-map
Step4: Plot observed distribution of peaks with theoretical distribution (under H_0)
Step5: Compute p-values based on theoretical distribution (by numerical integration)
Step6: Compute proportion of activation based on BUM model
Step7: Plot histogram of p-values with expected distribution (beta and uniform)
Step8: Apply power procedure WITH threshold
Step10: Adjust power procedure without threshold
Step11: Figures for JSM
Step12: $P(T>t | H_0, t>u)$ | Python Code:
import matplotlib
% matplotlib inline
import numpy as np
import scipy
import scipy.stats as stats
import scipy.optimize as optimize
import scipy.integrate as integrate
from __future__ import print_function, division
import BUM
import neuropower
import os
import math
from nipy.labs.utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset
from nipype.interfaces import fsl
import nibabel as nib
import matplotlib.pyplot as plt
import pandas as pd
from palettable.colorbrewer.qualitative import Paired_12
import scipy.stats as stats
os.chdir("/Users/Joke/Documents/Onderzoek/Studie_7_neuropower_improved/WORKDIR/")
Explanation: Compare the fit of a mixture model where the null distribution is estimated either with or without a pre-threshold
In this notebook, I made a first attempt to see if we can apply the threshold-free peak distribution in our method to estimate the alternative distribution on a simulated dataset.
Import packages and set working directory
End of explanation
def peakdens1D(x,k):
f1 = (3-k**2)**0.5/(6*math.pi)**0.5*np.exp(-3*x**2/(2*(3-k**2)))
f2 = 2*k*x*math.pi**0.5/6**0.5*stats.norm.pdf(x)*stats.norm.cdf(k*x/(3-k**2)**0.5)
out = f1*f2
return out
def peakdens2D(x,k):
f1 = 3**0.5*k**2*(x**2-1)*stats.norm.pdf(x)*stats.norm.cdf(k*x/(2-k**2)**0.5)
f2 = k*x*(3*(2-k**2))**0.5/(2*math.pi) * np.exp(-x**2/(2-k**2))
f31 = 6**0.5/(math.pi*(3-k**2))**0.5*np.exp(-3*x**2/(2*(3-k**2)))
f32 = stats.norm.cdf(k*x/((3-k**2)*(2-k**2))**0.5)
out = f1+f2+f31*f32
return out
def peakdens3D(x,k):
fd1 = 144*stats.norm.pdf(x)/(29*6**(0.5)-36)
fd211 = k**2.*((1.-k**2.)**3. + 6.*(1.-k**2.)**2. + 12.*(1.-k**2.)+24.)*x**2. / (4.*(3.-k**2.)**2.)
fd212 = (2.*(1.-k**2.)**3. + 3.*(1.-k**2.)**2.+6.*(1.-k**2.)) / (4.*(3.-k**2.))
fd213 = 3./2.
fd21 = (fd211 + fd212 + fd213)
fd22 = np.exp(-k**2.*x**2./(2.*(3.-k**2.))) / (2.*(3.-k**2.))**(0.5)
fd23 = stats.norm.cdf(2.*k*x / ((3.-k**2.)*(5.-3.*k**2.))**(0.5))
fd2 = fd21*fd22*fd23
fd31 = (k**2.*(2.-k**2.))/4.*x**2. - k**2.*(1.-k**2.)/2. - 1.
fd32 = np.exp(-k**2.*x**2./(2.*(2.-k**2.))) / (2.*(2.-k**2.))**(0.5)
fd33 = stats.norm.cdf(k*x / ((2.-k**2.)*(5.-3.*k**2.))**(0.5))
fd3 = fd31 * fd32 * fd33
fd41 = (7.-k**2.) + (1-k**2)*(3.*(1.-k**2.)**2. + 12.*(1.-k**2.) + 28.)/(2.*(3.-k**2.))
fd42 = k*x / (4.*math.pi**(0.5)*(3.-k**2.)*(5.-3.*k**2)**0.5)
fd43 = np.exp(-3.*k**2.*x**2/(2.*(5-3.*k**2.)))
fd4 = fd41*fd42 * fd43
fd51 = math.pi**0.5*k**3./4.*x*(x**2.-3.)
f521low = np.array([-10.,-10.])
f521up = np.array([0.,k*x/2.**(0.5)])
f521mu = np.array([0.,0.])
f521sigma = np.array([[3./2., -1.],[-1.,(3.-k**2.)/2.]])
fd521,i = stats.mvn.mvnun(f521low,f521up,f521mu,f521sigma)
f522low = np.array([-10.,-10.])
f522up = np.array([0.,k*x/2.**(0.5)])
f522mu = np.array([0.,0.])
f522sigma = np.array([[3./2., -1./2.],[-1./2.,(2.-k**2.)/2.]])
fd522,i = stats.mvn.mvnun(f522low,f522up,f522mu,f522sigma)
fd5 = fd51*(fd521+fd522)
out = fd1*(fd2+fd3+fd4+fd5)
return out
Explanation: Define peak density function
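A quick sanity check (my addition, not in the original notebook): since peakdens3D is used later as a probability density for peak heights, numerically integrating it over a wide range should return a value close to 1 for kappa = 1:
# should be approximately 1.0
integrate.quad(lambda x: peakdens3D(x, 1), -10, 10)[0]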
End of explanation
smooth_FWHM = 3
smooth_sigma = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
dimensions = (50,50,50)
positions = np.array([[60,40,40],
[40,80,40],
[50,30,60]])
amplitudes = np.array([1.,1.,1.])
width = 5.
seed=123
mask = nib.load("mask.nii")
nsub=10
noise = surrogate_3d_dataset(n_subj=nsub, shape=dimensions, mask=mask,
sk=smooth_sigma,noise_level=1.0,
width=5.0,out_text_file=None,
out_image_file=None, seed=seed)
signal = surrogate_3d_dataset(n_subj=nsub, shape=dimensions, mask=mask,
sk=smooth_sigma,noise_level=0.0, pos=positions,
ampli=amplitudes, width=10.0,out_text_file=None,
out_image_file=None, seed=seed)
low_values_indices = signal < 0.1
signal[low_values_indices] = 0
high_values_indices = signal > 0
signal[high_values_indices] = 1
data = noise+signal
fig,axs=plt.subplots(1,3,figsize=(13,3))
fig.subplots_adjust(hspace = .5, wspace=0.3)
axs=axs.ravel()
axs[0].imshow(noise[1,:,:,40])
axs[1].imshow(signal[1,:,:,40])
axs[2].imshow(data[1,:,:,40])
fig.show()
data = data.transpose((1,2,3,0))
img=nib.Nifti1Image(data,np.eye(4))
img.to_filename(os.path.join("simulated_dataset.nii.gz"))
Explanation: Simulate and export data from 10 subjects
End of explanation
model=fsl.L2Model(num_copes=nsub)
model.run()
flameo=fsl.FLAMEO(cope_file='simulated_dataset.nii.gz',
cov_split_file='design.grp',
design_file='design.mat',
t_con_file='design.con',
mask_file='mask.nii',
run_mode='ols',
terminal_output='none')
flameo.run()
from StringIO import StringIO # This is for reading a string into a pandas df
import tempfile
import shutil
tstat = nib.load("stats/tstat1.nii.gz").get_data()
minimum = np.nanmin(tstat)
newdata = tstat - minimum #little trick because fsl.model.Cluster ignores negative values
img=nib.Nifti1Image(newdata,np.eye(4))
img.to_filename(os.path.join("tstat1_allpositive.nii.gz"))
input_file = os.path.join("tstat1_allpositive.nii.gz")
# 0) Creating a temporary directory for the temporary file to save the local cluster file
tmppath = tempfile.mkdtemp()
# 1) Running the command and saving output to screen into df
cmd = "cluster -i %s --thresh=0 --num=10000 --olmax=%s/locmax.txt --connectivity=26" %(input_file,tmppath)
output = StringIO(os.popen(cmd).read()) #Joke - If you need the output for the max stuffs, you can get it in this variable,
# you can read it into a pandas data frame
df = pd.DataFrame.from_csv(output, sep="\t", parse_dates=False)
df
# 2) Now let's read in the temporary file, and delete the directory and everything in it
peaks = pd.read_csv("%s/locmax.txt" %tmppath,sep="\t").drop('Unnamed: 5',1)
peaks.Value = peaks.Value + minimum
shutil.rmtree(tmppath)
peaks[:5]
Explanation: Perform group analysis and extract peaks from Tstat-map
End of explanation
xn = np.arange(-10,10,0.01)
yn = []
for x in xn:
yn.append(peakdens3D(x,1))
twocol = Paired_12.mpl_colors
plt.figure(figsize=(7,5))
plt.hist(peaks.Value,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(-5,10,0.3),label="observed distribution")
plt.xlim([-1,10])
plt.ylim([0,0.6])
plt.plot(xn,yn,color=twocol[1],lw=3,label="theoretical distribution under H_0")
plt.title("histogram")
plt.xlabel("peak height")
plt.ylabel("density")
plt.legend(loc="upper left",frameon=False)
plt.show()
Explanation: Plot observed distribution of peaks with theoretical distribution (under H_0)
End of explanation
y = []
for x in peaks.Value:
y.append(1-integrate.quad(lambda x: peakdens3D(x,1), -20, x)[0])
ynew = [10**(-6) if x<10**(-6) else x for x in y]
peaks.P = ynew
Explanation: Compute p-values based on theoretical distribution (by numerical integration)
End of explanation
bum = BUM.bumOptim(peaks.P,starts=100)
bum["pi1"]
Explanation: Compute proportion of activation based on BUM model
End of explanation
twocol = Paired_12.mpl_colors
plt.figure(figsize=(7,5))
plt.hist(peaks.P,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(0,1,0.1),label="observed distribution")
plt.hlines(1-bum["pi1"],0,1,color=twocol[1],lw=3,label="null part of distribution")
plt.plot(xn,stats.beta.pdf(xn,bum["a"],1)+1-bum["pi1"],color=twocol[3],lw=3,label="alternative part of distribution")
plt.xlim([0,1])
plt.ylim([0,4])
plt.title("histogram")
plt.xlabel("peak height")
plt.ylabel("density")
plt.legend(loc="upper right",frameon=False)
plt.show()
Explanation: Plot histogram of p-values with expected distribution (beta and uniform)
End of explanation
powerthres = neuropower.peakmixmodfit(peaks.Value[peaks.Value>3],bum["pi1"],3)
print(powerthres["mu"])
print(powerthres["sigma"])
twocol = Paired_12.mpl_colors
plt.figure(figsize=(7,5))
plt.hist(peaks.Value[peaks.Value>3],lw=0,facecolor=twocol[0],normed=True,bins=np.arange(3,10,0.3),label="observed distribution")
plt.xlim([3,10])
plt.ylim([0,1])
plt.plot(xn,neuropower.nulprobdens(3,xn)*(1-bum["pi1"]),color=twocol[3],lw=3,label="null distribution")
plt.plot(xn,neuropower.altprobdens(powerthres["mu"],powerthres["sigma"],3,xn)*(bum["pi1"]),color=twocol[5],lw=3, label="alternative distribution")
plt.plot(xn,neuropower.mixprobdens(powerthres["mu"],powerthres["sigma"],bum["pi1"],3,xn),color=twocol[1],lw=3,label="total distribution")
plt.title("histogram")
plt.xlabel("peak height")
plt.ylabel("density")
plt.legend(loc="upper right",frameon=False)
plt.show()
Explanation: Apply power procedure WITH threshold
End of explanation
def altprobdens(mu,sigma,peaks):
out = scipy.stats.norm(mu,sigma).pdf(peaks)
return out
def mixprobdens(mu,sigma,pi1,peaks):
f0=[(1-pi1)*peakdens3D(p,1) for p in peaks]
fa=[pi1*altprobdens(mu,sigma,p) for p in peaks]
f=[x + y for x, y in zip(f0, fa)]
return(f)
def mixprobdensSLL(pars,pi1,peaks):
mu=pars[0]
sigma=pars[1]
f = mixprobdens(mu,sigma,pi1,peaks)
LL = -sum(np.log(f))
return(LL)
def nothrespeakmixmodfit(peaks,pi1):
    '''Searches the maximum likelihood estimator for the mixture distribution of null and alternative.'''
start = [5,0.5]
opt = scipy.optimize.minimize(mixprobdensSLL,start,method='L-BFGS-B',args=(pi1,peaks),bounds=((2.5,50),(0.1,50)))
out={'maxloglikelihood': opt.fun,
'mu': opt.x[0],
'sigma': opt.x[1]}
return out
modelfit = nothrespeakmixmodfit(peaks.Value,bum["pi1"])
twocol = Paired_12.mpl_colors
plt.figure(figsize=(7,5))
plt.hist(peaks.Value,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(-2,10,0.3),label="observed distribution")
plt.xlim([-2,10])
plt.ylim([0,0.5])
plt.plot(xn,[(1-bum["pi1"])*peakdens3D(p,1) for p in xn],color=twocol[3],lw=3,label="null distribution")
plt.plot(xn,bum["pi1"]*altprobdens(modelfit["mu"],modelfit["sigma"],xn),color=twocol[5],lw=3,label="alternative distribution")
plt.plot(xn,mixprobdens(modelfit["mu"],modelfit["sigma"],bum["pi1"],xn),color=twocol[1],lw=3,label="fitted distribution")
plt.title("histogram")
plt.xlabel("peak height")
plt.ylabel("density")
plt.legend(loc="upper right",frameon=False)
plt.show()
Explanation: Adjust power procedure without threshold
End of explanation
xn = np.arange(-10,10,0.01)
newcol = ["#8C1515","#4D4F53","#000000","#B3995D"]
plt.figure(figsize=(5,3))
plt.xlim([1.7,7.8])
plt.ylim([0,2])
k = -1
for u in range(2,6):
k = k+1
print(k)
plt.plot(xn,neuropower.nulprobdens(u,xn),color=newcol[k],lw=3,label="u=%s" %(u))
plt.vlines(u,0,2,color=newcol[k],lw=1,linestyle="--")
plt.legend(loc="upper right",frameon=False)
plt.show()
Explanation: Figures for JSM
End of explanation
plt.figure(figsize=(5,3))
plt.hlines(1-0.30,0,1,color=newcol[1],lw=3,label="null distribution")
plt.plot(xn,stats.beta.pdf(xn,0.2,1)+1-0.3,color=newcol[0],lw=3,label="alternative distribution")
plt.xlim([0,1])
plt.ylim([0,4])
plt.title("")
plt.xlabel("")
plt.ylabel("")
plt.legend(loc="upper right",frameon=False)
plt.show()
plt.figure(figsize=(5,3))
plt.xlim([2,6])
plt.ylim([0,1])
plt.plot(xn,neuropower.nulprobdens(2,xn)*0.3,color=newcol[3],lw=3,label="null distribution")
plt.plot(xn,neuropower.altprobdens(3,1,2,xn)*0.7,color=newcol[1],lw=3, label="alternative distribution")
plt.plot(xn,neuropower.mixprobdens(3,1,0.7,2,xn),color=newcol[0],lw=3,label="total distribution")
plt.title("")
plt.xlabel("")
plt.ylabel("")
plt.legend(loc="upper right",frameon=False)
plt.show()
y1 = []
ran = range(10,51)
for n in ran:
delta = 3/10**0.5
new = delta*n**0.5
y1.append(1-neuropower.altcumdens(new,1,2,4))
plt.figure(figsize=(5,3))
plt.plot(ran,y1,color=newcol[0],lw=3)
plt.xlim([10,np.max(ran)])
plt.ylim([0,1])
plt.title("")
plt.xlabel("")
plt.ylabel("")
plt.show()
Explanation: $P(T>t | H_0, t>u)$
End of explanation |
11,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
'Grouped' k-fold CV
A quick demo by Matt
In cross-validating, we'd like to drop out one well at a time. LeaveOneGroupOut is good for this
Step1: Isolate X and y
Step2: We want the well names to use as groups in the k-fold analysis, so we'll get those too
Step3: Now we train as normal, but LeaveOneGroupOut gives us the appropriate indices from X and y to test against one well at a time | Python Code:
import pandas as pd
training_data = pd.read_csv('../training_data.csv')
Explanation: 'Grouped' k-fold CV
A quick demo by Matt
In cross-validating, we'd like to drop out one well at a time. LeaveOneGroupOut is good for this:
End of explanation
X = training_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values
y = training_data['Facies'].values
Explanation: Isolate X and y:
End of explanation
wells = training_data["Well Name"].values
Explanation: We want the well names to use as groups in the k-fold analysis, so we'll get those too:
End of explanation
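A quick added sanity check: LeaveOneGroupOut produces one fold per distinct group, so the number of splits should equal the number of unique well names.
import numpy as np

# One held-out fold per distinct well name
print(np.unique(wells))
print(len(np.unique(wells)))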
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
logo = LeaveOneGroupOut()
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
score = SVC().fit(X[train], y[train]).score(X[test], y[test])
print("{:>20s} {:.3f}".format(well_name, score))
Explanation: Now we train as normal, but LeaveOneGroupOut gives us the appropriate indices from X and y to test against one well at a time:
End of explanation |
11,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
01
Step1: Having matplotlib play nice with virtual environments
The matplotlib library has some issues when you're using a Python 3 virtual environment. The error looks like this
Step2: Read the csv in (it should be UTF-8 already so you don't have to worry about encoding), save it with the proper boring name
Step3: Display the first 3 animals.
Step4: 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
Step5: Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
Step6: Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
Step7: What's the mean length of a cat?
Step8: What's the mean length of a dog?
Step9: Use groupby to accomplish both of the above tasks at once.
Step10: Make a histogram of the length of dogs. I apologize that it is so boring.
Step11: Change your graphing style to be something else (anything else!)
Step12: Make a horizontal bar graph of the length of the animals, with their name as the label
Step13: Make a sorted horizontal bar graph of the cats, with the larger cats on top. | Python Code:
# !workon dataanalysis
import pandas as pd
Explanation: 01: Building a pandas Cheat Sheet, Part 1
Use the csv I've attached to answer the following questions
Import pandas with the right name
End of explanation
import matplotlib.pyplot as plt
#DISPLAY MATPLOTLIB INLINE WITH THE NOTEBOOK AS OPPOSED TO POP UP WINDOW
%matplotlib inline
Explanation: Having matplotlib play nice with virtual environments
The matplotlib library has some issues when you're using a Python 3 virtual environment. The error looks like this:
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are Working with Matplotlib in a virtual environment see "Working with Matplotlib in Virtual environments" in the Matplotlib FAQ
Luckily it's an easy fix.
mkdir -p ~/.matplotlib && echo 'backend: TkAgg' >> ~/.matplotlib/matplotlibrc (ADD THIS LINE TO TERMINAL)
This adds a line to the matplotlib startup script to set the backend to TkAgg, whatever that means.
Set all graphics from matplotlib to display inline
End of explanation
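An alternative (added here as a hedged suggestion, not tested in this notebook) is to pick the backend from Python before pyplot is imported, instead of editing matplotlibrc:
# import matplotlib
# matplotlib.use('TkAgg')  # must run before importing matplotlib.pyplot
# import matplotlib.pyplot as plt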
df = pd.read_csv('07-hw-animals.csv')
df
# Display the names of the columns in the csv
df.columns
Explanation: Read the csv in (it should be UTF-8 already so you don't have to worry about encoding), save it with the proper boring name
End of explanation
df.head(3)
# Sort the animals to see the 3 longest animals.
df.sort_values('length', ascending = False).head(3)
# What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
# Only select the dogs.
df['animal'].value_counts()
df[df['animal'] == 'dog']
# Display all of the animals that are greater than 40 cm.
df[df['length'] > 40]
Explanation: Display the first 3 animals.
End of explanation
length_in = df['length']* 0.3937
df['length (in.)'] = length_in
Explanation: 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
End of explanation
dogs = df[df['animal'] == 'dog']
cats = df[df['animal'] == 'cat']
Explanation: Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
End of explanation
cats['length'] > 12
df[(df['length'] > 12) & (df['animal'] == 'cat')]
Explanation: Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
End of explanation
# cats.describe() displays all stats for length
cats['length'].mean()
#only shows mean length
cats.mean()
Explanation: What's the mean length of a cat?
End of explanation
dogs['length'].mean()
dogs['length'].describe()
dogs.mean()
Explanation: What's the mean length of a dog?
End of explanation
df.groupby('animal')['length (in.)'].mean()
Explanation: Use groupby to accomplish both of the above tasks at once.
End of explanation
dogs.plot(kind='hist', y = 'length (in.)') # all the same length "/
Explanation: Make a histogram of the length of dogs. I apologize that it is so boring.
End of explanation
df.plot(kind="bar", x="name", y="length", color = "red", legend =False)
df.plot(kind="barh", x="name", y="length", color = "red", legend =False)
dogs
dogs.plot(kind='bar')
# dogs.plot(kind='scatter', x='name', y='length (in.)')
Explanation: Change your graphing style to be something else (anything else!)
End of explanation
df.columns
dogs['name']
dogs.plot(kind='bar', x='name', y = 'length', legend=False)
Explanation: Make a horizontal bar graph of the length of the animals, with their name as the label
End of explanation
cats.sort_values('length').plot(kind='barh', x='name', y = 'length', legend = False)
Explanation: Make a sorted horizontal bar graph of the cats, with the larger cats on top.
End of explanation |
11,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http
Step1: Install BAZEL with Bazelisk
Step2: Build .aar files | Python Code:
# Create folders
!mkdir -p '/android/sdk'
# Download and move android SDK tools to specific folders
!wget -q 'https://dl.google.com/android/repository/tools_r25.2.5-linux.zip'
!unzip 'tools_r25.2.5-linux.zip'
!mv '/content/tools' '/android/sdk'
# Copy paste the folder
!cp -r /android/sdk/tools /android/android-sdk-linux
# Download NDK, unzip and move contents
!wget 'https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip'
!unzip 'android-ndk-r19c-linux-x86_64.zip'
!mv /content/android-ndk-r19c /content/ndk
!mv '/content/ndk' '/android'
# Copy paste the folder
!cp -r /android/ndk /android/android-ndk-r19c
# Remove .zip files
!rm 'tools_r25.2.5-linux.zip'
!rm 'android-ndk-r19c-linux-x86_64.zip'
# Make android ndk executable to all users
!chmod -R go=u '/android'
# Set and view environment variables
%env PATH = /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/android/sdk/tools:/android/sdk/platform-tools:/android/ndk
%env ANDROID_SDK_API_LEVEL=29
%env ANDROID_API_LEVEL=29
%env ANDROID_BUILD_TOOLS_VERSION=29.0.2
%env ANDROID_DEV_HOME=/android
%env ANDROID_NDK_API_LEVEL=21
%env ANDROID_NDK_FILENAME=android-ndk-r19c-linux-x86_64.zip
%env ANDROID_NDK_HOME=/android/ndk
%env ANDROID_NDK_URL=https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip
%env ANDROID_SDK_FILENAME=tools_r25.2.5-linux.zip
%env ANDROID_SDK_HOME=/android/sdk
#%env ANDROID_HOME=/android/sdk
%env ANDROID_SDK_URL=https://dl.google.com/android/repository/tools_r25.2.5-linux.zip
#!echo $PATH
!export -p
# Install specific versions of sdk, tools etc.
!android update sdk --no-ui -a \
--filter tools,platform-tools,android-29,build-tools-29.0.2
Explanation: Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Build TensorFlow Lite Support libraries with Bazel
Set up Android environment
End of explanation
# Download Latest version of Bazelisk
!wget https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64
# Make script executable
!chmod +x bazelisk-linux-amd64
# Adding to the path
!sudo mv bazelisk-linux-amd64 /usr/local/bin/bazel
# Extract bazel info
!bazel
# Clone TensorFlow Lite Support repository OR upload your custom folder to build
!git clone https://github.com/tensorflow/tflite-support.git
# Move into tflite-support folder
%cd /content/tflite-support/
!ls
Explanation: Install BAZEL with Bazelisk
End of explanation
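A small added check (assuming the move above succeeded) that Bazelisk is on the PATH and which Bazel version it resolves:
# Bazelisk downloads and delegates to the pinned Bazel release
!bazel version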
#@title Select library. { display-mode: "form" }
library = 'Support library' #@param ["Support library", "Task Vision library", "Task Text library", "Task Audio library","Metadata library","C++ image_classifier","C++ image_objector","C++ image_segmenter","C++ image_embedder","C++ nl_classifier","C++ bert_nl_classifier", "C++ bert_question_answerer", "C++ metadata_extractor"]
print('You selected:', library)
if library == 'Support library':
library = '//tensorflow_lite_support/java:tensorflowlite_support.aar'
elif library == 'Task Vision library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/vision:task-library-vision'
elif library == 'Task Text library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/text:task-library-text'
elif library == 'Task Audio library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/audio:task-library-audio'
elif library == 'Metadata library':
library = '//tensorflow_lite_support/metadata/java:tensorflow-lite-support-metadata-lib'
elif library == 'C++ image_classifier':
library = '//tensorflow_lite_support/cc/task/vision:image_classifier'
elif library == 'C++ image_objector':
library = '//tensorflow_lite_support/cc/task/vision:image_objector'
elif library == 'C++ image_segmenter':
library = '//tensorflow_lite_support/cc/task/vision:image_segmenter'
elif library == 'C++ image_embedder':
library = '//tensorflow_lite_support/cc/task/vision:image_embedder'
elif library == 'C++ nl_classifier':
library = '//tensorflow_lite_support/cc/task/text/nlclassifier:nl_classifier'
elif library == 'C++ bert_nl_classifier':
library = '//tensorflow_lite_support/cc/task/text/nlclassifier:bert_nl_classifier'
elif library == 'C++ bert_question_answerer':
library = '//tensorflow_lite_support/cc/task/text/qa:bert_question_answerer'
elif library == 'C++ metadata_extractor':
library = '//tensorflow_lite_support/metadata/cc:metadata_extractor'
#@title Select platform(s). { display-mode: "form" }
platforms = 'arm64-v8a,armeabi-v7a' #@param ["arm64-v8a,armeabi-v7a","x86", "x86_64", "arm64-v8a", "armeabi-v7a","x86,x86_64,arm64-v8a,armeabi-v7a"]
print('You selected:', platforms)
# Build library
!bazel build \
--fat_apk_cpu='{platforms}' \
'{library}'
Explanation: Build .aar files
End of explanation |
11,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defensive programming
We've covered
Step1: Programs like firefox browser are full of assertions
Step2: now look at the post-conditions to help us catch bugs by telling us the calculation isn't right
for example if we normalize a rect that is taller than it is wide
Step3: re-reading our function, we realize that line 10 should divide dy by dx rather than dx by dy
if we had left out the assertion at the end of the function we would have created and returned something that had the right shape as a valid answer but wasn't
assertions are not just about catching errors; they also help people understand programs (a chance to check their understanding)
good programmers follow
Step4: error is reassuring b/c we haven't written that function!
b/c we wrote the assertions, we have defined what our input and output should look like!
we are missing a case where the ranges don't overlap | Python Code:
numbers = [1.5, 2.3, 0.7, -0.001, 4.4]
total = 0.0
for n in numbers:
    assert n > 0.0, 'Data should only contain positive values'
total += n
print('total is: ', total)
Explanation: Defensive programming
We've covered:
variables and lists,
file i/o,
loops,
conditionals,
and functions
but we haven't shown whether a program is getting the right answer and whether it is still getting the right answers as we change the program
We need to:
write programs that check their own operation
write and run tests for widely-used functions
make sure we know what 'correct' actually means
as with real carpentry - time is saved by carefully measuring before cutting wood
assume errors will happen and guard against them
Called defensive programming
we add assertions to our code so that it checks itself as it runs
Python evaluates each assertion: if it is true, nothing happens; if it is false, execution halts and an error message is printed
for example
End of explanation
def normalize_rectangle(rect):
'''Normalizes a rectangle so that it is at the origin and 1.0 units long on its longest axis.'''
assert len(rect) == 4, 'Rectangles must contain 4 coordinates'
x0, y0, x1, y1 = rect
assert x0 < x1, 'Invalid X coordinates'
assert y0 < y1, 'Invalid Y coordinates'
dx = x1 - x0
dy = y1 - y0
if dx > dy:
scaled = float(dx) / dy
upper_x, upper_y = 1.0, scaled
else:
scaled = float(dx) / dy
upper_x, upper_y = scaled, 1.0
assert 0 < upper_x <= 1.0, 'Calculated upper X coordinate invalid'
assert 0 < upper_y <= 1.0, 'Calculated upper Y coordinate invalid'
return (0, 0, upper_x, upper_y)
print(normalize_rectangle((0.0, 1.0, 2.0))) #missing the fourth coordinate
Explanation: Programs like the Firefox browser are full of assertions: 10-20% of the code is there to check that the other 80-90% is working correctly
Three assertion categories:
a precondition -- something that must be true at the start of the function in order for it to work correctly
a postcondition - something that the function guarantees is true after execution
an invariant - something that is always true at a particular point in the code
Suppose we are representing rectangles using a tuple of four coordinates (x0, y0, x1, y1) representing the lower left and upper right corners of the rectangle.
In order to do some calculations, we need to normalize the rectangle so that the lower left corner is at the origin and the longest side is 1.0 units long.
The following function does that, but also checks that its input is correctly formatted and that its result makes sense:
End of explanation
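The function above demonstrates pre- and post-conditions; a minimal added sketch of the third category, an invariant checked inside a loop, could look like this:
def running_total(values):
    total = 0.0
    for v in values:
        total += v
        # invariant: assuming the inputs are non-negative, the running total never goes negative
        assert total >= 0.0, 'Invariant violated: running total went negative'
    return total

print(running_total([1.0, 2.5, 0.5]))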
print(normalize_rectangle((0.0,0.0, 1.0, 5.0)))
print(normalize_rectangle((0.0,0.0,5.0,1.0)))
Explanation: now look at the post-conditions to help us catch bugs by telling us the calculation isn't right
for example if we normalize a rect that is taller than it is wide
End of explanation
assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
Explanation: re-reading our function, we realize that line 10 should divide dy by dx rather than dx by dy
if we had left out the assertion at the end of the function we would have created and returned something that had the right shape as a valid answer but wasn't
assertions are not just about catching errors; they also help people understand programs (a chance to check their understanding)
good programmers follow: 1) fail early, fail often - catch mistakes as early as possible 2) turn bugs into assertions or tests - whenever you fix a bug write an assertion, so you won't make the same mistake later (regression)
TDD
an assertion checks that something is true at a particular point in the program,
the next step is to check the overall behavior of a piece of code
for instance we need to find the overlap for two or more time series - lines of time intervals
most novices would: write function range_overlap, call it interactively on two or three diff inputs, fix if wrong
better way: write a short function for each test, write a range_overlap function that should pass tests, if range_overlap fails, fix it and re-run test function
writing tests before the function is called test-driven development (TDD)
its advocates believe it produces better code faster because:
confirmation bias encroaches if one writes tests after you write the function (subconsciously write to pass)
writing tests helps programmers figure out what the function is actually supposed to do
here are three test functions for range_overlap:
End of explanation
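The notebook never shows the implementation itself; one hedged sketch of a range_overlap that would satisfy the three tests above (and deliberately leaves the non-overlapping case undecided, which is the question raised next) is:
def range_overlap(ranges):
    '''Return the common overlap among a set of (low, high) ranges.'''
    max_left, min_right = ranges[0]
    for (left, right) in ranges[1:]:
        max_left = max(max_left, left)
        min_right = min(min_right, right)
    return (max_left, min_right)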
assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == ???
Explanation: error is reassuring b/c we haven't written that function!
b/c we wrote the assertions, we have defined what our input and output should look like!
we are missing a case where the ranges don't overlap
End of explanation |
11,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparative barcharts
In order to get a glimpse of what specific attribute values could be used to determine if a mushroom was edible or poisonous we generated some barcharts to compare the attribute values by poisonous and edible mushrooms.
The following function can be used to collect the frequencies of each attribute value for two attributes.
Step1: Next we can use this function to plot the comparative data.
Step2: From the plot we can see that any mushroom with a foul, spicy or fishy smell is almost certainly poisonous. No smell is almost always edible, but in some rare cases it can be poisonous.
Let's take a look at spore print color.
Step3: We can see that chocolate and white mushrooms are usually poisonous so it is best to avoid those. Black or brown are usually edible, but not always.
Determining an attribute's association with edibility
To determine association between attributes and edibility we used Pearson's chi-squared test on the frequency of attribute values and then ordered the attributes in descending order of the chi-squared statistic. The chi-squared test works by comparing the observed data to expected data (the null hypothesis which is an even distribution across each row and column) with the following equation,
$$
\chi^2 = \sum^n_{i=1} \frac{ (O_i - E_i)^2 }{ E_i }
$$
where O is the observed data point and E is the expected data point.
With the following functions we can get a contingency table of the expected and observed values of any two attributes
Step4: Using these two tables for each attribute we can collect the chi-squared test statistic for each, and then sort them in descending order to rank the attributes by association with being poisonous or edible.
Step5: As we can see from the plot, odor is the most associated attribute with edibility, followed by spore print color and gill color. These rankings seem to agree heavily with our comparative barcharts.
While this use of the chi-squared test statistic may not be the traditional use of finding the p-value and accepting or rejecting the null hypothesis to determine independence, it still provided us with a metric to rank the attributes by their association with edibility.
Scatterplot
Next we decided to plot a scatterplot matrix of the top 5 most associated attributes with edibility. In order to plot categorical variables on a scatterplot we needed to add some jitter to the data. This was done by adding a random number between -0.167 and 0.167 to all the categorical codes.
Step6: From the scatter plots we can clearly see how values of certain variables are grouped between poisonous and edible. Because the values were converted to the categorical codes to plot, we have generated a legend for the values of each attribute. | Python Code:
def attr_freqs(attr1, attr2):
df = shroom_dealer.get_data_frame()
labels1 = shroom_dealer.get_attribute_dictionary()[attr1]
labels2 = shroom_dealer.get_attribute_dictionary()[attr2]
data = []
for a in df[attr1].cat.categories:
column = df[attr2][df[attr1] == a].value_counts()
data.append(column)
observed = pd.concat(data, axis=1)
observed.columns = [labels1[a] for a in df[attr1].cat.categories]
return observed
attr_freqs('odor', 'poisonous')
Explanation: Comparative barcharts
In order to get a glimpse of what specific attribute values could be used to determine if a mushroom was edible or poisonous we generated some barcharts to compare the attribute values by poisonous and edible mushrooms.
The following function can be used to collect the frequencies of each attribute value for two attributes.
End of explanation
def plot_comparative_data(attr, plot=True, save=False):
data = attr_freqs(attr, 'poisonous')
labels = shroom_dealer.get_attribute_dictionary()[attr]
    index = np.arange(data.shape[1])
bar_width = 0.35
opacity=0.4
fig, ax = plt.subplots()
plt.bar(index, data.loc['e',:].values, bar_width, align='center',
color='b', label='edible', alpha=opacity)
plt.bar(index + bar_width, data.loc['p',:].values, bar_width,
align='center', color='r', label='poisonous', alpha=opacity)
plt.xlabel('Attributes')
plt.ylabel('Frequency')
plt.title('Frequency by attribute and edibility ({})'.format(attr))
plt.xticks(index + bar_width / 2, data.columns)
plt.legend()
plt.tight_layout()
plt.show()
plt.close()
plot_comparative_data('odor')
Explanation: Next we can use this function to plot the comparative data.
End of explanation
plot_comparative_data('spore-print-color')
Explanation: From the plot we can see that any mushroom with a foul, spicy or fishy smell is almost certainly poisonous. No smell is almost always edible, but in some rare cases it can be poisonous.
Let's take a look at spore print color.
End of explanation
def expected_data(observed):
expected = np.zeros(observed.shape)
total = observed.sum().sum()
for j in [0, 1]:
for i, col_total in enumerate(observed.sum()):
row_total = observed.sum(axis=1)[j]
expected[j][i] = row_total*col_total/total
return pd.DataFrame(expected, index=observed.index,
columns=observed.columns)
o = attr_freqs('odor', 'poisonous')
o
expected_data(o)
Explanation: We can see that chocolate and white mushrooms are usually poisonous so it is best to avoid those. Black or brown are usually edible, but not always.
Determining an attribute's association with edibility
To determine association between attributes and edibility we used Pearson's chi-squared test on the frequency of attribute values and then ordered the attributes in descending order of the chi-squared statistic. The chi-squared test works by comparing the observed data to expected data (the null hypothesis which is an even distribution across each row and column) with the following equation,
$$
\chi^2 = \sum^n_{i=1} \frac{ (O_i - E_i)^2 }{ E_i }
$$
where O is the observed data point and E is the expected data point.
With the following functions we can get a contingency table of the expected and observed values of any two attributes:
End of explanation
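A tiny worked example (added for illustration) of the same observed-versus-expected comparison on a made-up 2x2 table:
import numpy as np

observed_toy = np.array([[30.0, 10.0],
                         [20.0, 40.0]])
row_totals = observed_toy.sum(axis=1, keepdims=True)
col_totals = observed_toy.sum(axis=0, keepdims=True)
expected_toy = row_totals * col_totals / observed_toy.sum()
chisq_toy = (((observed_toy - expected_toy) ** 2) / expected_toy).sum()
print(chisq_toy)  # roughly 16.7 for this toy table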
cat_names = shroom_dealer.get_attribute_dictionary().keys()
chisqrs = []
for cat in cat_names:
if cat != 'poisonous':
        observed = attr_freqs(cat, 'poisonous')
expected = expected_data(observed)
chisqr = (((observed-expected)**2)/expected).sum().sum()
chisqrs.append((chisqr, cat))
chisqrs = sorted(chisqrs)[::-1]
chisqrs = chisqrs[:10]
values = [d[0] for d in chisqrs]
labels = [d[1].replace("-", "\n") for d in chisqrs]
index = np.arange(len(chisqrs))
bar_width = .35
opacity=0.4
plt.title("Attributes most associated with edibility")
plt.bar(index, values, bar_width, align='center')
plt.xticks(index, labels)
plt.ylabel("Chi-squared values")
plt.xlabel("Attributes")
plt.autoscale()
plt.tight_layout()
plt.show()
Explanation: Using these two tables for each attribute we can collect the chi-squared test statistic for each, and then sort them in descending order to rank the attributes by association with being poisonous or edible.
End of explanation
df = shroom_dealer.get_data_frame()
for col in df:
if col in ['odor', 'spore-print-color', 'gill-color', 'ring-type',
'stalk-surface-above-ring']:
df[col] = df[col].cat.codes + (np.random.rand(len(df),) - .5)/3
elif col == 'poisonous':
df[col] = df[col].cat.codes
else:
del df[col]
g = sns.pairplot(df, hue='poisonous')
plt.autoscale()
plt.tight_layout()
plt.show()
plt.close()
Explanation: As we can see from the plot, odor is the most associated attribute with edibility, followed by spore print color and gill color. These rankings seem to agree heavily with our comparative barcharts.
While this use of the chi-squared test statistic may not be the traditional use of finding the p-value and accepting or rejecting the null hypothesis to determine independence, it still provided us with a metric to rank the attributes by their association with edibility.
Scatterplot
Next we decided to plot a scatterplot matrix of the top 5 most associated attributes with edibility. In order to plot categorical variables on a scatterplot we needed to add some jitter to the data. This was done by adding a random number between -0.167 and 0.167 to all the categorical codes.
End of explanation
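For reference (an added note), the jitter expression used below, (np.random.rand(n) - .5)/3, is uniform on roughly (-0.167, 0.167), matching the range quoted above:
import numpy as np

jitter_demo = (np.random.rand(100000) - .5) / 3
print(jitter_demo.min(), jitter_demo.max())  # close to -1/6 and +1/6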
df = shroom_dealer.get_data_frame()
attr = shroom_dealer.get_attribute_dictionary()
labels = {}
for col in df:
if col in ['odor', 'spore-print-color', 'gill-color', 'ring-type',
'stalk-surface-above-ring', 'poisonous']:
labels[col] = [attr[col][c] for c in df[col].cat.categories] + \
(12-len(df[col].cat.categories))*[" "]
pd.DataFrame(labels)
Explanation: From the scatter plots we can clearly see how values of certain variables are grouped between poisonous and edible. Because the values were converted to the categorical codes to plot, we have generated a legend for the values of each attribute.
End of explanation |
11,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning
Step1: Protein structures
Write a function that loads in the x, y, and z coordinates for all CA atoms from a pdb file.
Step2: Load in the pdb files homolog-1.pdb and homolog-2.pdb into separate numpy arrays.
Step3: Plot x vs. y for the two proteins on the same graph.
Step4: Perform a principal component analysis using sklearn.decomposition.PCA on each individual set of coordinates and then transform them individually onto their PCA axes.
Step5: Plot the transformed coordinates on top of one another.
Step6: Can you explain the result?
Worm Population
You are studying a mixed population of C. elegans worms and would like to figure out how many classes of worms are present. You measure 10 different features (things like worm length, fecundity, etc.) for 50,000 individuals. You have a dataset in pca_dataset.csv, with the parameters in columns along the top (numbered 0 to 9) and the individuals in rows.
Use a PCA analysis to decide how many worm classes you can discriminate. NOTE
Step7: How many principal components do you have to look at to capture 90% of the variation in the worm features?
Step8: You measure the features of a new worm that was sent to the lab. Its feature set is below. Does this worm belong to one of your classes? If so, which one?
Step9: Generate Worm Population
Step10: Generate random worm | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.decomposition import PCA
Explanation: Machine Learning
End of explanation
def load_pdb(pdb_file):
f = open(pdb_file,'r')
lines = f.readlines()
f.close()
all_coord = []
for l in lines:
if l[0:4] == "ATOM" and l[13:16] == "CA ":
coord = [float(l[(30 + i*8):(38 + i*8)]) for i in range(3)]
all_coord.append(coord)
return np.array(all_coord)
Explanation: Protein structures
Write a function that loads in the x, y, and z coordinates for all CA atoms from a pdb file.
End of explanation
hm1 = load_pdb("homolog-1.pdb")
hm2 = load_pdb("homolog-2.pdb")
Explanation: Load in the pdb files homolog-1.pdb and homolog-2.pdb into separate numpy arrays.
End of explanation
plt.plot(hm1[:,0],hm1[:,1])
plt.plot(hm2[:,0],hm2[:,1])
Explanation: Plot x vs. y for the two proteins on the same graph.
End of explanation
pca1 = PCA(n_components=3)
pca1_fit = pca1.fit(hm1)
t_hm1 = pca1_fit.transform(hm1)
pca2 = PCA(n_components=3)
pca2_fit = pca2.fit(hm2)
t_hm2 = pca2_fit.transform(hm2)
Explanation: Perform a principal component analysis using sklearn.decomposition.PCA on each individual set of coordinates and then transform them individually onto their PCA axes.
End of explanation
plt.plot(t_hm1[:,0],t_hm1[:,1])
plt.plot(t_hm2[:,0],t_hm2[:,1])
Explanation: Plot the transformed coordinates on top of one another.
End of explanation
import pandas
X = np.array(pandas.read_csv("pca_dataset.csv"))[:,1:]
ndim = 10
bound = 50
num_data_sets = 5 ### THEY WON'T KNOW THIS, but it makes it pretty
pca = PCA(n_components=10)
pca_fit = pca.fit(X)
Q = pca_fit.transform(X)
a = X.shape[0]/num_data_sets
for i in range(X.shape[1]):
plt.plot(X[(i*a):((i+1)*a),3],
X[(i*a):((i+1)*a),4],"o")
plt.xlim((-bound,bound))
plt.ylim((-bound,bound))
plt.show()
for i in range(Q.shape[1]):
plt.plot(Q[(i*a):((i+1)*a),0],
Q[(i*a):((i+1)*a),1],"o")
plt.xlim((-bound,bound))
plt.ylim((-bound,bound))
plt.show()
Explanation: Can you explain the result?
Worm Population
You are studying a mixed population of C. elegans worms and would like to figure out how many classes of worms are present. You measure 10 different features (things like worm length, fecundity, etc.) for 50,000 individuals. You have a dataset in pca_dataset.csv, with the parameters in columns along the top (numbered 0 to 9) and the individuals in rows.
Use a PCA analysis to decide how many worm classes you can discriminate. NOTE: Make sure you exclude the worm number (leftmost column) from the analysis
End of explanation
pca_fit.explained_variance_ratio_
Explanation: How many principal components do you have to look at to capture 90% of the variation in the worm features?
End of explanation
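One added way to read off the answer from the fitted pca_fit above is the cumulative explained variance:
import numpy as np

cumulative_variance = np.cumsum(pca_fit.explained_variance_ratio_)
print(cumulative_variance)
# first number of components whose cumulative ratio reaches 90%
print(int(np.argmax(cumulative_variance >= 0.9)) + 1)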
new_worm = [ 3.6515213, -4.08529885, -6.88944367, 14.65016314, -11.77903051,
0.8635548, -6.81508493, -5.45759634, 10.27459884, -5.07160515]
for i in range(Q.shape[1]):
plt.plot(Q[(i*a):((i+1)*a),0],
Q[(i*a):((i+1)*a),1],"o")
new_col = pca_fit.transform(new_worm)
print(new_col)
plt.plot(new_col[0,0],new_col[0,1],"+",color="yellow")
plt.xlim((-bound,bound))
plt.ylim((-bound,bound))
plt.show()
Explanation: You measure the features of a new worm that was sent to the lab. Its feature set is below. Does this worm belong to one of your classes? If so, which one?
End of explanation
import scipy.ndimage
def gen_samples(ndim=10,nsam=10000,num_data_sets=4,scale=5):
out = []
for i in range(num_data_sets):
cov = np.abs(np.random.randn(ndim,ndim))
np.fill_diagonal(cov,0.0)
cov = scipy.ndimage.interpolation.rotate(cov,angle=180*np.random.random(),
reshape=False,mode='reflect')
out.append(np.dot(np.random.normal(size=(nsam,cov.shape[0])),cov) + np.random.normal(size=ndim)*scale)
return np.concatenate(out)
ndim = 10
num_data_sets = 5
scale = 10
bound = 50
X = gen_samples(ndim=ndim,num_data_sets=num_data_sets,scale=scale)
import pandas
y = pandas.DataFrame(X)
y.to_csv("junk.csv")
Explanation: Generate Worm Population
End of explanation
f = open("pca_dataset.csv")
lines = f.readlines()
f.close()
to_take = int(np.random.random()*50000)
col = np.array([float(c) for c in lines[to_take].split(",")[1:]])
print(col)
Explanation: Generate random worm
End of explanation |
11,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data in a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
Step6: Let's run it
Step7: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy. | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
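A small added illustration of how the broadcasting trick in reformat builds the 1-hot rows:
labels_demo = np.array([2, 0, 9])
one_hot_demo = (np.arange(num_labels) == labels_demo[:, None]).astype(np.float32)
print(one_hot_demo)  # each row contains a single 1.0 at the label's index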
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
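A minimal added illustration of the describe-then-run pattern with the same TensorFlow 0.x-era API used in this notebook:
demo_graph = tf.Graph()
with demo_graph.as_default():
    demo_product = tf.constant(2.0) * tf.constant(3.0)  # describe the computation
with tf.Session(graph=demo_graph) as demo_session:
    print(demo_session.run(demo_product))  # run it and fetch the result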
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run this computation and iterate:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data in a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run it:
End of explanation
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
biases1 = tf.Variable(tf.zeros([num_hidden_nodes]))
weights2 = tf.Variable(
tf.truncated_normal([num_hidden_nodes, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
lay1_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
logits = tf.matmul(lay1_train, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
lay1_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1)
valid_prediction = tf.nn.softmax(tf.matmul(lay1_valid, weights2) + biases2)
lay1_test = tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1)
test_prediction = tf.nn.softmax(tf.matmul(lay1_test, weights2) + biases2)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy.
End of explanation |
11,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 1
Step1: Use namedtuples, because they're so nice ;)
Actually, use them because they are still tuples, look like class-objects,
and give more semantic context to the code.
In this case, the Turn is LEFT and RIGHT.
Step2: A VectorPoint is a point with a direction
I use them to plot the point on the graph, and also to represent the
direction of movement.
Step3: When turning, we apply this mask (the logic if above).
Step4: The direction is the starting face, which the problem describes as being NORTH.
Step5: The point to be tracked, this is the position on graph.
Step6: Tokenize the token into turn and number of blocks
The first letter represents the direction to turn to, and will only be 'L' for left, and 'R' for right.
The numbers after that are the number of blocks to walk.
Get mask based on direction to turn
The direction is the swapped direction, with the mask applied. So the previous (x,y) becomes (y,x) * mask(x,y).
The point is now the number of blocks added with the direction applied.
Step7: The final distance from origin is the distance of x and y axis.
Step8: Part Two
Then, you notice the instructions continue
on the back of the Recruiting Document.
Easter Bunny HQ is actually at the first location you visit twice.
For example, if your instructions are
R8, R4, R4, R8,
the first location you visit twice is 4 blocks away, due East.
How many blocks away is the first location you visit twice?
Solution logic
As per the previous solution, this looks to be a trivial problem.
We only have to check when we visit the place twice, so there has to be
some way to check whether we have been to the place, which means we need
to store the point we have been on the graph.
For this, we will use a set to store the points, and check at every
iteration, whether we have been to that point.
The naive solution would be to store every point we visit, so it merely
becomes a problem of sifting through searches (binary search would suffice).
But to make it more efficient, we only store the paths,
and check if the current movement cross any of the previous paths.
Basic Geometry
Let's use the property of points lying on a line segment. Suppose we have the
line segment AB, and a point P lies on the line, then the sum of the
distance of P from A and P to B
is equal to the distance from A to B. Or specified as
Step9: Paths will save all visited paths, but we do not need the constraint that they must be unique, since we are checking for intersecting points.
Step10: last_point was the last point traversed
Step11: Tokenize the token into turn and number of blocks
The first letter represents the direction to turn to, and will only be 'L' for left, and 'R' for right.
The numbers after that are the number of blocks to walk.
Get mask based on direction to turn
The direction is the swapped direction, with the mask applied. So the previous (x,y) becomes (y,x) * mask(x,y).
The point is now the number of blocks added with the direction applied.
Part Two Check if we have visited the path. To do that, we must first identify whether we have moved along the X-axis or the Y-axis.
The movement_mask represents the mask applied to the last point to iterate over to the current point. This gives us every point along the path. The mask takes care of negatives and direction, and helps keep the logic clear.
If the x co-ordinates do not change, this is along the Y-axis.
Check whether we are moving towards the top or the bottom.
If the y attribute is increasing, we are moving towards the top, otherwise towards bottom.
If the y co-ordinates do not change, this is along the X-axis.
Check whether we are moving towards the right or the left.
If the x attribute is increasing, we are moving towards the right, otherwise towards left.
Now iterate through each point along the path
Check if there are any common co-ordinates on this path.
If the condition is satisfied, found a point lying in the intersecting path | Python Code:
with open('../inputs/day01.txt', 'r') as f:
data = [x.strip() for x in f.read().split(',')]
Explanation: Day 1: No Time for a Taxicab
author: Harshvardhan Pandit
license: MIT
link to problem statement
Santa's sleigh uses a very high-precision clock to guide its movements, and the clock's oscillator is regulated by stars. Unfortunately, the stars have been stolen... by the Easter Bunny. To save Christmas, Santa needs you to retrieve all fifty stars by December 25th.
Collect stars by solving puzzles. Two puzzles will be made available on each day in the advent calendar; the second puzzle is unlocked when you complete the first. Each puzzle grants one star. Good luck!
You're airdropped near Easter Bunny Headquarters in a city somewhere. "Near", unfortunately, is as close as you can get - the instructions on the Easter Bunny Recruiting Document the Elves intercepted start here, and nobody had time to work them out further.
The Document indicates that you should start at the given coordinates (where you just landed) and face North. Then, follow the provided sequence: either turn left (L) or right (R) 90 degrees, then walk forward the given number of blocks, ending at a new intersection.
There's no time to follow such ridiculous instructions on foot, though, so you take a moment and work out the destination. Given that you can only walk on the street grid of the city, how far is the shortest path to the destination?
For example:
Following R2, L3 leaves you 2 blocks East and 3 blocks North, or 5 blocks away.
R2, R2, R2 leaves you 2 blocks due South of your starting position, which is 2 blocks away.
R5, L5, R5, R3 leaves you 12 blocks away.
How many blocks away is Easter Bunny HQ?
Solution logic
This looks to be a simple trace the path sort of problem.
Assume we start at origin (0,0),
then use counters to keep track of where we are
on a virtual grid. Also need to keep track of direction.
Read the input, then parse it to read whether to go left or right.
Keep a variable direction to indicate where we are currently moving.
Direction is a vector (x,y)
to represent the math to do when calculcating the
next point on the grid.
North(0,1); South(0,-1); East(1,0); West(-1,0)
Tricky: how to convert north to west by reading in left?
North is (0,1) and West is (-1,0)
Similarly, West --left--> South,
which can be written as (-1,0)--L-->(0,-1)
Drawing up a table to see if I can find a pattern or a formula
<pre>
**Table: Turning LEFT**
Direction Value Turned Value
North (0,1) West (-1,0)
West (-1,0) South (0,-1)
South (0,-1) East (1,0)
East (1,0) North (0,1)
**Table: Turning RIGHT**
Direction Value Turned Value
North (0,1) East (1,0)
West (-1,0) North (0,1)
South (0,-1) West (-1,0)
East (1,0) South (0,-1)
</pre>
Looking at the table, it is apparent that on every turn, there is a change
in the field which contain a non-zero value (x or y)
So we always swap the active (that's what I'll call the non-zero field) field
The trick here is the sign of the values. It is not true that the sign always
changes. For example, East--Left-->North
<pre>Turn left:
- swap (x,y = y,x)
- apply mask (-1,1)
Turn right:
- swap (x,y = y,x)
- apply mask (1,-1)</pre>
Algorithm
<pre>
- Initialise starting direction as North
- Read the input
- For every input:
- based on turn, get new direction
- multiply the direction vector by distance in input
- add the new vector to point on graph
</pre>
Block distance / Taxicab distance
That is simply the number of blocks we need to walk to get there
That is the total x distance + total y distance
So the formula is to take the absolute values of both co-ordinates,
and add them
abs(x) + abs(y)
Get the input data from a file in the inputs folder.
The file contains the tokens separated by commas (,).
End of explanation
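As a quick sanity check of the swap-and-mask rule above, here is a minimal standalone sketch using plain tuples (the helper names turn_left and turn_right are ours, not part of the solution below):
def turn_left(d):
    # swap (x, y) -> (y, x), then apply the left mask (-1, 1)
    return (-1 * d[1], 1 * d[0])
def turn_right(d):
    # swap (x, y) -> (y, x), then apply the right mask (1, -1)
    return (1 * d[1], -1 * d[0])
north = (0, 1)
assert turn_left(north) == (-1, 0)    # North -> West
assert turn_right(north) == (1, 0)    # North -> East
assert turn_left((1, 0)) == (0, 1)    # East -> North
assert turn_right((1, 0)) == (0, -1)  # East -> South
This reproduces both tables, and the same swap-and-mask trick is used in the solution code below.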
from collections import namedtuple
Turns = namedtuple('Turns', ('left', 'right'))
turns = Turns('L', 'R')
Explanation: Use namedtuples, because they're so nice ;)
Actually, use them because they are still tuples, look like class-objects,
and give more semantic context to the code.
In this case, the Turn is LEFT and RIGHT.
End of explanation
VectorPoint = namedtuple('VectorPoint', ('x', 'y'))
Explanation: A VectorPoint is a point with a direction
I use them to plot the point on the graph, and also to represent the
direction of movement.
End of explanation
mask_left = VectorPoint(-1, 1)
mask_right = VectorPoint(1, -1)
Explanation: When turning, we apply this mask (the logic if above).
End of explanation
direction = VectorPoint(0, 1)
Explanation: The direction is the starting face, which the problem describes as being NORTH.
End of explanation
point = VectorPoint(0, 0)
Explanation: The point to be tracked, this is the position on graph.
End of explanation
for token in data:
turn, blocks = token[:1], int(token[1:])
if turn == turns.left:
mask = mask_left
if turn == turns.right:
mask = mask_right
#
direction = VectorPoint(direction.y * mask.x, direction.x * mask.y)
point = VectorPoint(
point.x + blocks * direction.x,
point.y + blocks * direction.y)
Explanation: Tokenize the token into turn and number of blocks
The first letter represents the direction to turn to, and will only be 'L' for left, and 'R' for right.
The numbers after that are the number of blocks to walk.
Get mask based on direction to turn
The direction is the swapped direction, with the mask applied. So the previous (x,y) becomes (y,x) * mask(x,y).
The point is now the number of blocks added with the direction applied.
End of explanation
distance = abs(point.x) + abs(point.y)
# print(distance)
Explanation: The final distance from the origin is the sum of the absolute x and y distances (the taxicab distance).
End of explanation
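As a small worked check against the example from the problem statement (R5, L5, R5, R3 should end 12 blocks away), assuming the turn rule above:
# R5 -> East to (5, 0); L5 -> North to (5, 5);
# R5 -> East to (10, 5); R3 -> South to (10, 2).
x, y = 10, 2
assert abs(x) + abs(y) == 12  # matches the expected 12 blocks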
Path = namedtuple('Path', ('A', 'B'))
Explanation: Part Two
Then, you notice the instructions continue
on the back of the Recruiting Document.
Easter Bunny HQ is actually at the first location you visit twice.
For example, if your instructions are
R8, R4, R4, R8,
the first location you visit twice is 4 blocks away, due East.
How many blocks away is the first location you visit twice?
Solution logic
As per the previous solution, this looks to be a trivial problem.
We only have to check when we visit the place twice, so there has to be
some way to check whether we have been to the place, which means we need
to store the points we have been to on the graph.
For this, we will use a set to store the points, and check at every
iteration, whether we have been to that point.
The naive solution would be to store every point we visit, so it merely
becomes a problem of sifting through searches (binary search would suffice).
But to make it more efficient, we only store the paths,
and check if the current movement cross any of the previous paths.
Basic Geometry
Let's use the property of points lying on a line segment. Suppose we have the
line segment AB, and a point P lies on the line, then the sum of the
distance of P from A and P to B
is equal to the distance from A to B. Or specified as:
AB = AP + PB
So we save every point we end up at, as endpoints, and check if the movement
is along the path by checking for every point in the saved set.
Another interesting observation in terms of optimisation, is that if the
point does lie on the line, then one of the axis co-ordinates will be the
same in all three points (A, B, C). This is true in this case because all
movements are in a single direction along the grid. So we can eliminate
points which do not satisfy this condition.
A path contains two points, A and B. Create a path namedtuple to hold them.
End of explanation
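As an illustration of the AB = AP + PB test described above, here is a minimal sketch for axis-aligned segments (the helper name on_segment is ours):
def on_segment(a, b, p):
    # p lies on the axis-aligned segment a-b when the distances add up,
    # checked per axis with absolute differences
    return (abs(a[0] - p[0]) + abs(p[0] - b[0]) == abs(a[0] - b[0]) and
            abs(a[1] - p[1]) + abs(p[1] - b[1]) == abs(a[1] - b[1]))
assert on_segment((0, 0), (0, 5), (0, 3))      # on the segment
assert not on_segment((0, 0), (0, 5), (0, 7))  # beyond B
assert not on_segment((0, 0), (0, 5), (1, 3))  # off the line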
paths = []
Explanation: Paths will save all visited paths, but we do not need the constraint that they must be unique, since we are checking for intersecting points.
End of explanation
point = VectorPoint(0, 0)
last_point = point
Explanation: last_point was the last point traversed
End of explanation
for token in data:
turn, blocks = token[:1], int(token[1:])
if turn == turns.left:
mask = mask_left
if turn == turns.right:
mask = mask_right
direction = VectorPoint(direction.y * mask.x, direction.x * mask.y)
point = VectorPoint(
point.x + blocks * direction.x,
point.y + blocks * direction.y)
if point.x == last_point.x:
if point.y > last_point.y:
movement_mask = VectorPoint(0, 1)
else:
movement_mask = VectorPoint(0, -1)
else:
if point.x > last_point.x:
movement_mask = VectorPoint(1, 0)
else:
movement_mask = VectorPoint(-1, 0)
last_point_holder = last_point
while last_point.x != point.x or last_point.y != point.y:
last_point = VectorPoint(
last_point.x + movement_mask.x,
last_point.y + movement_mask.y)
for path in paths:
if path.A.x == last_point.x and path.B.x == last_point.x:
if abs(path.A.y - last_point.y)\
+ abs(last_point.y - path.B.y)\
== abs(path.A.y - path.B.y):
break
if path.A.y == last_point.y and path.B.y == last_point.y:
if abs(path.A.x - last_point.x)\
+ abs(last_point.x - path.B.x)\
== abs(path.A.x - path.B.x):
break
else:
# No paths match, move ahead.
continue
# Some path is found. Stop searching.
break
else:
# Save the path.
# `last_point = point` at this juncture
path = Path(last_point_holder, point)
last_point = point
paths.append(path)
continue
# We found a path that has been visited
distance = abs(last_point.x) + abs(last_point.y)
# print(distance)
break
else:
print(0)
Explanation: Tokenize the token into turn and number of blocks
The first letter represents the direction to turn to, and will only be 'L' for left, and 'R' for right.
The numbers after that are the number of blocks to walk.
Get mask based on direction to turn
The direction is the swapped direction, with the mask applied. So the previous (x,y) becomes (y,x) * mask(x,y).
The point is now the number of blocks added with the direction applied.
Part Two Check if we have visited the path. To do that, we must first identify whether we have moved along the X-axis or the Y-axis.
The movement_mask represents the mask applied to the last point to iterate over to the current point. This gives us every point along the path. The mask takes care of negatives and direction, and helps keep the logic clear.
If the x co-ordinates do not change, this is along the Y-axis.
Check whether we are moving towards the top or the bottom.
If the y attribute is increasing, we are moving towards the top, otherwise towards bottom.
If the y co-ordinates do not change, this is along the X-axis.
Check whether we are moving towards the right or the left.
If the x attribute is increasing, we are moving towards the right, otherwise towards left.
Now iterate through each point along the path
Check if there are any common co-ordinates on this path.
If the condition is satisfied, we have found a point lying on an intersecting path
End of explanation |
11,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In contrast to the usual %matplotlib inline, we want a dedicated window here, in which we can simply exchange the data being shown.
It should work with most matplotlib backends; I just took qt5 here.
Step1: quickplot will automatically maximize the width of the notebook view. You can suppress this by setting maximize=False
Step2: A number of very basic filtering switches are included.
You can enable them below. | Python Code:
%matplotlib qt5
import qkit
qkit.cfg['fid_scan_hdf'] = True
#qkit.cfg['datadir'] = r'D:\data\run_0815' #maybe you want to set a path to your data directory manually?
qkit.start()
import qkit.gui.notebook.quickplot as qp
Explanation: In contrast to the usual %matplotlib inline, we want a dedicated window here, in which we can simply exchange the data being shown.
It should work with most matplotlib backends; I just took qt5 here.
End of explanation
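If you are unsure which backend actually got activated, a quick optional check:
import matplotlib
print(matplotlib.get_backend())  # should report a Qt5 backend, e.g. 'Qt5Agg'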
q = qp.QuickPlot(maximize=True)
q.show()
Explanation: quickplot will automatically maximize the width of the notebook view. You can suppress this by setting maximize=False
End of explanation
q.remove_offset_x_avg = False
q.remove_offset_y_avg = True
q.unwrap_phase = True
try: #try to replot the current dataset
q.plot_selected_df(None)
except:
pass
Explanation: A number of very basic filtering switches are included.
You can enable them below.
End of explanation |
11,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Introduction to Hadoop MapReduce </center>
3. Optimization
First principle of optimizing Hadoop workflow
Step1: What is being passed from Map to Reduce?
Can the reducer do the same thing as the mapper, that is, load in external data?
If we load external data on the reduce side, do we need to do so on the map side?
Step2: How does the number of shuffle bytes in this example compare to the previous example?
Find genres which have the highest average ratings over the years
Common optimization approaches
Step3: 2.2.1 Optimization through in-mapper reduction of Key/Value pairs
Step4: How different are the numbers of shuffle bytes between the two jobs?
2.2.2 Optimization through combiner function | Python Code:
!hdfs dfs -rm -r intro-to-hadoop/output-movielens-02
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-02 \
-file ./codes/avgRatingMapper04.py \
-mapper avgRatingMapper04.py \
-file ./codes/avgRatingReducer01.py \
-reducer avgRatingReducer01.py \
-file ./movielens/movies.csv
Explanation: <center> Introduction to Hadoop MapReduce </center>
3. Optimization
First principle of optimizing Hadoop workflow: Reduce data movement in the shuffle phase
End of explanation
%%writefile codes/avgRatingReducer02.py
#!/usr/bin/env python
import sys
import csv
movieFile = "./movies.csv"
movieList = {}
with open(movieFile, mode = 'r') as infile:
reader = csv.reader(infile)
for row in reader:
movieList[row[0]] = {}
movieList[row[0]]["title"] = row[1]
movieList[row[0]]["genre"] = row[2]
current_movie = None
current_rating_sum = 0
current_rating_count = 0
for line in sys.stdin:
line = line.strip()
movie, rating = line.split("\t", 1)
try:
rating = float(rating)
except ValueError:
continue
if current_movie == movie:
current_rating_sum += rating
current_rating_count += 1
else:
if current_movie:
rating_average = current_rating_sum / current_rating_count
movieTitle = movieList[current_movie]["title"]
movieGenres = movieList[current_movie]["genre"]
print ("%s\t%s\t%s" % (movieTitle, rating_average, movieGenres))
current_movie = movie
current_rating_sum = rating
current_rating_count = 1
if current_movie == movie:
rating_average = current_rating_sum / current_rating_count
movieTitle = movieList[current_movie]["title"]
movieGenres = movieList[current_movie]["genre"]
print ("%s\t%s\t%s" % (movieTitle, rating_average, movieGenres))
!hdfs dfs -rm -r intro-to-hadoop/output-movielens-03
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-03 \
-file ./codes/avgRatingMapper02.py \
-mapper avgRatingMapper02.py \
-file ./codes/avgRatingReducer02.py \
-reducer avgRatingReducer02.py \
-file ./movielens/movies.csv
!hdfs dfs -ls intro-to-hadoop/output-movielens-02
!hdfs dfs -ls intro-to-hadoop/output-movielens-03
!hdfs dfs -cat intro-to-hadoop/output-movielens-03/part-00000 \
2>/dev/null | head -n 10
Explanation: What is being passed from Map to Reduce?
Can the reducer do the same thing as the mapper, that is, load in external data?
If we load external data on the reduce side, do we need to do so on the map side?
End of explanation
%%writefile codes/avgGenreMapper01.py
#!/usr/bin/env python
import sys
import csv
# for nonHDFS run
movieFile = "./movielens/movies.csv"
# for HDFS run
#movieFile = "./movies.csv"
movieList = {}
with open(movieFile, mode = 'r') as infile:
reader = csv.reader(infile)
for row in reader:
movieList[row[0]] = {}
movieList[row[0]]["title"] = row[1]
movieList[row[0]]["genre"] = row[2]
for oneMovie in sys.stdin:
oneMovie = oneMovie.strip()
ratingInfo = oneMovie.split(",")
try:
genreList = movieList[ratingInfo[1]]["genre"]
rating = float(ratingInfo[2])
for genre in genreList.split("|"):
print ("%s\t%s" % (genre, rating))
except ValueError:
continue
%%writefile codes/avgGenreReducer01.py
#!/usr/bin/env python
import sys
import csv
import json
current_genre = None
current_rating_sum = 0
current_rating_count = 0
for line in sys.stdin:
line = line.strip()
genre, rating = line.split("\t", 1)
if current_genre == genre:
try:
current_rating_sum += float(rating)
current_rating_count += 1
except ValueError:
continue
else:
if current_genre:
rating_average = current_rating_sum / current_rating_count
print ("%s\t%s" % (current_genre, rating_average))
current_genre = genre
try:
current_rating_sum = float(rating)
current_rating_count = 1
except ValueError:
continue
if current_genre == genre:
rating_average = current_rating_sum / current_rating_count
print ("%s\t%s" % (current_genre, rating_average))
!hdfs dfs -rm -r intro-to-hadoop/output-movielens-04
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-04 \
-file ./codes/avgGenreMapper01.py \
-mapper avgGenreMapper01.py \
-file ./codes/avgGenreReducer01.py \
-reducer avgGenreReducer01.py \
-file ./movielens/movies.csv
!hdfs dfs -ls intro-to-hadoop/output-movielens-04
!hdfs dfs -cat intro-to-hadoop/output-movielens-04/part-00000
Explanation: How does the number of shuffle bytes in this example compare to the previous example?
Find genres which have the highest average ratings over the years
Common optimization approaches:
In-mapper reduction of key/value pairs
Additional combiner function
End of explanation
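Before applying these ideas to the MovieLens data below, here is a minimal, generic sketch of in-mapper reduction for a streaming word count (illustrative only; it is not one of the scripts used in this notebook):
#!/usr/bin/env python
# Word-count mapper with in-mapper reduction: aggregate counts in a
# dictionary and emit one key/value pair per distinct word instead of
# one pair per occurrence, which shrinks the shuffle.
import sys

counts = {}
for line in sys.stdin:
    for word in line.strip().split():
        counts[word] = counts.get(word, 0) + 1

for word, count in counts.items():
    print("%s\t%s" % (word, count))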
!hdfs dfs -cat /repository/movielens/ratings.csv 2>/dev/null \
| head -n 10
!hdfs dfs -cat /repository/movielens/ratings.csv 2>/dev/null \
| head -n 10 \
| python ./codes/avgGenreMapper01.py \
%%writefile codes/avgGenreMapper02.py
#!/usr/bin/env python
import sys
import csv
import json
# for nonHDFS run
# movieFile = "./movielens/movies.csv"
# for HDFS run
movieFile = "./movies.csv"
movieList = {}
genreList = {}
with open(movieFile, mode = 'r') as infile:
reader = csv.reader(infile)
for row in reader:
movieList[row[0]] = {}
movieList[row[0]]["title"] = row[1]
movieList[row[0]]["genre"] = row[2]
for oneMovie in sys.stdin:
oneMovie = oneMovie.strip()
ratingInfo = oneMovie.split(",")
try:
genres = movieList[ratingInfo[1]]["genre"]
rating = float(ratingInfo[2])
for genre in genres.split("|"):
if genre in genreList:
genreList[genre]["total_rating"] += rating
genreList[genre]["total_count"] += 1
else:
genreList[genre] = {}
genreList[genre]["total_rating"] = rating
genreList[genre]["total_count"] = 1
except ValueError:
continue
for genre in genreList:
print ("%s\t%s" % (genre, json.dumps(genreList[genre])))
!hdfs dfs -cat /repository/movielens/ratings.csv 2>/dev/null \
| head -n 10 \
| python ./codes/avgGenreMapper02.py \
%%writefile codes/avgGenreReducer02.py
#!/usr/bin/env python
import sys
import csv
import json
current_genre = None
current_rating_sum = 0
current_rating_count = 0
for line in sys.stdin:
line = line.strip()
genre, ratingString = line.split("\t", 1)
ratingInfo = json.loads(ratingString)
if current_genre == genre:
try:
current_rating_sum += ratingInfo["total_rating"]
current_rating_count += ratingInfo["total_count"]
except ValueError:
continue
else:
if current_genre:
rating_average = current_rating_sum / current_rating_count
print ("%s\t%s" % (current_genre, rating_average))
current_genre = genre
try:
current_rating_sum = ratingInfo["total_rating"]
current_rating_count = ratingInfo["total_count"]
except ValueError:
continue
if current_genre == genre:
rating_average = current_rating_sum / current_rating_count
print ("%s\t%s" % (current_genre, rating_average))
!hdfs dfs -cat /repository/movielens/ratings.csv 2>/dev/null \
| head -n 10 \
| python ./codes/avgGenreMapper02.py \
| sort \
| python ./codes/avgGenreReducer02.py
# make sure that the path to movies.csv is correct inside avgGenreMapper02.py
!hdfs dfs -rm -R intro-to-hadoop/output-movielens-05
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-05 \
-file ./codes/avgGenreMapper02.py \
-mapper avgGenreMapper02.py \
-file ./codes/avgGenreReducer02.py \
-reducer avgGenreReducer02.py \
-file ./movielens/movies.csv
!hdfs dfs -cat intro-to-hadoop/output-movielens-05/part-00000
!hdfs dfs -cat intro-to-hadoop/output-movielens-04/part-00000
Explanation: 2.2.1 Optimization through in-mapper reduction of Key/Value pairs
End of explanation
!hdfs dfs -ls /repository/
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/complete-shakespeare.txt \
-output intro-to-hadoop/output-wordcount-01 \
-file ./codes/wordcountMapper.py \
-mapper wordcountMapper.py \
-file ./codes/wordcountReducer.py \
-reducer wordcountReducer.py
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/complete-shakespeare.txt \
-output intro-to-hadoop/output-wordcount-02 \
-file ./codes/wordcountMapper.py \
-mapper wordcountMapper.py \
-file ./codes/wordcountReducer.py \
-reducer wordcountReducer.py \
-combiner wordcountReducer.py
%%writefile codes/avgGenreCombiner.py
#!/usr/bin/env python
import sys
import csv
import json
genreList = {}
for line in sys.stdin:
line = line.strip()
genre, ratingString = line.split("\t", 1)
ratingInfo = json.loads(ratingString)
if genre in genreList:
genreList[genre]["total_rating"] += ratingInfo["total_rating"]
genreList[genre]["total_count"] += ratingInfo["total_count"]
else:
genreList[genre] = {}
genreList[genre]["total_rating"] = ratingInfo["total_rating"]
genreList[genre]["total_count"] = 1
for genre in genreList:
print ("%s\t%s" % (genre, json.dumps(genreList[genre])))
!hdfs dfs -rm -r intro-to-hadoop/output-movielens-06
!yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
-input /repository/movielens/ratings.csv \
-output intro-to-hadoop/output-movielens-06 \
-file ./codes/avgGenreMapper02.py \
-mapper avgGenreMapper02.py \
-file ./codes/avgGenreReducer02.py \
-reducer avgGenreReducer02.py \
-file ./codes/avgGenreCombiner.py \
-combiner avgGenreCombiner.py \
-file ./movielens/movies.csv
Explanation: How different are the numbers of shuffle bytes between the two jobs?
2.2.2 Optimization through combiner function
End of explanation |
11,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames
Shetrone et al. 2015
Title
Step1: Download Data
Step2: The file is about 24 MB.
Data wrangle-- read in the data | Python Code:
import pandas as pd
from astropy.io import ascii, votable, misc
Explanation: ApJdataFrames
Shetrone et al. 2015
Title: THE SDSS-III APOGEE SPECTRAL LINE LIST FOR H-BAND SPECTROSCOPY
Authors: M Shetrone, D Bizyaev, J E Lawler, C Allende Prieto, J A Johnson, V V Smith, K Cunha, J. Holtzman, A E García Pérez, Sz Mészáros, J Sobeck, O Zamora, D A Garcia Hernandez, D Souto, D Chojnowski, L Koesterke, S Majewski, and G Zasowski
Data is from this paper:
http://iopscience.iop.org/0067-0049/221/2/24/
End of explanation
#! mkdir ../data/Shetrone2015
#! wget http://iopscience.iop.org/0067-0049/221/2/24/suppdata/apjs521087t7_mrt.txt
#! mv apjs521087t7_mrt.txt ../data/Shetrone2015/
#! du -hs ../data/Shetrone2015/apjs521087t7_mrt.txt
Explanation: Download Data
End of explanation
dat = ascii.read('../data/Shetrone2015/apjs521087t7_mrt.txt')
! head ../data/Shetrone2015/apjs521087t7_mrt.txt
dat.info
df = dat.to_pandas()
df.head()
df.columns
sns.distplot(df.Wave, norm_hist=False, kde=False)
df.count()
sns.lmplot('orggf', 'newgf', df, fit_reg=False)
from astropy import units as u
u.cm
EP1 = df.EP1.values*1.0/u.cm
EP2 = df.EP2.values*1.0/u.cm
EP1_eV = EP1.to(u.eV, equivalencies=u.equivalencies.spectral())
EP2_eV = EP2.to(u.eV, equivalencies=u.equivalencies.spectral())
deV = EP1_eV - EP2_eV
sns.distplot(deV)
plt.plot(df.Wave, deV, '.', alpha=0.05)
plt.xlabel('$\lambda (\AA)$')
plt.ylabel('$\Delta E \;(\mathrm{eV})$')
Explanation: The file is about 24 MB.
Data wrangle-- read in the data
End of explanation |
11,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#largest-number-of-fans-and-blotches" data-toc-modified-id="largest-number-of-fans-and-blotches-1"><span class="toc-item-num">1 </span>largest number of fans and blotches</a></span></li><li><span><a href="#parameter_scan" data-toc-modified-id="parameter_scan-2"><span class="toc-item-num">2 </span>parameter_scan</a></span></li><li><span><a href="#pipeline-examples" data-toc-modified-id="pipeline-examples-3"><span class="toc-item-num">3 </span>pipeline examples</a></span></li><li><span><a href="#ROIs-map" data-toc-modified-id="ROIs-map-4"><span class="toc-item-num">4 </span>ROIs map</a></span></li></ul></div>
Step1: largest number of fans and blotches
Step2: parameter_scan
Step3: pipeline examples
Step4: ROIs map | Python Code:
from planet4 import plotting, catalog_production
rm = catalog_production.ReleaseManager('v1.0b4')
fans = rm.read_fan_file()
blotches = rm.read_blotch_file()
cols = ['angle', 'distance', 'tile_id', 'marking_id',
'obsid', 'spread',
'l_s', 'map_scale', 'north_azimuth',
'PlanetographicLatitude',
'PositiveEast360Longitude']
fans.head()
fans[cols].rename(dict(PlanetographicLatitude='Latitude',
PositiveEast360Longitude='Longitude'),
axis=1).head()
fans.columns
fan_counts = fans.groupby('tile_id').size()
blotch_counts = blotches.groupby('tile_id').size()
ids = fan_counts[fan_counts > 4][fan_counts < 10].index
pure_fans = list(set(ids) - set(blotches.tile_id))
len(ids)
len(pure_fans)
rm.savefolder
%matplotlib ipympl
plt.close('all')
from ipywidgets import interact
id_ = pure_fans[51]
def do_plot(i):
id_ = pure_fans[i]
plotting.plot_image_id_pipeline(id_, datapath=rm.savefolder, via_obsid=False,
save=True, figsize=(8,4),
saveroot='/Users/klay6683/Dropbox/src/p4_paper1/figures')
interact(do_plot, i=48)
from planet4 import markings
def do_plot(i=0):
plt.close('all')
fig, ax = plt.subplots()
markings.ImageID(pure_fans[i]).show_subframe(ax=ax)
ax.set_title(pure_fans[i])
interact(do_plot, i=(0,len(pure_fans),1))
markings.ImageID('6n3').image_name
from planet4 import markings
markings.ImageID(pure_fans[15]).image_name
plotting.plot_raw_fans(id_)
plotting.plot_finals(id_, datapath=rm.savefolder)
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#largest-number-of-fans-and-blotches" data-toc-modified-id="largest-number-of-fans-and-blotches-1"><span class="toc-item-num">1 </span>largest number of fans and blotches</a></span></li><li><span><a href="#parameter_scan" data-toc-modified-id="parameter_scan-2"><span class="toc-item-num">2 </span>parameter_scan</a></span></li><li><span><a href="#pipeline-examples" data-toc-modified-id="pipeline-examples-3"><span class="toc-item-num">3 </span>pipeline examples</a></span></li><li><span><a href="#ROIs-map" data-toc-modified-id="ROIs-map-4"><span class="toc-item-num">4 </span>ROIs map</a></span></li></ul></div>
End of explanation
g_id = fans.groupby('tile_id')
g_id.size().sort_values(ascending=False).head()
blotches.groupby('tile_id').size().sort_values(ascending=False).head()
plotting.plot_finals('6mr', datapath=rm.savefolder)
plotting.plot_finals('7t9', datapath=rm.savefolder)
Explanation: largest number of fans and blotches
End of explanation
from planet4 import dbscan
db = dbscan.DBScanner()
import seaborn as sns
sns.set_context('paper')
db.parameter_scan(id_, 'fan', [0.13, 0.2], [10, 20, 30], size_to_scan='small')
Explanation: parameter_scan
End of explanation
plotting.plot_image_id_pipeline('bk7', datapath=rm.savefolder, via_obsid=False,
save=True, figsize=(8,4), do_title=False,
saveroot='/Users/klay6683/Dropbox/src/p4_paper1/figures')
plotting.plot_image_id_pipeline('ops', datapath=rm.savefolder, via_obsid=False,
save=True, figsize=(8,4), do_title=False,
saveroot='/Users/klay6683/Dropbox/src/p4_paper1/figures')
plotting.plot_image_id_pipeline('b0t', datapath=rm.savefolder, via_obsid=False,
save=True, figsize=(8,4), do_title=False,
saveroot='/Users/klay6683/Dropbox/src/p4_paper1/figures')
Explanation: pipeline examples
End of explanation
from astropy.table import Table
tab = Table.read('/Users/klay6683/Dropbox/src/p4_paper1/rois_table.tex')
rois = tab.to_pandas()
rois.drop(0, inplace=True)
rois.head()
rois.columns = ['Latitude', 'Longitude', 'Informal Name', '# Images (MY29)', '# Images (MY30)']
rois.head()
rois.to_csv('/Users/klay6683/Dropbox/data/planet4/p4_analysis/rois.csv')
Explanation: ROIs map
End of explanation |
11,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Yellowbrick Feature Importance Examples
This notebook is a sample of the feature importance examples that yellowbrick provides.
Step1: Load Iris Datasets for Example Code
Step2: Logistic Regression with Mean of Feature Importances
Should we normalize relative to maximum value or maximum absolute value?
Step3: Logistic Regression with Stacked Feature Importances
Need to decide how to scale feature importance when relative=True
Step4: Load Digits Datasets for Example Code
Should we add an option to show only top n features?
Step5: Linear Regression
Step6: Playground | Python Code:
import os
import sys
sys.path.insert(0, "../..")
import importlib
import numpy as np
import pandas as pd
import yellowbrick
import yellowbrick as yb
from yellowbrick.features.importances import FeatureImportances
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import manifold, datasets
from sklearn.linear_model import LogisticRegression, LinearRegression
mpl.rcParams["figure.figsize"] = (9,6)
Explanation: Yellowbrick Feature Importance Examples
This notebook is a sample of the feature importance examples that yellowbrick provides.
End of explanation
X_iris, y_iris = datasets.load_iris(True)
X_iris_pd = pd.DataFrame(X_iris, columns=['f1', 'f2', 'f3', 'f4'])
Explanation: Load Iris Datasets for Example Code
End of explanation
viz = FeatureImportances(LogisticRegression())
viz.fit(X_iris, y_iris)
viz.poof()
viz = FeatureImportances(LogisticRegression(), relative=False)
viz.fit(X_iris, y_iris)
viz.poof()
viz = FeatureImportances(LogisticRegression(), absolute=True)
viz.fit(X_iris, y_iris)
viz.poof()
viz = FeatureImportances(LogisticRegression(), relative=False, absolute=True)
viz.fit(X_iris, y_iris)
viz.poof()
Explanation: Logistic Regression with Mean of Feature Importances
Should we normalize relative to maximum value or maximum absolute value?
End of explanation
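To make the question concrete, here is a tiny sketch of the two normalizations being compared (pure numpy, not yellowbrick internals):
coefs = np.array([0.5, -2.0, 1.0])
rel_max = 100.0 * coefs / coefs.max()             # relative to the maximum value
rel_absmax = 100.0 * coefs / np.abs(coefs).max()  # relative to the maximum absolute value
print(rel_max)     # [  50. -200.  100.]
print(rel_absmax)  # [ 25. -100.   50.]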
viz = FeatureImportances(LogisticRegression(), stack=True)
viz.fit(X_iris, y_iris)
viz.poof()
viz = FeatureImportances(LogisticRegression(), stack=True, relative=False)
viz.fit(X_iris, y_iris)
viz.poof()
viz = FeatureImportances(LogisticRegression(), stack=True, absolute=True)
viz.fit(X_iris, y_iris)
viz.poof()
viz = FeatureImportances(LogisticRegression(), stack=True, relative=False, absolute=True)
viz.fit(X_iris, y_iris)
viz.poof()
Explanation: Logistic Regression with Stacked Feature Importances
Need to decide how to scale feature importance when relative=True
End of explanation
X_digits, y_digits = datasets.load_digits(return_X_y=True)
viz = FeatureImportances(LogisticRegression(), stack=True, relative=True)
viz.fit(X_digits, y_digits)
viz.poof()
Explanation: Load Digits Datasets for Example Code
Should we add an option to show only top n features?
End of explanation
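A rough sketch of what a 'top n' option might do, selecting features by mean absolute coefficient (illustrative only, not the yellowbrick API):
model = LogisticRegression().fit(X_digits, y_digits)
importances = np.mean(np.abs(model.coef_), axis=0)  # mean |coef| over classes
top_n = 10
top_idx = np.argsort(importances)[-top_n:]  # indices of the top_n largest importances
print(top_idx)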
viz = FeatureImportances(LinearRegression())
viz.fit(X_iris, y_iris)
viz.poof()
viz = FeatureImportances(LinearRegression(), stack=True)
viz.fit(X_iris, y_iris)
viz.poof()
Explanation: Linear Regression
End of explanation
importlib.reload(yellowbrick.features.importances)
from yellowbrick.features.importances import FeatureImportances
viz = FeatureImportances(LogisticRegression(), relative=False, absolute=False, stack=True)
viz.fit(X_iris_pd, y_iris)  # use the iris dataframe and labels defined above
viz.poof()
Explanation: Playground
End of explanation |
11,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to decorate the run_step() method (and why)
The use of decorators is optional and intended to make the run_step() method better structured, clearer, and more compact.
In order to use the decorators you have to import them as follows
Step1: or
Step2: Currently there are two categories of decorators
Step3: reset_if indicates whether resetting is required and in which case. By default, all the named slots are reset if deletions or modifications occurred on the input data (i.e. on at least one slot). Possible values are
Step4: We can apply process_slot() many times when the treatments on slots differ
Step5: Run condition decorators
These decorators define the conditions that allow the execution of the decorated run_step() method.
They are
Step6: The @and_any extension decorator
It makes possible nested conditions (2 levels) in the form
Step7: The @run_if_all decorator
Allows execution of the decorated run_step() method if and only if all entries contain new data. It can be used with or without arguments which are slot names. When called without arguments it applies to all entry slots
Step8: The @or_all extension decorator
It makes possible nested conditions (2 levels) in the form
Step9: The @run_always decorator
Allows the execution of the decorated run_step() method always. | Python Code:
import progressivis.core.decorators
Explanation: How to decorate the run_step() method (and why)
The use of decorators is optional and intended to make the run_step() method better structured, clearer, and more compact.
In order to use the decorators you have to import them as follows:
End of explanation
from progressivis.core.decorators import process_slot, run_if_any # , etc.
Explanation: or :
End of explanation
def process_slot(*names, reset_if=('update', 'delete'), reset_cb=None):
pass
Explanation: Currently there are two categories of decorators:
Slot processing decorators [sp-decorators]
Run condition decorators [rc-decorators]
The two categories are inseparable.
Of course you can develop run_step without decorators, but if you choose to use them, the run_step() method must be decorated by at least one sp-decorator followed by at least one rc-decorator.
Slot processing decorators
For now this category has only one decorator but it can be applied multiple times.
End of explanation
from progressivis.table.module import TableModule
from progressivis.core.slot import SlotDescriptor
from progressivis.table.table import Table
from progressivis.core.decorators import *
class FooModule(TableModule):
inputs = [SlotDescriptor('a', type=Table, required=True),
SlotDescriptor('b', type=Table, required=True),
SlotDescriptor('c', type=Table, required=True),
SlotDescriptor('d', type=Table, required=True),
]
@process_slot("a", "b", "c", "d", reset_if=False)
@run_if_any # mandatory run condition decorator, explained below
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
Explanation: reset_if indicates whether resetting is required and in which case. By default, all the named slots are reset if deletions or modifications occurred on the input data (i.e. on at least one slot). Possible values are:
reset_if='update': slots are reset only if modifications occurred
reset_if='delete': slots are reset only if deletions occurred
reset_if=False: slots are NOT reset in any case
reset_cb is pertinent only when reset_if is not False. For now, reset_cb can contain a method name (i.e. a string) to be called after the slot has been reset. The method must not take arguments (except self).
We will apply process_slot() once for all slots requiring the same treatment:
End of explanation
class FooModule(TableModule):
inputs = [SlotDescriptor('a', type=Table, required=True),
SlotDescriptor('b', type=Table, required=True),
SlotDescriptor('c', type=Table, required=True),
SlotDescriptor('d', type=Table, required=True),
]
def reset(self):
pass # do some reset related treatments
@process_slot("a", "b", reset_cb='reset') # by default reset_if=('update', 'delete')
@process_slot("c", reset_if='update')
@process_slot("d", reset_if=False)
@run_if_any # mandatory run condition decorator, explained below
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
Explanation: We can apply process_slot() many times when the treatments on slots differ:
End of explanation
# @run_if_any without arguments
@process_slot("a", "b", "c", "d")
@run_if_any # run if at least one among "a", "b", "c", "d" slots contains new data
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
# @run_if_any with arguments
@process_slot("a", "b", "c", "d")
@run_if_any("b", "d") # run if at least one between b" and "d" slots contains new data
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
Explanation: Run condition decorators
These decorators define the conditions that allow the execution of the decorated run_step() method.
They are :
@run_if_any with possible extension @and_any
@run_if_all with possible extension @or_all
@run_always
The @run_if_any decorator
Allows execution of the decorated run_step() method if and only if at least one entry contains new data. It can be used with or without arguments which are slot names. When called without arguments it applies to all entry slots:
End of explanation
# (a|c) & (b|d)
@process_slot("a", "b", "c", "d")
@run_if_any("a", "c")
@and_any("b", "d")
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
Explanation: The @and_any extension decorator
It makes possible nested conditions (2 levels) in the form :
(a | b | ...) & (x | y | ...) & ...
End of explanation
# @run_if_all without arguments
@process_slot("a", "b", "c", "d")
@run_if_all # all "a", "b", "c", "d" slots contains new data
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
# @run_if_all with arguments
@process_slot("a", "b", "c", "d")
@run_if_all("b", "d") # run if both b" and "d" slots contains new data
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
Explanation: The @run_if_all decorator
Allows execution of the decorated run_step() method if and only if all entries contain new data. It can be used with or without arguments which are slot names. When called without arguments it applies to all entry slots:
End of explanation
# (a&c) | (b&d)
@process_slot("a", "b", "c", "d")
@run_if_all("a", "c")
@or_all("b", "d")
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
Explanation: The @or_all extension decorator
It makes possible nested conditions (2 levels) in the form :
(a & b & ...) | (x & y & ...) | ...
End of explanation
@process_slot("a", "b", "c", "d")
@run_always
def run_step(self, run_number, step_size, howlong):
with self.context as ctx:
pass # do something
Explanation: The @run_always decorator
Allows the execution of the decorated run_step() method always.
End of explanation |
11,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
Parameter Estimation with FloPy
This notebook demonstrates the current parameter estimation functionality that is available with FloPy. The capability to write a simple template file for PEST is the only capability implemented so far. The plan is to develop functionality for creating PEST instruction files as well as the PEST control file.
Step1: This notebook will work with a simple model using the dimensions below
Step2: Simple One Parameter Example
In order to create a PEST template file, we first need to define a parameter. For example, let's say we want to parameterize hydraulic conductivity, which is a static variable in flopy and MODFLOW. As a first step, let's define a parameter called HK_LAYER_1 and assign it to all of layer 1. We will not parameterize hydraulic conductivity for layers 2 and 3 and instead leave HK at its value of 10. (as assigned in the block above this one). We can do this as follows.
Step3: At this point, we have enough information to write a PEST template file for the LPF package. We can do this using the following statement
Step4: At this point, the lpf template file will have been created. The following block will print the template file.
Step5: The span variable will also accept 'layers', in which case the parameter applies to the list of layers, as shown next. When 'layers' is specified in the span dictionary, then the original hk value of 10. remains in the array, and the multiplier is specified on the array control line.
Step6: Multiple Parameter Zoned Approach
The params module has a helper function called zonearray2params that will take a zone array and some other information and create a list of parameters, which can then be passed to the template writer. This next example shows how to create a slightly more complicated LPF template file in which both HK and VKA are parameterized.
Step7: In this case, Flopy will create three parameters
Step8: Two-Dimensional Transient Arrays
Flopy supports parameterization of transient two-dimensional arrays, like recharge. This is similar to the approach for three-dimensional static arrays, but there are some important differences in how span is specified. The parameter span here is also a dictionary, and it must contain a 'kpers' key, which corresponds to a list of stress periods (zero based, of course) for which the parameter applies. The span dictionary must also contain an 'idx' key. If span['idx'] is None, then the parameter is a multiplier for those stress periods. If span['idx'] is a tuple (iarray, jarray), where iarray and jarray are lists of array indices, or a boolean array of shape (nrow, ncol), then the parameter applies only to the cells specified in idx.
Step9: Next, we create the parameters
Step10: Multiplier parameters can also be combined with index parameters as follows. | Python Code:
%matplotlib inline
import sys
import numpy as np
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('flopy version: {}'.format(flopy.__version__))
Explanation: FloPy
Parameter Estimation with FloPy
This notebook demonstrates the current parameter estimation functionality that is available with FloPy. The capability to write a simple template file for PEST is the only capability implemented so far. The plan is to develop functionality for creating PEST instruction files as well as the PEST control file.
End of explanation
# Define the model dimensions
nlay = 3
nrow = 20
ncol = 20
# Create the flopy model object and add the dis and lpf packages
m = flopy.modflow.Modflow(modelname='mymodel', model_ws='./data')
dis = flopy.modflow.ModflowDis(m, nlay, nrow, ncol)
lpf = flopy.modflow.ModflowLpf(m, hk=10.)
Explanation: This notebook will work with a simple model using the dimensions below
End of explanation
mfpackage = 'lpf'
partype = 'hk'
parname = 'HK_LAYER_1'
idx = np.empty((nlay, nrow, ncol), dtype=np.bool)
idx[0] = True
idx[1:] = False
# The span variable defines how the parameter spans the package
span = {'idx': idx}
# These parameters have no effect yet, but may in the future
startvalue = 10.
lbound = 0.001
ubound = 1000.
transform='log'
p = flopy.pest.Params(mfpackage, partype, parname, startvalue,
lbound, ubound, span)
Explanation: Simple One Parameter Example
In order to create a PEST template file, we first need to define a parameter. For example, let's say we want to parameterize hydraulic conductivity, which is a static variable in flopy and MODFLOW. As a first step, let's define a parameter called HK_LAYER_1 and assign it to all of layer 1. We will not parameterize hydraulic conductivity for layers 2 and 3 and instead leave HK at its value of 10. (as assigned in the block above this one). We can do this as follows.
End of explanation
tw = flopy.pest.TemplateWriter(m, [p])
tw.write_template()
Explanation: At this point, we have enough information to write a PEST template file for the LPF package. We can do this using the following statement:
End of explanation
lines = open('./data/mymodel.lpf.tpl', 'r').readlines()
for l in lines:
print(l.strip())
Explanation: At this point, the lpf template file will have been created. The following block will print the template file.
End of explanation
mfpackage = 'lpf'
partype = 'hk'
parname = 'HK_LAYER_1-3'
# Span indicates that the hk parameter applies as a multiplier to layers 0 and 2 (MODFLOW layers 1 and 3)
span = {'layers': [0, 2]}
# These parameters have no effect yet, but may in the future
startvalue = 10.
lbound = 0.001
ubound = 1000.
transform='log'
p = flopy.pest.Params(mfpackage, partype, parname, startvalue,
lbound, ubound, span)
tw = flopy.pest.templatewriter.TemplateWriter(m, [p])
tw.write_template()
lines = open('./data/mymodel.lpf.tpl', 'r').readlines()
for l in lines:
print(l.strip())
Explanation: The span variable will also accept 'layers', in which case the parameter applies to the list of layers, as shown next. When 'layers' is specified in the span dictionary, then the original hk value of 10. remains in the array, and the multiplier is specified on the array control line.
End of explanation
# Create a zone array
zonearray = np.ones((nlay, nrow, ncol), dtype=int)
zonearray[0, 10:, 7:] = 2
zonearray[0, 15:, 9:] = 3
zonearray[1] = 4
# Create a list of parameters for HK
mfpackage = 'lpf'
parzones = [2, 3, 4]
parvals = [56.777, 78.999, 99.]
lbound = 5
ubound = 500
transform = 'log'
plisthk = flopy.pest.zonearray2params(mfpackage, 'hk', parzones, lbound,
ubound, parvals, transform, zonearray)
Explanation: Multiple Parameter Zoned Approach
The params module has a helper function called zonearray2params that will take a zone array and some other information and create a list of parameters, which can then be passed to the template writer. This next example shows how to create a slightly more complicated LPF template file in which both HK and VKA are parameterized.
End of explanation
# Create a list of parameters for VKA
parzones = [1, 2]
parvals = [0.001, 0.0005]
zonearray = np.ones((nlay, nrow, ncol), dtype=int)
zonearray[1] = 2
plistvk = flopy.pest.zonearray2params(mfpackage, 'vka', parzones, lbound,
ubound, parvals, transform, zonearray)
# Combine the HK and VKA parameters together
plist = plisthk + plistvk
for p in plist:
print(p.name, p.mfpackage, p.startvalue)
# Write the template file
tw = flopy.pest.templatewriter.TemplateWriter(m, plist)
tw.write_template()
# Print contents of template file
lines = open('./data/mymodel.lpf.tpl', 'r').readlines()
for l in lines:
print(l.strip())
Explanation: In this case, Flopy will create three parameters: hk_2, hk_3, and hk_4, which will apply to the horizontal hydraulic conductivity for cells in zones 2, 3, and 4, respectively. Only those zone numbers listed in parzones will be parameterized. For example, many cells in zonearray have a value of 1. Those cells will not be parameterized. Instead, their hydraulic conductivity values will remain fixed at the value that was specified when the Flopy LPF package was created.
End of explanation
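To double-check how many cells fall into each zone of the current zone array, a quick sketch using numpy:
zones, counts = np.unique(zonearray, return_counts=True)
for z, c in zip(zones, counts):
    # zones that are not listed in parzones are left unparameterized
    print('zone {}: {} cells'.format(z, c))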
# Define the model dimensions (made smaller for easier viewing)
nlay = 3
nrow = 5
ncol = 5
nper = 3
# Create the flopy model object and add the dis and lpf packages
m = flopy.modflow.Modflow(modelname='mymodel', model_ws='./data')
dis = flopy.modflow.ModflowDis(m, nlay, nrow, ncol, nper=nper)
lpf = flopy.modflow.ModflowLpf(m, hk=10.)
rch = flopy.modflow.ModflowRch(m, rech={0: 0.001, 2: 0.003})
Explanation: Two-Dimensional Transient Arrays
Flopy supports parameterization of transient two-dimensional arrays, like recharge. This is similar to the approach for three-dimensional static arrays, but there are some important differences in how span is specified. The parameter span here is also a dictionary, and it must contain a 'kpers' key, which corresponds to a list of stress periods (zero based, of course) for which the parameter applies. The span dictionary must also contain an 'idx' key. If span['idx'] is None, then the parameter is a multiplier for those stress periods. If span['idx'] is a tuple (iarray, jarray), where iarray and jarray are lists of array indices, or a boolean array of shape (nrow, ncol), then the parameter applies only to the cells specified in idx.
End of explanation
plist = []
# Create a multiplier parameter for recharge
mfpackage = 'rch'
partype = 'rech'
parname = 'RECH_MULT'
startvalue = None
lbound = None
ubound = None
transform = None
# For a recharge multiplier, span['idx'] must be None
idx = None
span = {'kpers': [0, 1, 2], 'idx': idx}
p = flopy.pest.Params(mfpackage, partype, parname, startvalue,
lbound, ubound, span)
plist.append(p)
# Write the template file
tw = flopy.pest.TemplateWriter(m, plist)
tw.write_template()
# Print the results
lines = open('./data/mymodel.rch.tpl', 'r').readlines()
for l in lines:
print(l.strip())
Explanation: Next, we create the parameters
End of explanation
plist = []
# Create a multiplier parameter for recharge
mfpackage = 'rch'
partype = 'rech'
parname = 'RECH_MULT'
startvalue = None
lbound = None
ubound = None
transform = None
# For a recharge multiplier, span['idx'] must be None
span = {'kpers': [1, 2], 'idx': None}
p = flopy.pest.Params(mfpackage, partype, parname, startvalue,
lbound, ubound, span)
plist.append(p)
# Now create an index parameter
mfpackage = 'rch'
partype = 'rech'
parname = 'RECH_ZONE'
startvalue = None
lbound = None
ubound = None
transform = None
# For a recharge index parameter, span['idx'] must be a boolean array or tuple of array indices
idx = np.empty((nrow, ncol), dtype=np.bool)
idx[0:3, 0:3] = True
span = {'kpers': [1], 'idx': idx}
p = flopy.pest.Params(mfpackage, partype, parname, startvalue,
lbound, ubound, span)
plist.append(p)
# Write the template file
tw = flopy.pest.templatewriter.TemplateWriter(m, plist)
tw.write_template()
# Print the results
lines = open('./data/mymodel.rch.tpl', 'r').readlines()
for l in lines:
print(l.strip())
Explanation: Multiplier parameters can also be combined with index parameters as follows.
End of explanation |
11,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Embedding CPLEX in scikit-learn
scikit-learn is a widely-used library of Machine-Learning algorithms in Python.
In this notebook, we show how to embed CPLEX as a scikit-learn transformer class.
DOcplex provides transformer classes that take a matrix X of constraints and a vector y of costs and solve a linear problem using CPLEX.
Transformer classes share a transform(X, Y, **params) method which expects
Step1: In the next section we illustrate the range transformer with the Diet Problem, from DOcplex distributed examples.
The Diet Problem
The diet problem is delivered in the DOcplex examples.
Given a breakdown matrix of various foods in elementary nutrients, plus limitations on quantities for foods and nutrients, and food costs, the goal is to find the optimal quantity for each food for a balanced diet.
Step2: Using the transformer with a numpy matrix
In this section we show how to package the decision model into a scikit transformer that takes two inputs
Step3: Then we extract the two vectors of min/max for each nutrient. Each vector has nb_nutrients elements.
We also break the FOODS collection of tuples into columns
Step4: We are now ready to prepare the transformer matrix. This matrix has shape (7, 11) as we
have 7 nutrients and 9 foods, plus the additional min and max columns
Step5: Using the transformer
To use the transformer, create an instance and pass the following parameters to the transform method
- the X matrix of size (M, N+2) containing coefficients for N column variables plus two additional columns for range mins and maxs.
- the Y cost vector
In addition, some data elements that can't be encoded in the matrix itself should be passed as keyword arguments
Step6: Using the transformer with a pandas dataframe
In this section we show how to use a transformer with data stored in a pandas dataframe.
In this case, the row minimum (resp. maximum) values are expected to be stored in column min (resp max).
Prepare the data as a pandas dataframe
In this section we build a pandas dataframe to be passed to the transformer.
We start by extracting the 'food to nutrient' matrix by stripping the names, then
add the two additional columns for min and max values.
Step7: Running the transformer is straightforward. Again we pass the upper bounds of the column variables with the ubs keyword argument, but column names are derived from the dataframe columns, so there is no need to pass a colnames argument.
Step8: Using a transformer with scipy's sparse matrices
In this section we show how to use a scipy sparse matrix with a transformer.
As the Diet Problem matrix is not sparse at all, we change to a small (toy) example
Step9: The cost vector contains only two nonzeros, the first and last slots
Step10: To run the transformer, we specify that column variables have a lower bound of 1 and an upper bound of 2*N (this is not really necessary).
As expected, the result is the sequence of natural numbers. | Python Code:
try:
import numpy as np
except ImportError:
raise RuntimeError('This notebook requires numpy')
try:
import pandas as pd
from pandas import DataFrame
except ImportError:
raise RuntimeError('This notebook requires pandas (not found)')
Explanation: Embedding CPLEX in scikit-learn
scikit-learn is a widely-used library of Machine-Learning algorithms in Python.
In this notebook, we show how to embed CPLEX as a scikit-learn transformer class.
DOcplex provides transformer classes that take a matrix X of constraints and a vector y of costs and solve a linear problem using CPLEX.
Transformer classes share a transform(X, Y, **params) method which expects:
- an X matrix containing the constraints of the linear problem
- a Y vector containing the cost coefficients.
The transformer classes accept matrices in various formats:
Python lists
numpy matrices
pandas dataframes,
scipy's sparse matrices (csr, coo, etc.)
DOcplex transformer classes
There are two DOcplex transformer classes:
CplexLPTransformer expects to solve a linear problem in the classical form:
$$ minimize\ C^{t} x\ s.t.\
Ax <= B$$
Where $A$ is a (M,N) matrix describing the constraints and $B$ is a scalar vector of size M, containing the right hand sides of the constraints, and $C$ is the cost vector of size N. In this case the transformer expects a (M,N+1) matrix, where the last column contains the right hand sides.
CplexRangeTransformer expects to solve a linear problem as a set of range constraints:
$$ minimize\ C^{t} x\ s.t.\
m <= Ax <= M$$
Where $A$ is a (M,N) matrix describing the constraints, $m$ and $M$ are two scalar vectors of size M, containing the minimum and maximum values for the row expressions, and $C$ is the cost vector of size N. In this case the transformer expects a (M,N+2) matrix, where the last two columns contains the minimum and maximum values (in this order).
End of explanation
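As a purely illustrative sketch of the expected matrix layout (plain numpy, not DOcplex API calls), a (M, N+1) input for CplexLPTransformer is just the constraint matrix with the right hand sides appended as a last column:
A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])        # M=3 constraints, N=2 variables
b = np.array([10., 20., 30.])   # right hand sides
X_lp = np.column_stack([A, b])  # shape (M, N+1), as expected by CplexLPTransformer
# for CplexRangeTransformer, append two columns (mins and maxs) instead of one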
# the baseline diet data as Python lists of tuples.
FOODS = [
("Roasted Chicken", 0.84, 0, 10),
("Spaghetti W/ Sauce", 0.78, 0, 10),
("Tomato,Red,Ripe,Raw", 0.27, 0, 10),
("Apple,Raw,W/Skin", .24, 0, 10),
("Grapes", 0.32, 0, 10),
("Chocolate Chip Cookies", 0.03, 0, 10),
("Lowfat Milk", 0.23, 0, 10),
("Raisin Brn", 0.34, 0, 10),
("Hotdog", 0.31, 0, 10)
]
NUTRIENTS = [
("Calories", 2000, 2500),
("Calcium", 800, 1600),
("Iron", 10, 30),
("Vit_A", 5000, 50000),
("Dietary_Fiber", 25, 100),
("Carbohydrates", 0, 300),
("Protein", 50, 100)
]
FOOD_NUTRIENTS = [
("Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2),
("Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2),
("Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1),
("Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3),
("Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2),
("Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9),
("Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1),
("Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4),
("Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4)
]
nb_foods = len(FOODS)
nb_nutrients = len(NUTRIENTS)
print('#foods={0}'.format(nb_foods))
print('#nutrients={0}'.format(nb_nutrients))
assert nb_foods == len(FOOD_NUTRIENTS)
Explanation: In the next section we illustrate the range transformer with the Diet Problem, from DOcplex distributed examples.
The Diet Problem
The diet problem is delivered in the DOcplex examples.
Given a breakdown matrix of various foods in elementary nutrients, plus limitations on quantities for foods and nutrients, and food costs, the goal is to find the optimal quantity for each food for a balanced diet.
End of explanation
mat_fn = np.array([FOOD_NUTRIENTS[f][1:] for f in range(nb_foods)])
print('The food-nutrient matrix has shape: {0}'.format(mat_fn.shape))
Explanation: Using the transformer with a numpy matrix
In this section we show how to package the decision model into a scikit transformer that takes two inputs:
a matrix X, usually denoting the Machine-Learning features, but used here to pass the diet problem data in the form of a nb_nutrients x (nb_foods + 2) matrix. The structure of this matrix is:
for each food, the breakdown quantity of nutrient of the food,
two additional 'min' and 'max' columns contain the range of valid nutrient quantity.
a vector Y, here assumed to contain the costs (size is nb_foods)
Prepare the data as a numpy matrix
In this section we build a numpy matrix to be passed to the transformer.
First, we extract the food to nutrient matrix by stripping the names.
End of explanation
nutrient_mins = [NUTRIENTS[n][1] for n in range(nb_nutrients)]
nutrient_maxs = [NUTRIENTS[n][2] for n in range(nb_nutrients)]
food_names ,food_costs, food_mins, food_maxs = map(list, zip(*FOODS))
Explanation: Then we extract the two vectors of min/max for each nutrient. Each vector has nb_nutrients elements.
We also break the FOODS collection of tuples into columns
End of explanation
# step 1. add two lines for nutrient mins, maxs
nf2 = np.append(mat_fn, np.array([nutrient_mins, nutrient_maxs]), axis=0)
mat_nf = nf2.transpose()
mat_nf.shape
np_costs = np.array(food_costs)
Explanation: We are now ready to prepare the transformer matrix. This matrix has shape (7, 11) as we
have 7 nutrients and 9 foods, plus the additional min and max columns
End of explanation
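A quick sanity check on the layout before calling the transformer:
assert mat_nf.shape == (nb_nutrients, nb_foods + 2)  # 7 rows, 9 + 2 columns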
from docplex.mp.sktrans.transformers import *
np_diet = CplexRangeTransformer().transform(mat_nf, np_costs, ubs=food_maxs, colnames=food_names).sort_values(by='value', ascending=False)
np_diet
Explanation: Using the transformer
To use the transformer, create an instance and pass the following parameters to the transform method
- the X matrix of size (M, N+2) containing coefficients for N column variables plus two additional columns for range mins and maxs.
- the Y cost vector
In addition, some data elements that can't be encoded in the matrix itself should be passed as keyword arguments:
ubs denotes the upper bounds for the column variables that are created. The expected size of this scalar vector is N (when the matrix has size (M,N+2))
colnames is a vector of strings, containing names for the column variables (here the food names). The expected size of this vector is N (when matrix has size (M,N+2))
End of explanation
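# Optional check, not part of the original example: recompute the nutrient intake
# implied by the solution and compare it to the allowed ranges. This assumes np_diet
# has one row per food with 'name' and 'value' columns (as used by the radar charts
# further below); foods absent from the result are treated as zero quantity.
qty = np_diet.set_index('name')['value'].reindex(food_names).fillna(0).values
intake = mat_fn.T.dot(qty)  # one total per nutrient
for n in range(nb_nutrients):
    print('{0}: {1} <= {2:.1f} <= {3}'.format(NUTRIENTS[n][0], nutrient_mins[n], intake[n], nutrient_maxs[n]))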
# convert raw data to dataframes
df_foods = DataFrame(FOODS, columns=["food", "cost", "min", "max"])
df_nutrients = DataFrame(NUTRIENTS, columns = ["nutrient", "min", "max"])
fn_columns = ["food"] + df_nutrients["nutrient"].values.tolist()
# food to nutrients matrix
df_fns = DataFrame(FOOD_NUTRIENTS, columns=fn_columns)
df_fns.set_index('food', inplace=True)
# nutrients to foods
scX = df_fns.T
scX.columns = df_foods['food']
# min/max columns
scX['min'] = df_nutrients['min'].tolist()
scX['max'] = df_nutrients['max'].tolist()
scX.head()
# the cost vector
scY = df_foods['cost'].copy()
scY.index = df_foods['food']
scY.head()
Explanation: Using the transformer with a pandas dataframe
In this section we show how to use a transformer with data stored in a pandas dataframe.
In this case, the row minimum (resp. maximum) values are expected to be stored in column min (resp max).
Prepare the data as a pandas dataframe
In this section we build a pandas dataframe to be passed to the transformer.
We start by extracting the 'food to nutrient' matrix by stripping the names, then
add the two addition columns for min and max values.
End of explanation
df_diet = CplexRangeTransformer().transform(scX, scY, ubs=df_foods['max']).sort_values(by='value', ascending=False)
df_diet
%matplotlib inline
import matplotlib.pyplot as plt
def plot_radar_chart(labels, stats, **kwargs):
angles=np.linspace(0, 2*np.pi, len(labels), endpoint=False)
# close the plot
stats = np.concatenate((stats, [stats[0]]))
angles = np.concatenate((angles, [angles[0]]))
fig = plt.figure()
ax = fig.add_subplot(111, polar=True)
ax.plot(angles, stats, 'o-', linewidth=2, **kwargs)
ax.fill(angles, stats, alpha=0.30, **kwargs)
ax.set_thetagrids(angles * 180/np.pi, labels)
#ax.set_title([df.loc[386,"Name"]])
ax.grid(True)
plot_radar_chart(labels=df_diet['name'], stats=df_diet['value'], color='r')
plot_radar_chart(labels=np_diet['name'], stats=np_diet['value'], color='g')
Explanation: Running the transformer is straightforward. Again we pass the upper bound of the column variables with the ubs keyword argument, but column names are derived from the dataframe columns, so there is no need to pass a colnames argument.
End of explanation
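# Optional consistency check (not in the original notebook): both runs solve the same
# model, so the numpy-based and dataframe-based results should agree. This assumes both
# result frames expose 'name' and 'value' columns, as used in the plots above.
merged = np_diet.merge(df_diet, on='name', suffixes=('_np', '_df'))
print(merged)
print('results agree: {0}'.format(np.allclose(merged['value_np'], merged['value_df'])))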
# N is the size
N = 11
xs = []
ys = []
for i in range(N - 1):
xs.append(i)
ys.append(i)
xs.append(i)
ys.append(i + 1)
data = list([1, -1] * (N - 1))
# add an extra column for rhs
# the rhs is stored as one extra column at the right: -1 for the first N-1 rows, 0 for the last
xs += list(range(N))
ys += [N] * N
data += [-1] * (N - 1)
data += [0]
try:
import scipy as sp
except ImportError:
raise RuntimeError('This notebvook requires SciPy')
# build the CSR matrix from xs, ys, data
spm = sp.sparse.csr_matrix((data, (xs, ys)), shape=(N, N + 1))
Explanation: Using a transformer with scipy's sparse matrices
In this section we show how to use a scipy sparse matrix with a transformer.
As the Diet Problem matrix is not sparse at all, we change to a small (toy) example:
We have N integer variables constrained to be greater than the previous in the list, and we want to minimize the sum of the last and first variable.
The solution is obvious: the sequence of integers from 1 to N, but let' see how we can implement this with a ScipY csr matrix and solve it with CPLEX.
Mathematical description of the problem
$$
\text{minimize } x_{N} + x_{1}\\
\text{s.t. } x_{i+1} \geq x_{i} + 1 \quad \forall i \in \{1,\ldots,N-1\}
$$
Prepare the csr matrix
the csr matrix (see https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.sparse.csr_matrix.html) is built by specifying the value of nonzeros with their row and column indices.
There are $N-1$ constraints of the form $x_{i} - x_{i+1} \leq -1$, so there are only two non-zero coefficients for row $i$:
1 at position $(i,i)$
-1 at position $(i, i+1)$
the right-hand side (rhs for short) is -1 for the first $N-1$ rows, and 0 for the last one.
End of explanation
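# The toy matrix is small, so we can afford to look at its dense form to check the
# structure described above (N constraint rows, N variable columns plus the rhs column).
print(spm.shape)
print(spm.toarray())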
costs = [0] * N
costs[0] = 1
costs[-1] = 1
Explanation: The cost vector contains only two nonzeros, the first and last slots
End of explanation
from docplex.mp.sktrans.transformers import *
res = CplexTransformer().transform(spm, costs, ubs=2*N, lbs=1)
res
Explanation: To run the transformer, we specify that the column variables have a lower bound of 1 and an upper bound of 2*N (the upper bound is not strictly necessary).
As expected, the result is the sequence of natural numbers.
End of explanation |
11,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Playing with rasterio and fiona
Variable declarations
sample_points_filepath - path to sample points shapefile <br />
DEM_filepath - path to DEM raster <br />
elevation_filepath - path to export excel file containing elevation values for each sample site
Step1: Import statements
Step2: Transform points | Python Code:
sample_points_filepath = ""
DEM_filepath = ""
elevation_filepath = ""
Explanation: Playing with rasterio and fiona
Variable declarations
sample_points_filepath - path to sample points shapefile <br />
DEM_filepath - path to DEM raster <br />
elevation_filepath - path to export excel file containing elevation values for each sample site
End of explanation
import rasterio
import fiona
import pandas
import numpy
from pyproj import Proj, transform
from fiona.crs import from_epsg
with fiona.open(sample_points_filepath, 'r') as source_points:
points = [f['geometry']['coordinates'] for f in source_points]
original = Proj(source_points.crs)
destination = Proj(from_epsg(4326))
#destination = Proj(' +proj=latlong +ellps=bessel')
with rasterio.drivers():
with rasterio.open(DEM_filepath) as source_dem:
s = source_dem.sample(points)
elevs = numpy.array([n[0] for n in s])
source_dem.close
source_points.close
Explanation: Import statements
End of explanation
points_projected = []
for p in points:
x, y = p
lat, long = transform(original, destination, x, y)
points_projected.append((long,lat))
points_projected_pd = pandas.DataFrame(points_projected, columns=["lat", "long"])
with fiona.open(sample_points_filepath, 'r') as source_points:
names = numpy.array([p['properties']['NAME'] for p in source_points])
IDs = numpy.array([p['properties']['ID'] for p in source_points])
source_points.close
elevs_names = [{"ID":IDs[i],"elevation":elevs[i], "name":names[i], "latitude":points_projected[i][0], "longitude":points_projected[i][1]} for i in range(len(elevs))]
elevs_pd = pandas.DataFrame(elevs_names)
elevs_pd
elevs_pd.to_excel(elevation_filepath)
Explanation: Transform points
End of explanation |
11,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
cs-1
Numpy
Topics
Step1: Every ndarray is a homogeneous collection of exactly the same data-type
every item takes up the same size block of memory
each block of memory in the array is interpreted in exactly the same way.
Array Creation
Step2: Numpy Attributes
Step3: ndarray.shape
the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension. For a matrix with n rows and m columns, shape will be (n,m). The length of the shape tuple is therefore the rank, or number of dimensions, ndim.
Step4: ndarray.size
Step5: ndarray.dtype
Step6: ndarray.iteamsize
Step7: ndarray.data
the buffer containing the actual elements of the array. Normally, we wonโt need to use this attribute because we will access the elements in an array using indexing facilities.
Step8: CS-2
Topics
Data types,Array creation,
Numeric Ranges,Indexing and slicing.
dtype
Step9: Array creation
Step10: numpy.zeros
Returns a new array of specified size, filled with zeros.
Syntax
Step11: numpy.ones
Returns a new array of specified size and type, filled with ones.
Syntax
Step12: Note
Step13: Numeric ranges
This function returns an ndarray object containing evenly spaced values within a given range.
syntax
Step14: numpy.linspace
This function is similar to arange() function. In this function, instead of step size, the number of evenly spaced values between the interval is specified.
syntax
Step15: numpy.logspace
This function returns an ndarray object that contains the numbers that are evenly spaced on a log scale
syntax
Step16: resize changes the shape and size of array in-place.
Step17: eye returns a 2-D array with ones on the diagonal and zeros elsewhere.
Step18: diag extracts a diagonal or constructs a diagonal array.
Step19: Create an array using repeating list (pythonic way)
Step20: Indexing / Slicing
Three types of indexing methods are available โ field access, basic slicing and advanced indexing.
Step21: To indicate a range. array[start
Step22: cs-3
Topics
Step23: numpy.around()
This is a function that returns the value rounded to the desired precision. The function takes the following parameters.
syntax
Step24: numpy.floor()
This function returns the largest integer not greater than the input parameter. The floor of the scalar x is the largest integer i, such that i <= x. Note that in Python, flooring always is rounded away from 0.
Step25: Basic operations
Step26: Statistical Functions
Step27: Variance is the average of squared deviations, i.e., mean(abs(x - x.mean())**2).
In other words, the standard deviation is the square root of variance.
Step28: Copies & Views
Step29: Broadcasting
Step30: Note
Step31: Let's look at transposing arrays. Transposing permutes the dimensions of the array.
Step32: Dot Product
Step33: Iterating Over Arrays
create a new 4 by 3 array of random numbers 0-9.
Step34: NumPy package contains an iterator object numpy.nditer. It is an efficient multidimensional iterator object using which it is possible to iterate over an array.
Step35: ix_() function
Step36: cs-4
Topics
Step37: NumPy package contains numpy.linalg module that provides all the functionality required for linear algebra
Step38: Using Matplotlib with numpy
Step39: Overview | Python Code:
import numpy as np
import numpy.matlib
Explanation: cs-1
Numpy
Topics:
Intro to numpy,
Ndarray Object,
Eg Array creation,
Array Attributes
Numpy:
NumPy is the fundamental package needed for scientific computing with Python. It contains:
a powerful N-dimensional array object
basic linear algebra functions
basic Fourier transforms
sophisticated random number capabilities
Extra features:
- fast, multidimensional arrays
- libraries of reliable, tested scientific functions
- plotting tools
NumPy is at the core of nearly every scientific Python application or module since it provides a fast N-d array datatype that can be manipulated in a vectorized form.
Why do we need numpy?
Lists are ok for storing small amounts of one-dimensional data.
But, they can't be used directly with arithmetical operators (+, -, *, /, ...)
We need efficient arrays with arithmetic and better multidimensional tools
How Numpy is useful
Similar to lists, but much more capable, except fixed size
NumPy is a hybrid of the older NumArray and Numeric packages, and is meant to replace them both.
NumPy adds a new data structure to Python - the ndarray.
An N-dimensional array is a homogeneous collection of "items" indexed using N integers
Defined by:
The shape of the array, and the kind of item the array is composed of.
End of explanation
# Eg : one dimensional
a = np.array([1,2,3,4])
print("One dim ")
print(a)
print(type(a))
#more than one dimension
b = np.array([[1, 2], [3, 4]])
print("Two dims")
print(b)
#using ndim
c=np.array([1,2,3,4,5], ndmin = 2)
print("Two dimensional")
print(c.ndim)
print(c.shape)
print(c)
#dtype:
np.array([1, 2, 3], dtype = complex)
Explanation: Every ndarray is a homogeneous collection of exactly the same data-type
every item takes up the same size block of memory
each block of memory in the array is interpreted in exactly the same way.
Array Creation :
There are a number of ways to initialize new numpy arrays, for example from
โ a Python list or tuples
โ using functions that are dedicated to generating numpy arrays, such as arange, linspace, etc.
โ reading data from files
The basic ndarray is created using an array function in NumPy: numpy.array
syntax : numpy.array(object, dtype = None, copy = True, order = None, subok = False, ndmin = 0)
returns a array object
End of explanation
arrey=np.array([[1,2,3],[4,5,6]])
arrey.ndim
print(arrey)
Explanation: Numpy Attributes :
NumPy's array class is called ndarray. It is also known by the alias array. Note that numpy.array is not the same as the Standard Python Library class array.array, which only handles one-dimensional arrays and offers less functionality.
ndarray.ndim
the number of axes (dimensions) of the array. In the Python world, the number of dimensions is referred to as rank.This array attribute returns a tuple consisting of array dimensions.
End of explanation
arrey = np.array([[1,2,3],[4,5,6]])
print(arrey)
print(arrey.shape)
#resize ndarray
arrey = np.array([[1,2,3],[4,5,6]])
arrey.shape = (3,2)
print(arrey)
#Resize: NumPy also provides a reshape function to resize an array.
barray = arrey.reshape(2,3)
print(barray)
Explanation: ndarray.shape
the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension. For a matrix with n rows and m columns, shape will be (n,m). The length of the shape tuple is therefore the rank, or number of dimensions, ndim.
End of explanation
arrey.size
Explanation: ndarray.size :
Total number of elements of the array. This is equal to the product of the elements of shape.
End of explanation
arrey.dtype
Explanation: ndarray.dtype :
an object describing the type of the elements in the array. One can create or specify dtypes using standard Python types. Additionally NumPy provides types of its own. numpy.int32, numpy.int16, and numpy.float64 are some examples.
End of explanation
#ax = np.array([1,2,3,4,5], dtype = np.int16)
ax = np.array([1,2,3,4,5], dtype = np.float32)
ax.itemsize
Explanation: ndarray.iteamsize:
This array attribute returns the length of each element of array in bytes.
End of explanation
ax.data
Explanation: ndarray.data
the buffer containing the actual elements of the array. Normally, we wonโt need to use this attribute because we will access the elements in an array using indexing facilities.
End of explanation
dt = np.dtype(np.int32)
dt
Explanation: CS-2
Topics
Data types,Array creation,
Numeric Ranges,Indexing and slicing.
dtype:
A dtype object is constructed using the following
syntax -
numpy.dtype(object, align, copy)
Object - To be converted to data type object
Align - If true, adds padding to the field to make it similar to C-struct
Copy - Makes a new copy of the dtype object. If false, the result is a reference to the builtin data type object.
End of explanation
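# dtype objects can also describe structured data; a small illustrative example
# (not from the original tutorial): a single field named 'age' stored as a 1-byte integer
dt = np.dtype([('age', np.int8)])
a = np.array([(10,),(20,),(30,)], dtype = dt)
print(a['age'])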
np.empty([3,3], dtype = int)
Explanation: Array creation:
NumPy offers several functions to create arrays with initial placeholder content.
numpy.empty
Syntax: numpy.empty(shape, dtype = float, order = 'C')
Shape : Shape of an empty array in int or tuple of int
Dtype : Desired output data type. Optional
Order :'C' for C-style row-major array, 'F' for FORTRAN style column-major array
End of explanation
print(np.zeros(5))
np.zeros((3,3))
Explanation: numpy.zeros
Returns a new array of specified size, filled with zeros.
Syntax : numpy.zeros(shape, dtype = float, order = 'C')
End of explanation
np.ones(5)
np.ones([2,2], dtype = int)
Explanation: numpy.ones
Returns a new array of specified size and type, filled with ones.
Syntax : numpy.ones(shape, dtype = None, order = 'C')
End of explanation
x = [1,2,3]
a = np.asarray(x)
print(a)
print(type(a))
a.shape
Explanation: Note: zeros_like, ones_like, empty_like, arange, fromfunction, fromfile
numpy.asarray
This function is similar to numpy.array except for the fact that it has fewer parameters.
syntax : numpy.asarray(a, dtype = None, order = None)
End of explanation
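# The *_like helpers mentioned in the note above create new arrays with the same
# shape (and dtype) as an existing one. A quick illustration:
x = np.array([[1,2,3],[4,5,6]])
print(np.zeros_like(x))
print(np.ones_like(x))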
np.arange(5,9,2)
Explanation: Numeric ranges
This function returns an ndarray object containing evenly spaced values within a given range.
syntax: numpy.arange(start, stop, step, dtype)
End of explanation
np.linspace(10,20,num=5,endpoint=False,retstep=False)
Explanation: numpy.linspace
This function is similar to arange() function. In this function, instead of step size, the number of evenly spaced values between the interval is specified.
syntax: numpy.linspace(start, stop, num, endpoint, retstep, dtype)
retstep : If true, returns samples and step between the consecutive numbers.
endpoint : True by default, hence the stop value is included in the sequence. If false, it is not included
End of explanation
np.logspace(1.0, 2.0, num = 5)
np.logspace(1.0, 2.0, num = 5,base=2)
Explanation: numpy.logspace
This function returns an ndarray object that contains the numbers that are evenly spaced on a log scale
syntax : numpy.logscale(start, stop, num, endpoint, base, dtype)
End of explanation
o = np.linspace(0, 4, 9)
print(o)
o.resize(3, 3)
o
Explanation: resize changes the shape and size of array in-place.
End of explanation
np.eye(2)
#import numpy.matlib
#np.matlib.eye(n = 3, M = 4, k = 0, dtype = float)
Explanation: eye returns a 2-D array with ones on the diagonal and zeros elsewhere.
End of explanation
y=[1,2,3]
np.diag(y)
Explanation: diag extracts a diagonal or constructs a diagonal array.
End of explanation
#using numpy
np.repeat([1, 2, 3], 3)
p = np.ones([2, 3], int)
p
#vstack to stack arrays in sequence vertically (row wise).
np.vstack([p, 2*p])
#hstack to stack arrays in sequence horizontally (column wise)
np.hstack([p, 2*p])
Explanation: Create an array using repeating list (pythonic way)
End of explanation
s = np.arange(13)*2
s
#indexing
s[0], s[4], s[-1]
Explanation: Indexing / Slicing
Three types of indexing methods are available - field access, basic slicing and advanced indexing.
End of explanation
s[1:5]
#Use negatives to count from the back.
s[-4:]
#can be used to indicate step-size. array[start:stop:stepsize]
#Here we are starting at index 5 and taking every 2nd element until the end of the array is reached.
s[5::2]
#Let's look at a multidimensional array.
m = np.arange(36)
m.resize((6, 6))
m
#Use bracket notation to slice: array[row, column]
m[2, 2]
#to select a range of rows or columns
m[3, 3:]
#We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30.
m[m > 30]
#Here we are assigning all values in the array that are greater than 30 to the value of 30
m[m > 30] = 30
m
x = np.arange(10)
print(x)
s=slice(2,7,2)
print("Done",x[s])
Explanation: To indicate a range, use array[start:stop].
Leaving start or stop empty will default to the beginning/end of the array.
End of explanation
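# Besides basic slicing, NumPy also supports advanced indexing (mentioned earlier):
arr = np.arange(10)*10
print(arr[[1, 3, 5]]) # integer-array indexing picks specific positions
print(arr[arr > 40]) # boolean indexing keeps elements matching a condition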
print(np.sin(0))
a = np.array([0,30,45,60,90])
print ('Sine of different angles:')
# Convert to radians by multiplying with pi/180
print (np.sin(a*np.pi/180))
print ('Cosine values for angles in array:')
print (np.cos(a*np.pi/180) )
print ('Tangent values for given angles:')
print (np.tan(a*np.pi/180))
#inverse tri
a = np.array([0,30,45,60,90])
#print 'Array containing sine values:'
sin = np.sin(a*np.pi/180)
print( sin )
print ('\n')
print ('Compute sine inverse of angles. Returned values are in radians.')
inv = np.arcsin(sin)
print (inv )
print ('\n')
print( 'Check result by converting to degrees:' )
print (np.degrees(inv))
print ('\n')
print ('arccos and arctan functions behave similarly:' )
cos = np.cos(a*np.pi/180)
print (cos)
print ('\n')
print ('Inverse of cos:')
inv = np.arccos(cos)
print (inv)
print ('\n')
print ('In degrees:')
print (np.degrees(inv))
print ('\n')
print ('Tan function:' )
tan = np.tan(a*np.pi/180)
print (tan)
print ('Inverse of tan:')
inv = np.arctan(tan)
print (inv)
print ('\n')
print ('In degrees:' )
print (np.degrees(inv))
Explanation: cs-3
Topics :
Math functions,
Basic operations,
Statistical Functions,
Copies & Views,
Broadcasting,
Iterating Over Array,
ix() function
Math functions :NumPy contains a large number of various mathematical operations. NumPy provides standard trigonometric functions, functions for arithmetic operations, handling complex numbers, etc.
Trigonometric Functions:
NumPy has standard trigonometric functions which return trigonometric ratios for a given angle in radians.
np.sin()
np.cos()
np.tan()
arcsin, arccos, and arctan functions return the trigonometric inverse of sin, cos, and tan of the given angle.
The result of these functions can be verified by numpy.degrees() function by converting radians to degrees.
End of explanation
#round off
a = np.array([1.0,5.55, 123, 0.567, 25.532])
print ('Original array:')
print (a )
print ('\n')
print ('After rounding:')
print (np.around(a))
print (np.around(a, decimals = 1))
Explanation: numpy.around()
This is a function that returns the value rounded to the desired precision. The function takes the following parameters.
syntax : numpy.around(a,decimals)
End of explanation
a = np.array([-1.7, 1.5, -0.2, 0.6, 10])
print ('array:')
print (a)
print ('\n')
print ('The modified array:')
#floor returns the largest integer not greater than each element
print (np.floor(a))
#ceil returns the smallest integer not less than each element
print (np.ceil(a))
Explanation: numpy.floor()
This function returns the largest integer not greater than the input parameter. The floor of the scalar x is the largest integer i, such that i <= x. Note that flooring rounds towards negative infinity, so negative values are rounded away from 0.
End of explanation
x=np.array([1,2,3])
y=np.array([4,5,6])
print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]
print('\n')
print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]
print(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]
a = np.arange(9, dtype = np.float).reshape(3,3)
print ('First array:')
print (a )
print ('Second array:' )
b = np.array([10,10,10])
print (b )
print ('\n')
print ('Add the two arrays:')
print (np.add(a,b))
print ('\n')
print ('Subtract the two arrays:')
print (np.subtract(a,b))
print ('\n')
print ('Multiply the two arrays:')
print (np.multiply(a,b))
print ('Divide the two arrays:')
print (np.divide(a,b))
Explanation: Basic operations:
Input arrays for performing arithmetic operations such as add(), subtract(), multiply(), and divide() must be either of the same shape or should conform to array broadcasting rules.
Use +, -, *, / and ** to perform element-wise addition, subtraction, multiplication, division and power.
End of explanation
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
np.average(a)
a.mean()
a.std() #Standard deviation is the square root of the average of squared deviations from mean
Explanation: Statistical Functions:
NumPy has quite a few useful statistical functions for finding the minimum, maximum, percentile, standard deviation and variance, etc. of the given elements in the array.
End of explanation
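# median and percentile are mentioned above but not shown in the code; a brief illustration:
b = np.array([1,3,5,7,9])
print(np.median(b))
print(np.percentile(b, 75))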
np.var([1,2,3,4])
a.argmax()
a.argmin()
Explanation: Variance is the average of squared deviations, i.e., mean(abs(x - x.mean())**2).
In other words, the standard deviation is the square root of variance.
End of explanation
#no copy
a = np.arange(6)
print ('Our array is:' )
print (a )
print ('Applying id() function:')
print (id(a))
print ('a is assigned to b:' )
b = a
print (b)
print ('b has same id():')
print (id(b))
print ('Change shape of b:')
b.shape = 3,2
print (b)
print ('Shape of a also gets changed:')
print (a)
#view
a = np.array([1,2,3,4])
#print 'Array a:'
print (a )
print(id(a))
#Create view of a:
b = a.view()
print( b )
b.shape=(2,2)
print(id(b))
print (b is a)
print(b.shape)
print(a.shape)
#copy
a = np.array([[10,10], [2,3], [4,5]])
print ('Array a is:')
print( a)
# 'Create a deep copy of a:'
b = a.copy()
print ('Array b is:')
print (b)
#b does not share any memory of a
print ('Can we write b is a')
print (b is a)
Explanation: Copies & Views :
No Copy:
Simple assignments do not make the copy of array object. Instead, it uses the same id() of the original array to access it. The id() returns a universal identifier of Python object, similar to the pointer in C.
View or Shallow Copy:
NumPy has ndarray.view() method which is a new array object that looks at the same data of the original array.
Deep copy:
The ndarray.copy() function creates a deep copy. It is a complete copy of the array and its data, and doesn't share anything with the original array.
End of explanation
#normal example
a = np.array([1,2,3,4])
b = np.array([10,20,30,40])
print(a.shape)
print(b.shape)
c = a * b
print (c)
#Broadcasting
x = np.arange(4)
y = np.ones(5)
xb=x.reshape(4,1)
print(xb)
#bd
print(xb + y)
(xb + y).shape
Explanation: Broadcasting :
The term broadcasting refers to the ability of NumPy to treat arrays of different shapes during arithmetic operations.
End of explanation
#Matrix operations
z = np.array([y, y**2])
print(len(z)) # number of rows of array
Explanation: Note : If the dimensions of two arrays are dissimilar, element-to-element operations are not possible. However, operations on arrays of non-similar shapes is still possible in NumPy, because of the broadcasting capability.
End of explanation
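# When shapes are not compatible under the broadcasting rules, NumPy raises an error:
try:
    print(np.array([1,2,3]) + np.array([1,2,3,4]))
except ValueError as e:
    print('broadcasting failed:', e)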
y=np.arange(5)
z = np.array([y, y ** 2])
z
#The shape of array z is (2,3) before transposing.
z.shape
z.T
Explanation: Let's look at transposing arrays. Transposing permutes the dimensions of the array.
End of explanation
x=np.array([1,2,3])
y=np.array([4,5,6])
x.dot(y) # dot product 1*4 + 2*5 + 3*6
Explanation: Dot Product:
$[x_1,x_2,x_3] \cdot [y_1,y_2,y_3] = x_1y_1+x_2y_2+x_3y_3$
End of explanation
tp = np.random.randint(0, 10, (4,3))
tp
#Iterate by row:
for row in tp:
print(row)
#Iterate by index:
for i, row in enumerate(tp):
print('row', i, 'is', row)
#Use zip to iterate over multiple iterables.
tp2=tp*2
tp2
for i, j in zip(tp, tp2):
print(i,'+',j,'=',i+j)
Explanation: Iterating Over Arrays
create a new 4 by 3 array of random numbers 0-9.
End of explanation
a = np.arange(0,60,5)
a = a.reshape(3,4)
print ('Original array is:')
print (a)
print ('\n')
print ('Modified array is:')
for x in np.nditer(a):
print (x)
Explanation: NumPy package contains an iterator object numpy.nditer. It is an efficient multidimensional iterator object using which it is possible to iterate over an array.
End of explanation
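# nditer also lets us control the traversal order; with order='F' the same 3x4 array
# 'a' from above is visited column by column (Fortran order).
for x in np.nditer(a, order = 'F'):
    print (x)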
a = np.array([2,3,4,5])
b = np.array([8,5,4])
c = np.array([5,4,6,8,3])
ax,bx,cx = np.ix_(a,b,c)
result = ax+bx*cx
result
result[3,2,4]
a[3]+b[2]*c[4]
Explanation: ix_() function:
The ix_ function can be used to combine different vectors so as to obtain the result for each n-tuplet.
End of explanation
#NumPy package contains a Matrix library numpy.matlib.
import numpy.matlib
#matlib.empty()
#numpy.matlib.empty(shape, dtype, order)
print (np.matlib.empty((2,2)))
print('\n')
#ones
print (np.matlib.ones((2,2)))
print('\n')
#random
print (np.matlib.rand(3,3))
print('\n')
#This function returns the matrix filled with zeros.
#numpy.matlib.zeros()
print (np.matlib.zeros((2,2)))
print('\n')
#numpy.matlib.eye()
#This function returns a matrix with 1 along the diagonal elements and the zeros elsewhere. The function takes the following parameters.
#numpy.matlib.eye(n, M,k, dtype)
print (np.matlib.eye(n = 3, M = 3, k = 1, dtype = float))
print('\n')
#numpy.matlib.identity()
#The numpy.matlib.identity() function returns the Identity matrix of the given size.
#An identity matrix is a square matrix with all diagonal elements as 1.
np.matlib.identity(3)
#creation matrix
i = np.matrix('1,2,3,4')
print(i)
#array to matrix
list=[1,2,3,4]
k = np.asmatrix (list)
print(k)
print(type(k))
Explanation: cs-4
Topics :
Matlib subpackage,
matrix,
linear algebra method,
matplotlib using numpy.
End of explanation
#det
b = np.array([[6,1,1], [4, -2, 5], [2,8,7]])
print (b)
print('\n')
print (np.linalg.det(b))
print('\n')
print (6*(-2*7 - 5*8) - 1*(4*7 - 5*2) + 1*(4*8 - -2*2))
#dot
#Dot product of the two arrays
#vdot
#Dot product of the two vectors
#linear
dou = np.array([[1,2],[3,4]])
bou = np.array([[11,12],[13,14]])
print(np.dot(dou,bou)) #[[1*11+2*13, 1*12+2*14],[3*11+4*13, 3*12+4*14]]
print('\n')
print(np.vdot(dou,bou)) #1*11 + 2*12 + 3*13 + 4*14
#Solve the system of equations 3 * x0 + x1 = 9 and x0 + 2 * x1 = 8:
al = np.array([[3,1], [1,2]])
bl = np.array([9,8])
x = np.linalg.solve(al, bl)
print(x)
a = np.array([[1,1,1],[0,2,5],[2,5,-1]])
#'Array a
print (a)
print('\n')
ainv = np.linalg.inv(a)
print(ainv)
Explanation: NumPy package contains numpy.linalg module that provides all the functionality required for linear algebra
End of explanation
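# Another commonly used routine is numpy.linalg.eig, which returns the eigenvalues
# and eigenvectors of a square matrix. A small example:
m = np.array([[2,0],[0,3]])
w, v = np.linalg.eig(m)
print ('Eigenvalues:')
print (w)
print ('Eigenvectors:')
print (v)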
from matplotlib import pyplot as plt
x = np.arange(1,11)
y = 2 * x + 5
plt.title("Matplotlib demo")
plt.xlabel("x axis caption")
plt.ylabel("y axis caption")
plt.plot(x,y)
plt.show()
N = 8
y = np.zeros(N)
y
x1 = np.linspace(0, 10, N, endpoint=True)
x2 = np.linspace(0, 10, N, endpoint=False)
plt.plot(x1, y, 'o')
plt.plot(x2, y + 0.5, 'o')
plt.ylim([-0.5, 1])
plt.show()
Explanation: Using Matplotlib with numpy
End of explanation
import time
import numpy as np
size_of_vec = 100000
def pure_python_version():
t1 = time.time()
X = range(size_of_vec)
Y = range(size_of_vec)
Z = []
for i in range(len(X)):
Z.append(X[i] + Y[i])
return time.time() - t1
def numpy_version():
t1 = time.time()
X = np.arange(size_of_vec)
Y = np.arange(size_of_vec)
Z = X + Y
return time.time() - t1
t1 = pure_python_version()
t2 = numpy_version()
print(t1, t2)
#print("this example Numpy is " + str(t1/t2) + " faster!")
Explanation: Overview
End of explanation |
11,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the recommendation library
Step1: Understanding Movie Similarity
Try with different movies
Try with different types of similarity metrics (look in /src/similarity.py)
Which similarity metric works the best?
Step2: Creating recommendations for your personal ratings
Try with different similarity metrics (look in /src/similarity.py)
Try with different values of K (K is the number of neighbours to consider when generating the recommendations)
Which combination of K and similarity metric works better? Discuss it with others.
import os
os.chdir('..')
# Import all the packages we need to generate recommendations
import numpy as np
import pandas as pd
import src.utils as utils
import src.recommenders as recommenders
import src.similarity as similarity
# imports necesary for plotting
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Enable logging on Jupyter notebook
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# loads dataset
dataset_folder = os.path.join(os.getcwd(), 'data')
dataset_folder_ready = utils.load_dataset(dataset_folder)
# adds personal ratings to original dataset ratings file.
ratings_file = os.path.join(dataset_folder, 'ml-latest-small','ratings-merged.csv')
[ratings, my_customer_number] = utils.merge_datasets(dataset_folder_ready, ratings_file)
# the data is stored in a long pandas dataframe
# we need to pivot the data to create a [user x movie] matrix
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_value=0)
ratings_matrix = ratings_matrix.transpose()
Explanation: Using the recommendation library
End of explanation
# find similar movies
# try with different movie titles and see what happens
movie_title = 'Star Wars: Episode VI - Return of the Jedi (1983)'
similarity_type = "cosine"
logger.info('top-10 movies similar to %s, using %s similarity', movie_title, similarity_type)
print(similarity.compute_nearest_neighbours(movie_title, ratings_matrix, similarity_type)[0:10])
# find similar movies
# try with different movie titles and see what happens
movie_title = 'All About My Mother (Todo sobre mi madre) (1999)'
similarity_type = "pearson"
logger.info('top-10 movies similar to: %s, using %s similarity', movie_title, similarity_type)
print(similarity.compute_nearest_neighbours(movie_title, ratings_matrix, similarity_type)[0:10])
Explanation: Understanding Movie Similarity
Try with different movies
Try with different types of similarity metrics (look in /src/similarity.py)
Which similarity metric works the best?
End of explanation
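# Following the suggestion above, a small sweep over both metrics for the same movie
# makes the comparison easier (only metric names already used in this notebook).
movie_title = 'Star Wars: Episode VI - Return of the Jedi (1983)'
for metric in ['cosine', 'pearson']:
    logger.info('top-5 movies similar to %s, using %s similarity', movie_title, metric)
    print(similarity.compute_nearest_neighbours(movie_title, ratings_matrix, metric)[0:5])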
# get recommendations for a single user
recommendations = recommenders.recommend_uknn(ratings, my_customer_number, K=200, similarity_metric='cosine', N=10)
recommendations
# get recommendations for a single user
recommendations = recommenders.recommend_iknn(ratings, my_customer_number, K=100, similarity_metric='cosine')
recommendations
Explanation: Creating recommendations for your personal ratings
Try with different similarity metrics (look in /src/similarity.py)
Try with different values of K (K is the number of neighbours to consider when generating the recommendations)
Which combination of K and similarity metric works better? Discuss it with others.
End of explanation |
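# A quick way to explore the effect of K is to loop over a few values
# (the K values below are arbitrary choices for illustration).
for K in [50, 100, 200]:
    recs = recommenders.recommend_uknn(ratings, my_customer_number, K=K, similarity_metric='cosine', N=10)
    print('K = {0}'.format(K))
    print(recs)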
11,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning with Shogun
By Saurabh Mahindre - <a href="https
Step1: In a general problem setting for the supervised learning approach, the goal is to learn a mapping from inputs $x_i\in\mathcal{X}$ to outputs $y_i \in \mathcal{Y}$, given a labeled set of input-output pairs $\mathcal{D} = \{(x_i,y_i)\}_{i=1}^{\text N} \subseteq \mathcal{X} \times \mathcal{Y}$. Here $\mathcal{D}$ is called the training set, and $\text N$ is the number of training examples. In the simplest setting, each training input $x_i$ is a $D$-dimensional vector of numbers, representing, say, the height and weight of a person. These are called $\textbf{features}$, attributes or covariates. In general, however, $x_i$ could be a complex structured object, such as an image.<ul><li>When the response variable $y_i$ is categorical and discrete, $y_i \in$ {1,...,C} (say male or female) it is a classification problem.</li><li>When it is continuous (say the prices of houses) it is a regression problem.</li></ul>
For the unsupervised learning approach we are only given inputs, $\mathcal{D} = \{(x_i)\}_{i=1}^{\text N}$, and the goal is to find "interesting patterns" in the data.
Using datasets
Let us consider an example, we have a dataset about various attributes of individuals and we know whether or not they are diabetic. The data reveals certain configurations of attributes that correspond to diabetic patients and others that correspond to non-diabetic patients. When given a set of attributes for a new patient, the goal is to predict whether the patient is diabetic or not. This type of learning problem falls under Supervised learning, in particular, classification.
Shogun provides the capability to load datasets of different formats using CFile.</br> A real world dataset
Step2: This results in a LibSVMFile object which we will later use to access the data.
Feature representations
To get off the mark, let us see how Shogun handles the attributes of the data using CFeatures class. Shogun supports wide range of feature representations. We believe it is a good idea to have different forms of data, rather than converting them all into matrices. Among these are
Step3: In numpy, this is a matrix of 2 row-vectors of dimension 768. However, in Shogun, this will be a matrix of 768 column vectors of dimension 2. This is beacuse each data sample is stored in a column-major fashion, meaning each column here corresponds to an individual sample and each row in it to an atribute like BMI, Glucose concentration etc. To convert the extracted matrix into Shogun format, RealFeatures are used which are nothing but the above mentioned Dense features of 64bit Float type. To do this call RealFeatures with the matrix (this should be a 64bit 2D numpy array) as the argument.
Step4: Some of the general methods you might find useful are
Step5: Assigning labels
In supervised learning problems, training data is labelled. Shogun provides various types of labels to do this through Clabels. Some of these are
Step6: The labels can be accessed using get_labels and the confidence vector using get_values. The total number of labels is available using get_num_labels.
Step7: Preprocessing data
It is usually better to preprocess data to a standard form rather than handling it in raw form. The reasons are having a well behaved-scaling, many algorithms assume centered data, and that sometimes one wants to de-noise data (with say PCA). Preprocessors do not change the domain of the input features. It is possible to do various type of preprocessing using methods provided by CPreprocessor class. Some of these are
Step8: Horizontal and vertical lines passing through zero are included to make the processing of data clear. Note that the now processed data has zero mean.
<a id='supervised'>Supervised Learning with Shogun's <a href='http
Step9: We will now apply on test features to get predictions. For visualising the classification boundary, the whole XY is used as test data, i.e. we predict the class on every point in the grid.
Step10: Let us have a look at the weight vector of the separating hyperplane. It should tell us about the linear relationship between the features. The decision boundary is now plotted by solving for $\bf{w}\cdot\bf{x}$ + $\text{b}=0$. Here $\text b$ is a bias term which allows the linear function to be offset from the origin of the used coordinate system. Methods get_w() and get_bias() are used to get the necessary values.
Step11: For this problem, a linear classifier does a reasonable job in distinguishing labelled data. An interpretation could be that individuals below a certain level of BMI and glucose are likely to have no Diabetes.
For problems where the data cannot be separated linearly, there are more advanced classification methods, as for example all of Shogun's kernel machines, but more on this later. To play with this interactively have a look at this
Step12: Let's see the accuracy by applying on test features.
Step13: To evaluate more efficiently cross-validation is used. As you might have wondered how are the parameters of the classifier selected? Shogun has a model selection framework to select the best parameters. More description of these things in this notebook.
More predictions
Step14: The tool we will use here to perform regression is Kernel ridge regression. Kernel Ridge Regression is a non-parametric version of ridge regression where the kernel trick is used to solve a related linear ridge regression problem in a higher-dimensional space, whose results correspond to non-linear regression in the data-space. Again we train on the data and apply on the XY grid to get predicitions.
Step15: The out variable now contains a relationship between the attributes. Below is an attempt to establish such relationship between the attributes individually. Separate feature instances are created for each attribute. You could skip the code and have a look at the plots directly if you just want the essence. | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
#To import all Shogun classes
from shogun import *
Explanation: Machine Learning with Shogun
By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de/">herrstrathmann.de</a>
In this notebook we will see how machine learning problems are generally represented and solved in Shogun. As a primer to Shogun's many capabilities, we will see how various types of data and its attributes are handled and also how prediction is done.
Introduction
Using datasets
Feature representations
Labels
Preprocessing data
Supervised Learning with Shogun's CMachine interface
Evaluating performance and Model selection
Example: Regression
Introduction
Machine learning concerns the construction and study of systems that can learn from data via exploiting certain types of structure within these. The uncovered patterns are then used to predict future data, or to perform other kinds of decision making. Two main classes (among others) of Machine Learning algorithms are: predictive or supervised learning and descriptive or Unsupervised learning. Shogun provides functionality to address those (and more) problem classes.
End of explanation
#Load the file
data_file=LibSVMFile(os.path.join(SHOGUN_DATA_DIR, 'uci/diabetes/diabetes_scale.svm'))
Explanation: In a general problem setting for the supervised learning approach, the goal is to learn a mapping from inputs $x_i\in\mathcal{X}$ to outputs $y_i \in \mathcal{Y}$, given a labeled set of input-output pairs $\mathcal{D} = \{(x_i,y_i)\}_{i=1}^{\text N} \subseteq \mathcal{X} \times \mathcal{Y}$. Here $\mathcal{D}$ is called the training set, and $\text N$ is the number of training examples. In the simplest setting, each training input $x_i$ is a $D$-dimensional vector of numbers, representing, say, the height and weight of a person. These are called $\textbf{features}$, attributes or covariates. In general, however, $x_i$ could be a complex structured object, such as an image.<ul><li>When the response variable $y_i$ is categorical and discrete, $y_i \in$ {1,...,C} (say male or female) it is a classification problem.</li><li>When it is continuous (say the prices of houses) it is a regression problem.</li></ul>
For the unsupervised learning approach we are only given inputs, $\mathcal{D} = \{(x_i)\}_{i=1}^{\text N}$, and the goal is to find "interesting patterns" in the data.
Using datasets
Let us consider an example, we have a dataset about various attributes of individuals and we know whether or not they are diabetic. The data reveals certain configurations of attributes that correspond to diabetic patients and others that correspond to non-diabetic patients. When given a set of attributes for a new patient, the goal is to predict whether the patient is diabetic or not. This type of learning problem falls under Supervised learning, in particular, classification.
Shogun provides the capability to load datasets of different formats using CFile.</br> A real world dataset: Pima Indians Diabetes data set is used now. We load the LibSVM format file using Shogun's LibSVMFile class. The LibSVM format is: $$\space \text {label}\space \text{attribute1:value1 attribute2:value2 }...$$$$\space.$$$$\space .$$ LibSVM uses the so called "sparse" format where zero values do not need to be stored.
End of explanation
f=SparseRealFeatures()
trainlab=f.load_with_labels(data_file)
mat=f.get_full_feature_matrix()
#exatract 2 attributes
glucose_conc=mat[1]
BMI=mat[5]
#generate a numpy array
feats=array(glucose_conc)
feats=vstack((feats, array(BMI)))
print feats, feats.shape
Explanation: This results in a LibSVMFile object which we will later use to access the data.
Feature representations
To get off the mark, let us see how Shogun handles the attributes of the data using CFeatures class. Shogun supports wide range of feature representations. We believe it is a good idea to have different forms of data, rather than converting them all into matrices. Among these are: $\hspace {20mm}$<ul><li>String features: Implements a list of strings. Not limited to character strings, but could also be sequences of floating point numbers etc. Have varying dimensions. </li> <li>Dense features: Implements dense feature matrices</li> <li>Sparse features: Implements sparse matrices.</li><li>Streaming features: For algorithms working on data streams (which are too large to fit into memory) </li></ul>
SpareRealFeatures (sparse features handling 64 bit float type data) are used to get the data from the file. Since LibSVM format files have labels included in the file, load_with_labels method of SpareRealFeatures is used. In this case it is interesting to play with two attributes, Plasma glucose concentration and Body Mass Index (BMI) and try to learn something about their relationship with the disease. We get hold of the feature matrix using get_full_feature_matrix and row vectors 1 and 5 are extracted. These are the attributes we are interested in.
End of explanation
#convert to shogun format
feats_train=RealFeatures(feats)
Explanation: In numpy, this is a matrix of 2 row-vectors of dimension 768. However, in Shogun, this will be a matrix of 768 column vectors of dimension 2. This is beacuse each data sample is stored in a column-major fashion, meaning each column here corresponds to an individual sample and each row in it to an atribute like BMI, Glucose concentration etc. To convert the extracted matrix into Shogun format, RealFeatures are used which are nothing but the above mentioned Dense features of 64bit Float type. To do this call RealFeatures with the matrix (this should be a 64bit 2D numpy array) as the argument.
End of explanation
#Get number of features(attributes of data) and num of vectors(samples)
feat_matrix=feats_train.get_feature_matrix()
num_f=feats_train.get_num_features()
num_s=feats_train.get_num_vectors()
print('Number of attributes: %s and number of samples: %s' %(num_f, num_s))
print('Number of rows of feature matrix: %s and number of columns: %s' %(feat_matrix.shape[0], feat_matrix.shape[1]))
print('First column of feature matrix (Data for first individual):')
print feats_train.get_feature_vector(0)
Explanation: Some of the general methods you might find useful are:
get_feature_matrix(): The feature matrix can be accessed using this.
get_num_features(): The total number of attributes can be accesed using this.
get_num_vectors(): To get total number of samples in data.
get_feature_vector(): To get all the attribute values (A.K.A feature vector) for a particular sample by passing the index of the sample as argument.</li></ul>
End of explanation
#convert to shogun format labels
labels=BinaryLabels(trainlab)
Explanation: Assigning labels
In supervised learning problems, training data is labelled. Shogun provides various types of labels to do this through Clabels. Some of these are:<ul><li>Binary labels: Binary Labels for binary classification which can have values +1 or -1.</li><li>Multiclass labels: Multiclass Labels for multi-class classification which can have values from 0 to (num. of classes-1).</li><li>Regression labels: Real-valued labels used for regression problems and are returned as output of classifiers.</li><li>Structured labels: Class of the labels used in Structured Output (SO) problems</li></ul></br> In this particular problem, our data can be of two types: diabetic or non-diabetic, so we need binary labels. This makes it a Binary Classification problem, where the data has to be classified in two groups.
End of explanation
n=labels.get_num_labels()
print 'Number of labels:', n
Explanation: The labels can be accessed using get_labels and the confidence vector using get_values. The total number of labels is available using get_num_labels.
End of explanation
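# A quick look at the accessors named above (first few entries only); get_values holds
# the confidence values, if any were set.
print 'first labels:', labels.get_labels()[:5]
print 'first values:', labels.get_values()[:5]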
preproc=PruneVarSubMean(True)
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor()
# Store preprocessed feature matrix.
preproc_data=feats_train.get_feature_matrix()
# Plot the raw training data.
figure(figsize=(13,6))
pl1=subplot(121)
gray()
_=scatter(feats[0, :], feats[1,:], c=labels, s=50)
vlines(0, -1, 1, linestyle='solid', linewidths=2)
hlines(0, -1, 1, linestyle='solid', linewidths=2)
title("Raw Training Data")
_=xlabel('Plasma glucose concentration')
_=ylabel('Body mass index')
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
pl1.legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2)
#Plot preprocessed data.
pl2=subplot(122)
_=scatter(preproc_data[0, :], preproc_data[1,:], c=labels, s=50)
vlines(0, -5, 5, linestyle='solid', linewidths=2)
hlines(0, -5, 5, linestyle='solid', linewidths=2)
title("Training data after preprocessing")
_=xlabel('Plasma glucose concentration')
_=ylabel('Body mass index')
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
pl2.legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2)
gray()
Explanation: Preprocessing data
It is usually better to preprocess data to a standard form rather than handling it in raw form. The reasons are having a well behaved-scaling, many algorithms assume centered data, and that sometimes one wants to de-noise data (with say PCA). Preprocessors do not change the domain of the input features. It is possible to do various type of preprocessing using methods provided by CPreprocessor class. Some of these are:<ul><li>Norm one: Normalize vector to have norm 1.</li><li>PruneVarSubMean: Substract the mean and remove features that have zero variance. </li><li>Dimension Reduction: Lower the dimensionality of given simple features.<ul><li>PCA: Principal component analysis.</li><li>Kernel PCA: PCA using kernel methods.</li></ul></li></ul> The training data will now be preprocessed using CPruneVarSubMean. This will basically remove data with zero variance and subtract the mean. Passing a True to the constructor makes the class normalise the varaince of the variables. It basically dividies every dimension through its standard-deviation. This is the reason behind removing dimensions with constant values. It is required to initialize the preprocessor by passing the feature object to init before doing anything else. The raw and processed data is now plotted.
End of explanation
#prameters to svm
C=0.9
svm=LibLinear(C, feats_train, labels)
svm.set_liblinear_solver_type(L2R_L2LOSS_SVC)
#train
svm.train()
size=100
Explanation: Horizontal and vertical lines passing through zero are included to make the processing of data clear. Note that the now processed data has zero mean.
<a id='supervised'>Supervised Learning with Shogun's <a href='http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMachine.html'>CMachine</a> interface</a>
CMachine is Shogun's interface for general learning machines. Basically one has to train() the machine on some training data to be able to learn from it. Then we apply() it to test data to get predictions. Some of these are: <ul><li>Kernel machine: Kernel based learning tools.</li><li>Linear machine: Interface for all kinds of linear machines like classifiers.</li><li>Distance machine: A distance machine is based on a a-priori choosen distance.</li><li>Gaussian process machine: A base class for Gaussian Processes. </li><li>And many more</li></ul>
Moving on to the prediction part, Liblinear, a linear SVM is used to do the classification (more on SVMs in this notebook). A linear SVM will find a linear separation with the largest possible margin. Here C is a penalty parameter on the loss function.
End of explanation
x1=linspace(-5.0, 5.0, size)
x2=linspace(-5.0, 5.0, size)
x, y=meshgrid(x1, x2)
#Generate X-Y grid test data
grid=RealFeatures(array((ravel(x), ravel(y))))
#apply on test grid
predictions = svm.apply(grid)
#get output labels
z=predictions.get_values().reshape((size, size))
#plot
jet()
figure(figsize=(9,6))
title("Classification")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black', hold=True)
_=colorbar(c)
_=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, cmap=gray(), s=50)
_=xlabel('Plasma glucose concentration')
_=ylabel('Body mass index')
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2)
gray()
Explanation: We will now apply on test features to get predictions. For visualising the classification boundary, the whole XY is used as test data, i.e. we predict the class on every point in the grid.
End of explanation
w=svm.get_w()
b=svm.get_bias()
x1=linspace(-2.0, 3.0, 100)
#solve for w.x+b=0
def solve (x1):
return -( ( (w[0])*x1 + b )/w[1] )
x2=map(solve, x1)
#plot
figure(figsize=(7,6))
plot(x1,x2, linewidth=2)
title("Decision boundary using w and bias")
_=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, cmap=gray(), s=50)
_=xlabel('Plasma glucose concentration')
_=ylabel('Body mass index')
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2)
print 'w :', w
print 'b :', b
Explanation: Let us have a look at the weight vector of the separating hyperplane. It should tell us about the linear relationship between the features. The decision boundary is now plotted by solving for $\bf{w}\cdot\bf{x}$ + $\text{b}=0$. Here $\text b$ is a bias term which allows the linear function to be offset from the origin of the used coordinate system. Methods get_w() and get_bias() are used to get the necessary values.
End of explanation
#split features for training and evaluation
num_train=700
feats=array(glucose_conc)
feats_t=feats[:num_train]
feats_e=feats[num_train:]
feats=array(BMI)
feats_t1=feats[:num_train]
feats_e1=feats[num_train:]
feats_t=vstack((feats_t, feats_t1))
feats_e=vstack((feats_e, feats_e1))
feats_train=RealFeatures(feats_t)
feats_evaluate=RealFeatures(feats_e)
Explanation: For this problem, a linear classifier does a reasonable job in distinguishing labelled data. An interpretation could be that individuals below a certain level of BMI and glucose are likely to have no Diabetes.
For problems where the data cannot be separated linearly, there are more advanced classification methods, as for example all of Shogun's kernel machines, but more on this later. To play with this interactively have a look at this: web demo
Evaluating performance and Model selection
How do you assess the quality of a prediction? Shogun provides various ways to do this using CEvaluation. The performance is evaluated by comparing the predicted output and the expected output. Some of the base classes for performance measures are:
Binary class evaluation: used to evaluate binary classification labels.
Clustering evaluation: used to evaluate clustering.
Mean absolute error: used to compute an error of regression model.
Multiclass accuracy: used to compute accuracy of multiclass classification.
Evaluating on training data should be avoided since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data. The dataset will now be split into two, we train on one part and evaluate performance on other using CAccuracyMeasure.
End of explanation
label_t=trainlab[:num_train]
labels=BinaryLabels(label_t)
label_e=trainlab[num_train:]
labels_true=BinaryLabels(label_e)
svm=LibLinear(C, feats_train, labels)
svm.set_liblinear_solver_type(L2R_L2LOSS_SVC)
#train and evaluate
svm.train()
output=svm.apply(feats_evaluate)
#use AccuracyMeasure to get accuracy
acc=AccuracyMeasure()
acc.evaluate(output,labels_true)
accuracy=acc.get_accuracy()*100
print 'Accuracy(%):', accuracy
Explanation: Let's see the accuracy by applying on test features.
End of explanation
temp_feats=RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
labels=RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
#rescale to 0...1
preproc=RescaleFeatures()
preproc.init(temp_feats)
temp_feats.add_preprocessor(preproc)
temp_feats.apply_preprocessor(True)
mat = temp_feats.get_feature_matrix()
dist_centres=mat[7]
lower_pop=mat[12]
feats=array(dist_centres)
feats=vstack((feats, array(lower_pop)))
print feats, feats.shape
#convert to shogun format features
feats_train=RealFeatures(feats)
Explanation: To evaluate more efficiently cross-validation is used. As you might have wondered how are the parameters of the classifier selected? Shogun has a model selection framework to select the best parameters. More description of these things in this notebook.
More predictions: Regression
This section will demonstrate another type of machine learning problem on real world data.</br> The task is to estimate prices of houses in Boston using the Boston Housing Dataset provided by StatLib library. The attributes are: Weighted distances to employment centres and percentage lower status of the population. Let us see if we can predict a good relationship between the pricing of houses and the attributes. This type of problems are solved using Regression analysis.
The data set is now loaded using LibSVMFile as in the previous sections and the attributes required (7th and 12th vector ) are converted to Shogun format features.
End of explanation
from mpl_toolkits.mplot3d import Axes3D
size=100
x1=linspace(0, 1.0, size)
x2=linspace(0, 1.0, size)
x, y=meshgrid(x1, x2)
#Generate X-Y grid test data
grid=RealFeatures(array((ravel(x), ravel(y))))
#Train on data(both attributes) and predict
width=1.0
tau=0.5
kernel=GaussianKernel(feats_train, feats_train, width)
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train)
kernel.init(feats_train, grid)
out = krr.apply().get_labels()
Explanation: The tool we will use here to perform regression is Kernel ridge regression. Kernel Ridge Regression is a non-parametric version of ridge regression where the kernel trick is used to solve a related linear ridge regression problem in a higher-dimensional space, whose results correspond to non-linear regression in the data-space. Again we train on the data and apply on the XY grid to get predictions.
End of explanation
#create feature objects for individual attributes.
feats_test=RealFeatures(x1.reshape(1,len(x1)))
feats_t0=array(dist_centres)
feats_train0=RealFeatures(feats_t0.reshape(1,len(feats_t0)))
feats_t1=array(lower_pop)
feats_train1=RealFeatures(feats_t1.reshape(1,len(feats_t1)))
#Regression with first attribute
kernel=GaussianKernel(feats_train0, feats_train0, width)
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train0)
kernel.init(feats_train0, feats_test)
out0 = krr.apply().get_labels()
#Regression with second attribute
kernel=GaussianKernel(feats_train1, feats_train1, width)
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train1)
kernel.init(feats_train1, feats_test)
out1 = krr.apply().get_labels()
#Visualization of regression
fig=figure(figsize=(20,6))
#first plot with only one attribute
fig.add_subplot(131)
title("Regression with 1st attribute")
_=scatter(feats[0, :], labels.get_labels(), cmap=gray(), s=20)
_=xlabel('Weighted distances to employment centres ')
_=ylabel('Median value of homes')
_=plot(x1,out0, linewidth=3)
#second plot with only one attribute
fig.add_subplot(132)
title("Regression with 2nd attribute")
_=scatter(feats[1, :], labels.get_labels(), cmap=gray(), s=20)
_=xlabel('% lower status of the population')
_=ylabel('Median value of homes')
_=plot(x1,out1, linewidth=3)
#Both attributes and regression output
ax=fig.add_subplot(133, projection='3d')
z=out.reshape((size, size))
gray()
title("Regression")
ax.plot_wireframe(y, x, z, linewidths=2, alpha=0.4)
ax.set_xlabel('% lower status of the population')
ax.set_ylabel('Distances to employment centres ')
ax.set_zlabel('Median value of homes')
ax.view_init(25, 40)
Explanation: The out variable now contains a relationship between the attributes. Below is an attempt to establish such a relationship between the attributes individually. Separate feature instances are created for each attribute. You could skip the code and have a look at the plots directly if you just want the essence.
End of explanation |
11,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Glass Brain
Step1: 1. Upload all statistical maps into the data folder
The data folder can be found in the same folder as this notebook. Just drag and drop your NIfTI file into the data folder and press the upload button.
2. Specify relevant visualization parameters
Step2: 3. Run the visualization script
Step3: 4. Look at your data | Python Code:
%matplotlib inline
Explanation: Visualize Glass Brain
End of explanation
stats_file = '../test_data/ALL_N95_Mean_cope2_thresh_zstat1.nii.gz'
view = 'ortho'
colormap = 'RdBu_r'
threshold = '2.3'
black_bg = False  # assumption: the original cell left this name unassigned; the value is not used by the script call below
Explanation: 1. Upload all statistical maps into the data folder
The data folder can be found in the same folder as this notebook. Just drag and drop your NIfTI file into the data folder and press the upload button.
2. Specify relevant visualization parameters
End of explanation
%run ../scripts/mni_glass_brain.py --cbar --display_mode $view --cmap $colormap --thr_abs $threshold $stats_file
Explanation: 3. Run the visualization script
End of explanation
from IPython.display import Image, display
from glob import glob as gg
outputs = gg('../test_data/*ortho.png')
for o in outputs:
a = Image(filename=o)
display(a)
from nilearn import plotting  # assumption: `plotting` is never imported in this notebook; nilearn is the likely source
plotting.plot_glass_brain??
Explanation: 4. Look at your data
End of explanation |
11,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Reinforcement Learning in Tensorflow Part 2
Step1: Loading the CartPole Environment
If you don't already have the OpenAI gym installed, use pip install gym to grab it.
Step2: What happens if we try running the environment with random actions? How well do we do? (Hint
Step3: The goal of the task is to achieve a reward of 200 per episode. For every step the agent keeps the pole in the air, the agent receives a +1 reward. By randomly choosing actions, our reward for each episode is only a couple dozen. Let's make that better with RL!
Setting up our Neural Network agent
This time we will be using a Policy neural network that takes observations, passes them through a single hidden layer, and then produces a probability of choosing a left/right movement. To learn more about this network, see Andrej Karpathy's blog on Policy Gradient networks.
Step5: Advantage function
This function allows us to weigh the rewards our agent receives. In the context of the Cart-Pole task, we want actions that kept the pole in the air a long time to have a large reward, and actions that contributed to the pole falling to have a decreased or negative reward. We do this by weighing the rewards from the end of the episode, with actions at the end being seen as negative, since they likely contributed to the pole falling, and the episode ending. Likewise, early actions are seen as more positive, since they weren't responsible for the pole falling.
Step6: Running the Agent and Environment
Here we run the neural network agent, and have it act in the CartPole environment. | Python Code:
from __future__ import division
import numpy as np
try:
import cPickle as pickle
except:
import pickle
import tensorflow as tf
%matplotlib inline
import matplotlib.pyplot as plt
import math
try:
xrange = xrange
except:
xrange = range
Explanation: Simple Reinforcement Learning in Tensorflow Part 2: Policy Gradient Method
This tutorial contains a simple example of how to build a policy-gradient based agent that can solve the CartPole problem. For more information, see this Medium post.
For more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, DeepRL-Agents.
Parts of this tutorial are based on code by Andrej Karpathy and korymath.
End of explanation
import gym
env = gym.make('CartPole-v0')
Explanation: Loading the CartPole Environment
If you don't already have the OpenAI gym installed, use pip install gym to grab it.
End of explanation
env.reset()
random_episodes = 0
reward_sum = 0
while random_episodes < 10:
env.render()
observation, reward, done, _ = env.step(np.random.randint(0,2))
reward_sum += reward
if done:
random_episodes += 1
print("Reward for this episode was:",reward_sum)
reward_sum = 0
env.reset()
Explanation: What happens if we try running the environment with random actions? How well do we do? (Hint: not so well.)
End of explanation
# hyperparameters
H = 10 # number of hidden layer neurons
batch_size = 5 # every how many episodes to do a param update?
learning_rate = 1e-2 # feel free to play with this to train faster or more stably.
gamma = 0.99 # discount factor for reward
D = 4 # input dimensionality
tf.reset_default_graph()
#This defines the network as it goes from taking an observation of the environment to
#giving a probability of choosing the action of moving left or right.
observations = tf.placeholder(tf.float32, [None,D] , name="input_x")
W1 = tf.get_variable("W1", shape=[D, H],
initializer=tf.contrib.layers.xavier_initializer())
layer1 = tf.nn.relu(tf.matmul(observations,W1))
W2 = tf.get_variable("W2", shape=[H, 1],
initializer=tf.contrib.layers.xavier_initializer())
score = tf.matmul(layer1,W2)
probability = tf.nn.sigmoid(score)
#From here we define the parts of the network needed for learning a good policy.
tvars = tf.trainable_variables()
input_y = tf.placeholder(tf.float32,[None,1], name="input_y")
advantages = tf.placeholder(tf.float32,name="reward_signal")
# The loss function. This sends the weights in the direction of making actions
# that gave good advantage (reward over time) more likely, and actions that didn't less likely.
loglik = tf.log(input_y*(input_y - probability) + (1 - input_y)*(input_y + probability))
loss = -tf.reduce_mean(loglik * advantages)
newGrads = tf.gradients(loss,tvars)
# Once we have collected a series of gradients from multiple episodes, we apply them.
# We don't just apply gradients after every episode in order to account for noise in the reward signal.
adam = tf.train.AdamOptimizer(learning_rate=learning_rate) # Our optimizer
W1Grad = tf.placeholder(tf.float32,name="batch_grad1") # Placeholders to send the final gradients through when we update.
W2Grad = tf.placeholder(tf.float32,name="batch_grad2")
batchGrad = [W1Grad,W2Grad]
updateGrads = adam.apply_gradients(zip(batchGrad,tvars))
Explanation: The goal of the task is to achieve a reward of 200 per episode. For every step the agent keeps the pole in the air, the agent receives a +1 reward. By randomly choosing actions, our reward for each episode is only a couple dozen. Let's make that better with RL!
Setting up our Neural Network agent
This time we will be using a Policy neural network that takes observations, passes them through a single hidden layer, and then produces a probability of choosing a left/right movement. To learn more about this network, see Andrej Karpathy's blog on Policy Gradient networks.
End of explanation
def discount_rewards(r):
"""take 1D float array of rewards and compute discounted reward"""
discounted_r = np.zeros_like(r)
running_add = 0
for t in reversed(xrange(0, r.size)):
running_add = running_add * gamma + r[t]
discounted_r[t] = running_add
return discounted_r
Explanation: Advantage function
This function allows us to weigh the rewards our agent receives. In the context of the Cart-Pole task, we want actions that kept the pole in the air a long time to have a large reward, and actions that contributed to the pole falling to have a decreased or negative reward. We do this by weighing the rewards from the end of the episode, with actions at the end being seen as negative, since they likely contributed to the pole falling, and the episode ending. Likewise, early actions are seen as more positive, since they weren't responsible for the pole falling.
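As a quick numeric illustration (ours, using the discount_rewards function defined above): with gamma = 0.99, a reward that only arrives at the final step is spread backwards over the earlier steps; the mean/std normalization in the training loop then makes early actions relatively positive and late ones relatively negative.
# discounting a reward that arrives only at the last of three steps
r = np.array([0.0, 0.0, 1.0])
print(discount_rewards(r))   # -> [ 0.9801  0.99    1.    ]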
End of explanation
xs,hs,dlogps,drs,ys,tfps = [],[],[],[],[],[]
running_reward = None
reward_sum = 0
episode_number = 1
total_episodes = 10000
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
rendering = False
sess.run(init)
observation = env.reset() # Obtain an initial observation of the environment
# Reset the gradient placeholder. We will collect gradients in
# gradBuffer until we are ready to update our policy network.
gradBuffer = sess.run(tvars)
for ix,grad in enumerate(gradBuffer):
gradBuffer[ix] = grad * 0
while episode_number <= total_episodes:
# Rendering the environment slows things down,
# so let's only look at it once our agent is doing a good job.
if reward_sum/batch_size > 100 or rendering == True :
env.render()
rendering = True
# Make sure the observation is in a shape the network can handle.
x = np.reshape(observation,[1,D])
# Run the policy network and get an action to take.
tfprob = sess.run(probability,feed_dict={observations: x})
action = 1 if np.random.uniform() < tfprob else 0
xs.append(x) # observation
y = 1 if action == 0 else 0 # a "fake label"
ys.append(y)
# step the environment and get new measurements
observation, reward, done, info = env.step(action)
reward_sum += reward
drs.append(reward) # record reward (has to be done after we call step() to get reward for previous action)
if done:
episode_number += 1
# stack together all inputs, hidden states, action gradients, and rewards for this episode
epx = np.vstack(xs)
epy = np.vstack(ys)
epr = np.vstack(drs)
tfp = tfps
xs,hs,dlogps,drs,ys,tfps = [],[],[],[],[],[] # reset array memory
# compute the discounted reward backwards through time
discounted_epr = discount_rewards(epr)
# size the rewards to be unit normal (helps control the gradient estimator variance)
discounted_epr -= np.mean(discounted_epr)
discounted_epr /= np.std(discounted_epr)
# Get the gradient for this episode, and save it in the gradBuffer
tGrad = sess.run(newGrads,feed_dict={observations: epx, input_y: epy, advantages: discounted_epr})
for ix,grad in enumerate(tGrad):
gradBuffer[ix] += grad
# If we have completed enough episodes, then update the policy network with our gradients.
if episode_number % batch_size == 0:
sess.run(updateGrads,feed_dict={W1Grad: gradBuffer[0],W2Grad:gradBuffer[1]})
for ix,grad in enumerate(gradBuffer):
gradBuffer[ix] = grad * 0
# Give a summary of how well our network is doing for each batch of episodes.
running_reward = reward_sum if running_reward is None else running_reward * 0.99 + reward_sum * 0.01
print('Average reward for episode %f. Total average reward %f.' % (reward_sum//batch_size, running_reward//batch_size))
if reward_sum//batch_size > 200:
print("Task solved in",episode_number,'episodes!')
break
reward_sum = 0
observation = env.reset()
print(episode_number,'Episodes completed.')
Explanation: Running the Agent and Environment
Here we run the neural network agent, and have it act in the CartPole environment.
End of explanation |
11,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Step1: Reformer
Step2: Setting up data and model
In this notebook, we'll be pushing the limits of just how many tokens we can fit on a single TPU device. The TPUs available in Colab have 8GB of memory per core, and 8 cores. We will set up a Reformer model that can fit a copy of "Crime and Punishment" on each of the 8 TPU cores (over 500,000 tokens per 8GB of memory).
Step4: As we see above, "Crime and Punishment" has just over half a million tokens with the BPE vocabulary we have selected.
Normally we would have a dataset with many examples, but for this demonstration we fit a language model on the single novel only. We don't want the model to just memorize the dataset by encoding the words in its position embeddings, so at each training iteration we will randomly select how much padding to put before the text vs. after it.
We have 8 TPU cores, so we will separately randomize the amount of padding for each core.
Step6: Sample from the model | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google LLC.
End of explanation
# Install JAX.
!pip install --upgrade jax
!pip install --upgrade jaxlib
!pip install --upgrade trax
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
!pip install --upgrade -q sentencepiece
!pip install --upgrade -q gin
from tensorflow.compat.v1.io.gfile import GFile
import gin
import os
import jax
import trax
from trax.data import inputs
import numpy as np
import jax.numpy as jnp
from scipy.special import softmax
from sentencepiece import SentencePieceProcessor
Explanation: Reformer: Text Generation
This notebook was designed to run on TPU.
To use TPUs in Colab, click "Runtime" on the main menu bar and select Change runtime type. Set "TPU" as the hardware accelerator.
End of explanation
# Import a copy of "Crime and Punishment", by Fyodor Dostoevsky
with GFile('gs://trax-ml/reformer/crime-and-punishment-2554.txt') as f:
text = f.read()
# The file read above includes metadata and licensing information.
# For training our language model, we will only use the actual novel text.
start = text.find('CRIME AND PUNISHMENT') # skip header
start = text.find('CRIME AND PUNISHMENT', start + 1) # skip header
start = text.find('CRIME AND PUNISHMENT', start + 1) # skip translator preface
end = text.rfind('End of Project') # skip extra text at the end
text = text[start:end].strip()
# Load a BPE vocabulary with 320 types. This mostly consists of single letters
# and pairs of letters, but it has some common words and word pieces, too.
!gsutil cp gs://trax-ml/reformer/cp.320.* .
TOKENIZER = SentencePieceProcessor()
TOKENIZER.load('cp.320.model')
# Tokenize
IDS = TOKENIZER.EncodeAsIds(text)
IDS = np.asarray(IDS, dtype=np.int32)
PAD_AMOUNT = 512 * 1024 - len(IDS)
print("Number of tokens:", IDS.shape[0])
Explanation: Setting up data and model
In this notebook, we'll be pushing the limits of just how many tokens we can fit on a single TPU device. The TPUs available in Colab have 8GB of memory per core, and 8 cores. We will set up a Reformer model that can fit a copy of "Crime and Punishment" on each of the 8 TPU cores (over 500,000 tokens per 8GB of memory).
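Two bits of arithmetic behind the configuration further below (our check, not part of the original notebook): the axial position embedding factors the 524,288-token context as 512 x 1024, and the two axial embedding widths must add up to d_model.
assert 512 * 1024 == 524288   # n_tokens equals the product of axial_pos_shape
assert 64 + 192 == 256        # d_axial_pos_embs must sum to d_model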
End of explanation
# Set up the data pipeline.
def my_inputs(n_devices):
while True:
inputs = []
mask = []
pad_amounts = np.random.choice(PAD_AMOUNT, n_devices)
for i in range(n_devices):
inputs.append(np.pad(IDS, (pad_amounts[i], PAD_AMOUNT - pad_amounts[i]),
mode='constant'))
mask.append(np.pad(np.ones_like(IDS, dtype=np.float32),
(pad_amounts[i], PAD_AMOUNT - pad_amounts[i]),
mode='constant'))
inputs = np.stack(inputs)
mask = np.stack(mask)
yield (inputs, inputs, mask)
print("(device count, tokens per device) = ",
next(my_inputs(trax.fastmath.device_count()))[0].shape)
# Configure hyperparameters.
gin.parse_config("""
import trax.layers
import trax.models
import trax.optimizers
import trax.data.inputs
import trax.supervised.trainer_lib
# Parameters that will vary between experiments:
# ==============================================================================
train.model = @trax.models.ReformerLM
# Our model will have 6 layers, alternating between the LSH attention proposed
# in the Reformer paper and local attention within a certain context window.
n_layers = 6
attn_type = [
@trax.layers.SelfAttention,
@LSHSelfAttention,
@trax.layers.SelfAttention,
@LSHSelfAttention,
@trax.layers.SelfAttention,
@LSHSelfAttention,
]
share_qk = False # LSH attention ignores this flag and always shares q & k
n_heads = 2
attn_kv = 64
dropout = 0.05
n_tokens = 524288
# Parameters for multifactor:
# ==============================================================================
multifactor.constant = 0.01
multifactor.factors = 'constant * linear_warmup * cosine_decay'
multifactor.warmup_steps = 100
multifactor.steps_per_cycle = 900
# Parameters for Adam:
# ==============================================================================
Adam.weight_decay_rate=0.0
Adam.b1 = 0.86
Adam.b2 = 0.92
Adam.eps = 1e-9
# Parameters for SelfAttention:
# ==============================================================================
trax.layers.SelfAttention.attention_dropout = 0.05
trax.layers.SelfAttention.chunk_len = 64
trax.layers.SelfAttention.n_chunks_before = 1
trax.layers.SelfAttention.n_parallel_heads = 1
# Parameters for LSHSelfAttention:
# ==============================================================================
LSHSelfAttention.attention_dropout = 0.0
LSHSelfAttention.chunk_len = 64
LSHSelfAttention.n_buckets = [64, 128]
LSHSelfAttention.n_chunks_after = 0
LSHSelfAttention.n_chunks_before = 1
LSHSelfAttention.n_hashes = 1
LSHSelfAttention.n_parallel_heads = 1
LSHSelfAttention.predict_drop_len = 128
LSHSelfAttention.predict_mem_len = 1024
# Parameters for ReformerLM:
# ==============================================================================
ReformerLM.attention_type = %attn_type
ReformerLM.d_attention_key = %attn_kv
ReformerLM.d_attention_value = %attn_kv
ReformerLM.d_model = 256
ReformerLM.d_ff = 512
ReformerLM.dropout = %dropout
ReformerLM.ff_activation = @trax.layers.Relu
ReformerLM.max_len = %n_tokens
ReformerLM.mode = 'train'
ReformerLM.n_heads = %n_heads
ReformerLM.n_layers = %n_layers
ReformerLM.vocab_size = 320
ReformerLM.axial_pos_shape = (512, 1024)
ReformerLM.d_axial_pos_embs= (64, 192)
""")
# Set up a Trainer.
output_dir = os.path.expanduser('~/train_dir/')
!rm -f ~/train_dir/model.pkl.gz # Remove old model
trainer = trax.supervised.Trainer(
model=trax.models.ReformerLM,
loss_fn=trax.layers.CrossEntropyLoss(),
optimizer=trax.optimizers.Adam,
lr_schedule=trax.lr.multifactor(),
inputs=trax.data.inputs.Inputs(my_inputs),
output_dir=output_dir)
# Run one training step, to make sure the model fits in memory.
# The first time trainer.train_epoch is called, it will JIT the entire network
# architecture, which takes around 2 minutes. The JIT-compiled model is saved
# so subsequent runs will be much faster than the first.
trainer.train_epoch(n_steps=1, n_eval_steps=1)
# Train for 600 steps total
# The first ~20 steps are slow to run, but after that it reaches steady-state
# speed. This will take at least 30 minutes to run to completion, but can safely
# be interrupted by selecting "Runtime > Interrupt Execution" from the menu.
# The language model won't be exceptionally good when trained for just a few
# steps and with minimal regularization. However, we can still sample from it to
# see what it learns.
trainer.train_epoch(n_steps=9, n_eval_steps=1)
for _ in range(59):
trainer.train_epoch(n_steps=10, n_eval_steps=1)
Explanation: As we see above, "Crime and Punishment" has just over half a million tokens with the BPE vocabulary we have selected.
Normally we would have a dataset with many examples, but for this demonstration we fit a language model on the single novel only. We don't want the model to just memorize the dataset by encoding the words in its position embeddings, so at each training iteration we will randomly select how much padding to put before the text vs. after it.
We have 8 TPU cores, so we will separately randomize the amount of padding for each core.
End of explanation
# As we report in the Reformer paper, increasing the number of hashing rounds
# helps with quality. We can even increase the number of hashing rounds at
# evaluation time only.
gin.parse_config("LSHSelfAttention.n_hashes = 4")
# Load the trained Reformer in 'predict' mode
model = trax.models.ReformerLM(mode='predict')
model.init_from_file(os.path.join(output_dir,'model.pkl.gz'),
weights_only=True)
# Sample from ReformerLM
output_token_ids = trax.supervised.decoding.autoregressive_sample(
model, temperature=0.0)
# Decode token IDs
# Reformer outputs a batch with one item; we access it using [0]
# tolist() converts from int64 to int, the type SentencePiece expects
TOKENIZER.DecodeIds(output_token_ids[0].tolist())
Explanation: Sample from the model
End of explanation |
11,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Block Codes
Block codes take serial source symbols and group them into k-symbol blocks. They then take n-k check symbols to make code
words of length n > k. The code is denoted (n,k). The following shows a general block diagram of block encoder.
The block encoder takes k source bits and encodes it into a length n codeword. A block decoder then works in reverse. The length n channel symbol codewords are decoded into the original length k source bits.
Single Error Correction Block Codes
Several block codes are able to correct only one error per block. Two common single error correction codes are cyclic codes and hamming codes. In scikit-dsp-comm there is a module called fec_block.py. This module contains two classes so far
Step1: After the cyclic code object cc1 is created, the cc1.cyclic_encoder method can be used to encode source data bits. In the following example, we generate 16 distinct source symbols to get 16 distinct channel symbol codewords using the cyclic_encoder method. The cyclic_encoder method takes an array of source bits as a paramter. The array of source bits must be a length of a multiple of $k$. Otherwise, the method will throw an error.
Step2: Now, a bit error is introduced into each of the codewords. Then, the codwords with the error are decoded using the cyclic_decoder method. The cyclic_decoder method takes an array of codewords of length $n$ as a parameter and returns an array of source bits. Even with 1 error introduced into each codeword, All of the original source bits are still decoded properly.
Step3: The following example generates many random source symbols. It then encodes the symbols using the cyclic encoder. It then simulates a channel by adding noise. It then implements hard decisions on each of the incoming bits and puts the received noisy bits into the cyclic decoder. Source bits are then returned and errors are counted until 100 bit errors are received. Once 100 bit errors are received, the bit error probability is calculated. This code can be run at a variety of SNRs and with various code rates.
Step4: There is a function in the fec_block module called block_single_error_Pb_bound that can be used to generate the theoretical bit error probability bounds for single error correction block codes. Measured bit error probabilities from the previous example were recorded to compare to the bounds.
Step5: These plots show that the simulated bit error probability is very close to the theoretical bit error probabilites.
Hamming Code
Hamming codes are another form of single error correction block codes. Hamming codes use parity-checks in order to generate and decode block codes. The code rates of Hamming codes are generated the same way as cyclic codes. In this case a parity-check length of length $j$ is chosen, and n and k are calculated by $n=2^j-1$ and $k=n-j$. Hamming codes are generated first by defining a parity-check matrix $H$. The parity-check matrix is a j x n matrix containing binary numbers from 1 to n as the columns. For a $j=3$ ($k=4$, $n=7$) Hamming code. The parity-check matrix starts out as the following
Step6: $k$ and $n$ are calculated form the number of parity checks $j$ and can be accessed by hh1.k and hh1.n. The $j$ x $n$ parity-check matrix $H$ and the $k$ x $n$ generator matrix $G$ can be accessed by hh1.H and hh1.G. These are exactly as described previously.
Step7: The fec_hamming class has an encoder method called hamm_encoder. This method works the same way as the cyclic encoder. It takes an array of source bits with a length that is a multiple of $k$ and returns an array of codewords. This class has another method called hamm_decoder which can decode an array of codewords. The array of codewords must have a length that is a multiple of $n$. The following example generates random source bits, encodes them using a hamming encoder, simulates transmitting them over a channel, uses hard decisions after the receiver to get a received array of codewords, and decodes the codewords using the hamming decoder. It runs until it counds 100 bit errors and then calculates the bit error probability. This can be used to simulate hamming codes with different rates (different numbers of parity checks) at different SNRs.
Step8: The fec_block.block_single_error_Pb_bound function can also be used to generate the bit error probability bounds for hamming codes. The following example generates theoretical bit error probability bounds for hamming codes and compares it with simulated bit error probabilities from the previous examples. | Python Code:
cc1 = block.FECCyclic('1011')
Explanation: Block Codes
Block codes take serial source symbols and group them into k-symbol blocks. They then take n-k check symbols to make code
words of length n > k. The code is denoted (n,k). The following shows a general block diagram of block encoder.
The block encoder takes k source bits and encodes it into a length n codeword. A block decoder then works in reverse. The length n channel symbol codewords are decoded into the original length k source bits.
Single Error Correction Block Codes
Several block codes are able to correct only one error per block. Two common single error correction codes are cyclic codes and hamming codes. In scikit-dsp-comm there is a module called fec_block.py. This module contains two classes so far: fec_cyclic for cyclic codes and fec_hamming for hamming codes. Each class has methods for encoding, decoding, and plotting theoretical bit error probability bounds.
Cyclic Codes
A (n,k) cyclic code can easily be generated with an n-k stage shift register with appropriate feedback according to Ziemer and Tranter pgs 646 and 647. The following shows a block diagram for a cyclic encoder.
This block diagram can be expanded to larger codes as well. A generator polynomial can be used to determine the position of the binary adders. The previous example uses a generator polynomial of '1011'. This means that there is a binary adder after the input, after second shift register, and after the third shift register.
The source symbol length and the channel symbol length can be determined from the number of shift registers $j$. The length of the generator polynomial is always $1+j$. In this case we have 3 shift registers, so $j=3$. We have $k=4$ source bits and $n=7$ channel bits. For other shift register lengths, we can use the following equations: $n=2^j-1$ and $k = n-j$ (a quick numerical check of these formulas appears just after the table). The following table (from Ziemer and Peterson pg 429) shows the source symbol length, channel symbol length, and the code rate for various shift register lengths for single error correction codes.
| j | k | n | R=k/n |
|---|-----|-----|-------|
|3 |4 |7 |0.57 |
|4 |11 |15 |0.73 |
|5 |26 |31 |0.84 |
|6 |57 |63 |0.90 |
|7 |120 |127 |0.94 |
|8 |247 |255 |0.97 |
|9 |502 |511 |0.98 |
|10 |1013 |1023 |0.99 |
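A quick check (ours, not part of the fec_block module) reproduces the table entries from $j$ alone:
# verify the table: n = 2**j - 1, k = n - j, R = k/n
for j in range(3, 11):
    n = 2**j - 1
    k = n - j
    print(j, k, n, round(k / float(n), 2))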
The following block diagram shows a block decoder (from Ziemer and Tranter page 647). The block decoder takes in a codeword of channel symbol length n and decodes it to the original source bits of length k.
The fec_cyclic class can be used to generate a cyclic code object. The cyclic code object can be initialized by a generator polynomial. The length of the generator determines the source symbol length, the channel symbol length, and the rate. The following shows the generator polynomial '1011' considered in the two example block diagrams.
End of explanation
# Generate 16 distinct codewords
codewords = zeros((16,7),dtype=int)
x = zeros((16,4))
for i in range(0,16):
xbin = block.binary(i,4)
xbin = array(list(xbin)).astype(int)
x[i,:] = xbin
x = reshape(x,size(x)).astype(int)
codewords = cc1.cyclic_encoder(x)
print(reshape(codewords,(16,7)))
Explanation: After the cyclic code object cc1 is created, the cc1.cyclic_encoder method can be used to encode source data bits. In the following example, we generate 16 distinct source symbols to get 16 distinct channel symbol codewords using the cyclic_encoder method. The cyclic_encoder method takes an array of source bits as a parameter. The array of source bits must have a length that is a multiple of $k$; otherwise, the method will throw an error.
End of explanation
# introduce 1 bit error into each code word and decode
codewords = reshape(codewords,(16,7))
for i in range(16):
error_pos = i % 6
codewords[i,error_pos] = (codewords[i,error_pos] +1) % 2
codewords = reshape(codewords,size(codewords))
decoded_blocks = cc1.cyclic_decoder(codewords)
print(reshape(decoded_blocks,(16,4)))
Explanation: Now, a bit error is introduced into each of the codewords. Then, the codewords with the errors are decoded using the cyclic_decoder method. The cyclic_decoder method takes an array of codewords of length $n$ as a parameter and returns an array of source bits. Even with 1 error introduced into each codeword, all of the original source bits are still decoded properly.
End of explanation
cc1 = block.FECCyclic('101001')
N_blocks_per_frame = 2000
N_bits_per_frame = N_blocks_per_frame*cc1.k
EbN0 = 6
total_bit_errors = 0
total_bit_count = 0
while total_bit_errors < 100:
# Create random 0/1 bits
x = randint(0,2,N_bits_per_frame)
y = cc1.cyclic_encoder(x)
# Add channel noise to bits and scale to +/- 1
yn = dc.cpx_awgn(2*y-1,EbN0-10*log10(cc1.n/cc1.k),1) # Channel SNR is dB less
# Scale back to 0 and 1
yn = ((sign(yn.real)+1)/2).astype(int)
z = cc1.cyclic_decoder(yn)
# Count bit errors
bit_count, bit_errors = dc.bit_errors(x,z)
total_bit_errors += bit_errors
total_bit_count += bit_count
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
Explanation: The following example generates many random source symbols. It then encodes the symbols using the cyclic encoder. It then simulates a channel by adding noise. It then implements hard decisions on each of the incoming bits and puts the received noisy bits into the cyclic decoder. Source bits are then returned and errors are counted until 100 bit errors are received. Once 100 bit errors are received, the bit error probability is calculated. This code can be run at a variety of SNRs and with various code rates.
End of explanation
SNRdB = arange(0,12,.1)
#SNRdB = arange(9.4,9.6,0.1)
Pb_uc = block.block_single_error_Pb_bound(3,SNRdB,False)
Pb_c_3 = block.block_single_error_Pb_bound(3,SNRdB)
Pb_c_4 = block.block_single_error_Pb_bound(4,SNRdB)
Pb_c_5 = block.block_single_error_Pb_bound(5,SNRdB)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc,'k-')
semilogy(SNRdB,Pb_c_3,'c--')
semilogy(SNRdB,Pb_c_4,'m--')
semilogy(SNRdB,Pb_c_5,'g--')
semilogy([4,5,6,7,8,9],[1.44e-2,5.45e-3,2.37e-3,6.63e-4,1.33e-4,1.31e-5],'cs')
semilogy([5,6,7,8],[4.86e-3,1.16e-3,2.32e-4,2.73e-5],'ms')
semilogy([5,6,7,8],[4.31e-3,9.42e-4,1.38e-4,1.15e-5],'gs')
axis([0,12,1e-10,1e0])
title('Cyclic code BEP')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Bit Error Probability')
legend(('Uncoded BPSK','(7,4), hard',\
'(15,11), hard', '(31,26), hard',\
'(7,4) sim', '(15,11) sim', \
'(31,26) sim'),loc='lower left')
grid();
Explanation: There is a function in the fec_block module called block_single_error_Pb_bound that can be used to generate the theoretical bit error probability bounds for single error correction block codes. Measured bit error probabilities from the previous example were recorded to compare to the bounds.
End of explanation
hh1 = block.FECHamming(3)
Explanation: These plots show that the simulated bit error probability is very close to the theoretical bit error probabilities.
Hamming Code
Hamming codes are another form of single error correction block codes. Hamming codes use parity-checks in order to generate and decode block codes. The code rates of Hamming codes are generated the same way as cyclic codes. In this case a parity-check length $j$ is chosen, and n and k are calculated by $n=2^j-1$ and $k=n-j$. Hamming codes are generated first by defining a parity-check matrix $H$. The parity-check matrix is a j x n matrix containing the binary representations of the numbers 1 to n as its columns. For a $j=3$ ($k=4$, $n=7$) Hamming code, the parity-check matrix starts out as the following:
\begin{equation}
\mathbf{H} = \left[\begin{array}
{rrr}
0 & 0 & 0 & 1 & 1 & 1 & 1\
0 & 1 & 1 & 0 & 0 & 1 & 1\
1 & 0 & 1 & 0 & 1 & 0 & 1
\end{array}\right]
\end{equation}
The parity-check matrix can be reordered to provide a systematic code by interchanging columns to create an identity matrix on the right side of the matrix. In this case, this is done by interchanging columns 1 and 7, columns 2 and 6, and columns 4 and 5. The resulting parity-check matrix is the following.
\begin{equation}
\mathbf{H} = \left[\begin{array}
{rrr}
1 & 1 & 0 & 1 & 1 & 0 & 0\
1 & 1 & 1 & 0 & 0 & 1 & 0\
1 & 0 & 1 & 1 & 0 & 0 & 1
\end{array}\right]
\end{equation}
Next, a generator matrix $G$ is created by restructuring the parity-check matrix. The $G$ matrix is gathered from the $H$ matrix through the following relationship.
\begin{equation}
\mathbf{G} = \left[\begin{array}
{rrr}
I_k & ... & H_p
\end{array}\right]
\end{equation}
where $H_p$ is defined as the transpose of the first k columns of H. For this example we arrive at the following $G$ matrix. G always ends up being a k x n matrix.
\begin{equation}
\mathbf{G} = \left[\begin{array}
{rrr}
1 & 0 & 0 & 0 & 1 & 1 & 1\
0 & 1 & 0 & 0 & 1 & 1 & 0\
0 & 0 & 1 & 0 & 0 & 1 & 1\
0 & 0 & 0 & 1 & 1 & 0 & 1
\end{array}\right]
\end{equation}
Codewords can be generated by multiplying a source symbol matrix by the generator matrix.
\begin{equation}
codeword = xG
\end{equation}
Where the codeword is a vector of length $n$ and x is a row vector of length $k$. This is the basic operation of the encoder. The decoder is slightly more complicated. The decoder starts by taking the parity-check matrix $H$ and multiplying it by the codeword column vector. This gives the "syndrome" of the block. The syndrome tells us whether or not there is an error in the codeword. If no errors are present, the syndrome will be 0. If there is an error in the codeword, the syndrome will tell us which bit has the error.
\begin{equation}
S = H \cdot codeword
\end{equation}
If the syndrome is nonzero, then it can be used to correct the error bit in the codeword. After that, the original source blocks can be decoded from the codewords by the following equation.
\begin{equation}
source = R\cdot codeword
\end{equation}
Where $R$ is a k x n matrix where R is made up of a k x k identity matrix and a k x n-k matrix of zeros. Again, the Hamming code is only capable of correcting one error per block, so if more than one error is present in the block, then the syndrome cannot be used to correct the error.
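To make the encoding and syndrome equations concrete, here is a small standalone numpy check using the (7,4) matrices above (illustrative only; it does not use the fec_hamming class):
import numpy as np
H = np.array([[1,1,0,1,1,0,0],
              [1,1,1,0,0,1,0],
              [1,0,1,1,0,0,1]])
G = np.array([[1,0,0,0,1,1,1],
              [0,1,0,0,1,1,0],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,0,1]])
x = np.array([1,0,1,1])
codeword = x.dot(G) % 2        # encode: [1 0 1 1 0 0 1]
print(H.dot(codeword) % 2)     # syndrome [0 0 0] -> no error
codeword[1] ^= 1               # flip bit 2 to introduce a single error
print(H.dot(codeword) % 2)     # syndrome [1 1 0] matches column 2 of H, locating the error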
The hamming code class can be found in the fec_block module as fec_hamming. Hamming codes are sometimes generated using generator polynomials just like with cyclic codes. This is not completely necessary, however, if the previously described process is used. This process simply relies on choosing a number of parity bits and then systematic single-error correction hamming codes are automatically generated. The following will go through an example of a $j=3$ ($k=4$, $n=7$) hamming code.
Hamming Block Code Class Definition:
End of explanation
print('k = ' + str(hh1.k))
print('n = ' + str(hh1.n))
print('H = \n' + str(hh1.H))
print('G = \n' + str(hh1.G))
Explanation: $k$ and $n$ are calculated from the number of parity checks $j$ and can be accessed by hh1.k and hh1.n. The $j$ x $n$ parity-check matrix $H$ and the $k$ x $n$ generator matrix $G$ can be accessed by hh1.H and hh1.G. These are exactly as described previously.
End of explanation
hh1 = block.FECHamming(5)
N_blocks_per_frame = 20000
N_bits_per_frame = N_blocks_per_frame*hh1.k
EbN0 = 8
total_bit_errors = 0
total_bit_count = 0
while total_bit_errors < 100:
# Create random 0/1 bits
x = randint(0,2,N_bits_per_frame)
y = hh1.hamm_encoder(x)
# Add channel noise to bits and scale to +/- 1
yn = dc.cpx_awgn(2*y-1,EbN0-10*log10(hh1.n/hh1.k),1) # Channel SNR is dB less
# Scale back to 0 and 1
yn = ((sign(yn.real)+1)/2).astype(int)
z = hh1.hamm_decoder(yn)
# Count bit errors
bit_count, bit_errors = dc.bit_errors(x,z)
total_bit_errors += bit_errors
total_bit_count += bit_count
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
Explanation: The fec_hamming class has an encoder method called hamm_encoder. This method works the same way as the cyclic encoder. It takes an array of source bits with a length that is a multiple of $k$ and returns an array of codewords. This class has another method called hamm_decoder which can decode an array of codewords. The array of codewords must have a length that is a multiple of $n$. The following example generates random source bits, encodes them using a hamming encoder, simulates transmitting them over a channel, uses hard decisions after the receiver to get a received array of codewords, and decodes the codewords using the hamming decoder. It runs until it counts 100 bit errors and then calculates the bit error probability. This can be used to simulate hamming codes with different rates (different numbers of parity checks) at different SNRs.
End of explanation
SNRdB = arange(0,12,.1)
Pb_uc = block.block_single_error_Pb_bound(3,SNRdB,False)
Pb_c_3 = block.block_single_error_Pb_bound(3,SNRdB)
Pb_c_4 = block.block_single_error_Pb_bound(4,SNRdB)
Pb_c_5 = block.block_single_error_Pb_bound(5,SNRdB)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc,'k-')
semilogy(SNRdB,Pb_c_3,'c--')
semilogy(SNRdB,Pb_c_4,'m--')
semilogy(SNRdB,Pb_c_5,'g--')
semilogy([5,6,7,8,9,10],[6.64e-3,2.32e-3,5.25e-4,1.16e-4,1.46e-5,1.19e-6],'cs')
semilogy([5,6,7,8,9],[4.68e-3,1.19e-3,2.48e-4,3.6e-5,1.76e-6],'ms')
semilogy([5,6,7,8,9],[4.42e-3,1.11e-3,1.41e-4,1.43e-5,6.73e-7],'gs')
axis([0,12,1e-10,1e0])
title('Hamming code BEP')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Bit Error Probability')
legend(('Uncoded BPSK','(7,4), hard',\
'(15,11), hard', '(31,26), hard',\
'(7,4) sim', '(15,11) sim', \
'(31,26) sim'),loc='lower left')
grid();
Explanation: The fec_block.block_single_error_Pb_bound function can also be used to generate the bit error probability bounds for hamming codes. The following example generates theoretical bit error probability bounds for hamming codes and compares it with simulated bit error probabilities from the previous examples.
End of explanation |
11,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing in Context sub history
Lecture one
Number munging
This is iPython.
It is swell.
It is Python in a browser.
Pure CS types not love.
We hackish types adore!
Download anaconda (esp if on windows)
Step1: Our first data format
Rk,G,Date,Age,Tm,,Opp,,GS,MP,FG,FGA,FG%,3P,3PA,3P%,FT,FTA,FT%,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc,+/-
1,1,2013-10-29,28-303,MIA,,CHI,W (+12),1,38
Step2: Comma-separated value (CSVs) (files)
LeBron James' first five games of the 2013-2014 NBA season
Step3: You can compose indexes! this is the 0th item of the 74th list.
BUT I'm not going to torture you with this lower level analysis (for now)
Pandas first-line python tool for Exploratory Data Analysis
rich data structures
powerful ways to slice, dice, reformat, fix, and eliminate data
taste of what can do
rich queries like databases
dataframes
The library Pandas provides us with a powerful overlay that lets us use matrices but always keep their row and column names
Step4: Now we read a big csv file using a function from pandas called pd.read_csv()
Step5: Note at the bottom that the display tells us how many rows and columns we're dealing with.
As a general rule, pandas dataframe objects default to slicing by column using a syntax you'll know from dicts as in df["course_id"].
Step6: Instead of (column, row) we use name_of_dataframe[column name][row #]
Step7: Why? A good question. Now try passing a list of just one row
Step8: We can pick out columns using their names and with a slice of rows.
Step9: In inputing CSV, Pandas parses each column and attempts to discern what sort of data is within. It's good but not infallible.
- Pandas is particularly good with dates
Step10: note that we pass a list of columns to pick out multiple columns
Step11: Now we can count how many times someone started
Step12: What are | Python Code:
#This is a comment
#This is all blackboxed for now--DON'T worry about it
# Render our plots inline
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
pd.set_option('display.mpl_style', 'default') # Make the graphs a bit prettier
plt.rcParams['figure.figsize'] = (15, 5)
Explanation: Computing in Context sub history
Lecture one
Number munging
This is iPython.
It is swell.
It is Python in a browser.
Pure CS types not love.
We hackish types adore!
Download anaconda (esp if on windows)
End of explanation
#looks much nicer on a wide screen!
Explanation: Our first data format
Rk,G,Date,Age,Tm,,Opp,,GS,MP,FG,FGA,FG%,3P,3PA,3P%,FT,FTA,FT%,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc,+/-
1,1,2013-10-29,28-303,MIA,,CHI,W (+12),1,38:01,5,11,.455,0,1,.000,7,9,.778,0,6,6,8,1,0,2,0,17,16.9,+8
2,2,2013-10-30,28-304,MIA,@,PHI,L (-4),1,36:38,9,17,.529,4,7,.571,3,4,.750,0,4,4,13,0,0,4,3,25,21.4,-8
3,3,2013-11-01,28-306,MIA,@,BRK,L (-1),1,42:14,11,19,.579,1,2,.500,3,5,.600,1,6,7,6,2,1,5,2,26,19.9,-3
4,4,2013-11-03,28-308,MIA,,WAS,W (+10),1,34:41,9,14,.643,3,5,.600,4,5,.800,0,3,3,5,1,0,6,2,25,17.0,+16
5,5,2013-11-05,28-310,MIA,@,TOR,W (+9),1,36:01,13,20,.650,1,3,.333,8,8,1.000,2,6,8,8,0,1,1,2,35,33.9,+3
End of explanation
import csv
import urllib
url = "https://gist.githubusercontent.com/aparrish/cb1672e98057ea2ab7a1/raw/13166792e0e8436221ef85d2a655f1965c400f75/lebron_james.csv"
stats = list(csv.reader(urllib.urlopen(url)))
#example courtesy the great Allison Parrish!
#What different things do urllib.urlopen(url) then csv.reader() and then list() do?
stats[0]
len(stats)
stats[74][0]
Explanation: Comma-separated value (CSVs) (files)
LeBron James' first five games of the 2013-2014 NBA season
End of explanation
import pandas as pd #we've already done this but just to remind you you'll need to
#Let's start with yet another way to read csv files, this time from `pandas`
import os
directory=("/Users/mljones/repositories/comp_in_context_trial/")
os.chdir(directory)
Explanation: You can compose indexes! this is the 0th item of the 74th list.
BUT I'm not going to torture you with this lower level analysis (for now)
Pandas first-line python tool for Exploratory Data Analysis
rich data structures
powerful ways to slice, dice, reformat, fix, and eliminate data
taste of what can do
rich queries like databases
dataframes
The library Pandas provides us with a powerful overlay that lets us use matrices but always keep their row and column names: a spreadsheet on speed. It allows us to work directly with the datatype "Dataframes" that keeps track of values and their names for us. And it allows us to perform many operations on slices of the dataframe without having to run for loops and the like. This is more convenient and involves faster processing.
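A tiny made-up example (toy data, not the course dataset) shows the idea of labeled rows and columns:
toy = pd.DataFrame({'points': [17, 25, 26], 'assists': [8, 13, 6]},
                   index=['game1', 'game2', 'game3'])
toy['points']        # select a column by name
toy.loc['game2']     # select a row by label (the older pandas used below also offers .ix)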
End of explanation
df=pd.read_csv('HMXPC_13.csv', sep=",")
df
Explanation: Now we read a big csv file using a function from pandas called pd.read_csv()
End of explanation
df["course_id"]
df["course_id"][3340:3350] #pick out a list of values from ONE column
Explanation: Note at the bottom that the display tells us how many rows and columns we're dealing with.
As a general rule, pandas dataframe objects default to slicing by column using a syntax you'll know from dicts as in df["course_id"].
End of explanation
df[3340:3350] # SLICE a list of ROWS
#This was _not_ in class PREPARE FOR TERRIBLE ERROR!
#THIS DOESN'T WORK
df[3340]
#That's icky.
#to pick out one row use `.ix`
df.ix[3340]
Explanation: Instead of (column, row) we use name_of_dataframe[column name][row #]
End of explanation
df.ix[[3340]]
Explanation: Why? A good question. Now try passing a list of just one row:
End of explanation
df['final_cc_cname_DI'][100:110]
df.dtypes
Explanation: We can pick out columns using their names and with a slice of rows.
End of explanation
df=pd.read_csv('HMXPC_13.csv', sep="," , parse_dates=['start_time_DI', 'last_event_DI'])
Explanation: When inputting a CSV, Pandas parses each column and attempts to discern what sort of data is within. It's good but not infallible.
- Pandas is particularly good with dates: you simply tell it which columns to parse as dates.
Let's refine our reading of the CSV to parse the dates.
End of explanation
df["start_time_DI"]
Explanation: note that we pass a list of columns to pick out multiple columns
End of explanation
startdates=df['start_time_DI'].value_counts()
# Exercise to the reader: how might you do this without using the `.value_counts()` method?
startdates
startdates.plot()
startdates.plot(title="I can't it's not butter.")
Explanation: Now we can count how many times someone started
End of explanation
startdates.plot(kind="bar")
#Ok, let's consider how many times different people played a video
df["nplay_video"].dropna().plot()
Explanation: What are
End of explanation |
11,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook a Q learner with dyna and a custom predictor will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value).
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
First a non-learning test
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
Step4: What are the metrics for "holding the position"? | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
import pickle
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent_predictor import AgentPredictor
from functools import partial
from sklearn.externals import joblib
NUM_THREADS = 1
LOOKBACK = -1
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
DYNA = 20
BASE_DAYS = 112
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
estimator_close = joblib.load('../../data/best_predictor.pkl')
estimator_volume = joblib.load('../../data/best_volume_predictor.pkl')
agents = [AgentPredictor(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=DYNA,
name='Agent_{}'.format(i),
estimator_close=estimator_close,
estimator_volume=estimator_volume,
env=env,
prediction_window=BASE_DAYS) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
Explanation: In this notebook a Q learner with dyna and a custom predictor will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value).
End of explanation
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
import pickle
with open('../../data/dyna_q_with_predictor.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
Explanation: Let's show the symbols data, to see how good the recommender has to be.
End of explanation
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: Let's run the trained agent, with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).
End of explanation
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
Explanation: What are the metrics for "holding the position"?
End of explanation |
11,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of performing Vector mathematical functions using Python List structures
Vector methods to be created
Step1: Other vector operations that could be done
Step2: List Comprehensions are Powerful tools in Python
Expect to see them throughout code one has to maintain but also understand they are not always the optimal solution
When an iteration is needed to build a composite value, list comprehensions are considered the most readable or understandable way to achieve this. Loops may be used instead if one wants the "side effect" of an iteration, while functional tools may be used if optimization and code speed are important.
For instance, the above examples could also have been performed with an anonymous lambda or reduce, like | Python Code:
class vector_math:
'''
This is the base class for vector math - which allows for initialization with two vectors.
'''
def __init__(self, vectors = [[1,2,2],[3,4,3]]):
self.vect1 = vectors[0]
self.vect2 = vectors[1]
def set_vects(self, vectors):
self.vect1 = vectors[0]
self.vect2 = vectors[1]
def sum_vects(self):
return [x + y for x, y in zip(self.vect1, self.vect2)]
def sub_vects(self):
# default should be [-2,-2,-1]
return [x - y for x, y in zip(self.vect1, self.vect2)]
# Can expand out to for x, y in zip: ... to show what it and sum do
def multi_vects(self):
#default should be [3,8,6]
return [x * y for x, y in zip(self.vect1, self.vect2)]
def multi_scalar(self, scalar, vect):
return [e * scalar for e in vect]
# Show difference between just element * number and using tuple from zip()
def multi_scalar_l(self, scalar, vect):
return lambda e: e * scalar, vect
def mean_vects(self):
mean_vect = self.sum_vects()
return self.multi_scalar(1/len(mean_vect), mean_vect)
def dot_product(self):
return sum(self.multi_vects())
vect = vector_math()
sum_vect = vect.sum_vects()
print("Sum of vectors = {}".format(sum_vect))
print("Subtraction of vectors = {}".format(vect.sub_vects()))
print("Product of vectors = {}".format(vect.multi_vects()))
print("Product of Sum of vectors and 2 = {}\n".format(vect.multi_scalar(2, sum_vect)))
# Yep can still use character returns and others in format
print("Average of vectors = {}".format(["{:.2f}".format(e) for e in vect.mean_vects()]))
# Now there are other ways to reduce the decimal places but this was just to show a nested format call
# TODO: Consider adding timeit to show difference between calling multi_scalar directly and calling mean_vect:
#print("Average of vectors through calling scalar = {}".format(
# ["{:.2f}".format(e) for e in vect.multi_scalar(1/len(sum_vect), sum_vect)]))
print("The Dot Product is {}".format(vect.dot_product()))
Explanation: Example of performing Vector mathematical functions using Python List structures
Vector methods to be created:
* Sum vectors
* Add vector elements of same sized vectors
* Return resulting vector
* Subtract vectors
* Subtract vector elements of same sized vectors
* Return resulting vector
* Product of vectors
* Product of components of vectors
* Return resulting vector
* Product of vector and scalar
* Return scalar product of each element of vector
* Mean of vectors
* Sum Vector method / number of elements for each element (or 1/len scalar multiply)
* Dot Product
* Sum of component wise products
* Multiply vectors
* Sum vectors
* Return resulting vector
Teaching notes delete when finished
Remember to explain that in the real world numpy and other libraries would be used to do this
For teaching list methods
Particularly allows for a number of list comprehensions to be explained
Basic Class definition and issues
Start with just calling a definition directly (which will Error with a not found)
Show how adding self.function_name() works and explain
Move into using decorators
Start with a vector with a small number of elements
So students can do calculations in their heads and follow along
End of explanation
from math import sqrt
# Using the vect variables showing without functions
sum_of_squares = sum([x * y for x, y in zip(vect.vect1, vect.vect1)])
magnitude = sqrt(sum_of_squares)
distance = sqrt(sum([(x - y) ** 2 for x, y in zip(vect.vect1, vect.vect2)]))
print("Sum of Squares is {}".format(sum_of_squares))
print("Magnitude is {:.2f}".format(magnitude))
print("Distance is {}".format(distance))
Explanation: Other vector operations that could be done
End of explanation
import dis
import time
# For instruction - shows disassemble of methods and performs quick time check
vect = [2,3,3,3,4,5,6,6,4,3,2,1,3,4,5,6,4,3,2,1,3,4,5,6,4,3,2]
t1 = time.time()
print("list comp")
dis.dis(compile("[e * 2 for e in vect]", '<stdin>', 'exec'))
d_l = time.time() - t1
print(d_l)
t2 = time.time()
print("\n\n\nlambda")
dis.dis(compile("lambda e: e * 2, vect", '<stdin>', 'exec'))
d_lam = time.time() - t2
print(d_lam)
Explanation: List Comprehensions are Powerful tools in Python
Expect to see them throughout code one has to maintain but also understand they are not always the optimal solution
When an iteration is needed to build a composite value, list comprehensions are considered the most readable or understandable way to achieve this. Loops may be used instead if one wants the "side effect" of an iteration, while functional tools may be used if optimization and code speed are important.
For instance, the above examples could also have been performed with an anonymous lambda or reduce, like:
def multi_scalar(self, vect, scalar):
return lambda e: e * scalar, vect
In this case, the lambda would be faster by a minimal amount and involve one less function call - function calls are expensive in Python. This is not always true, as the need for an increasing number of functional methods can change both the speed and the number of function calls required. A code example is below.
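Note that, as written, the expression lambda e: e * scalar, vect only builds a (function, sequence) pair without applying the function. A minimal runnable sketch of the functional alternatives (assuming element-wise scaling, and an aggregate such as a sum, is what is intended) would be:
from functools import reduce
scaled = list(map(lambda e: e * 2, vect))              # same result as [e * 2 for e in vect]
total = reduce(lambda acc, e: acc + e * 2, vect, 0)    # aggregate form: sum of the scaled values
print(scaled, total)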
End of explanation |
11,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TUTORIAL 13 - Elliptic Optimal Control
Keywords
Step1: 3. Affine Decomposition
For this problem the affine decomposition is straightforward.
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh_2.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the EllipticOptimalControl class
Step5: 4.4. Prepare reduction with a POD-Galerkin method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | Python Code:
from dolfin import *
from rbnics import *
Explanation: TUTORIAL 13 - Elliptic Optimal Control
Keywords: optimal control, inf-sup condition, POD-Galerkin
1. Introduction
This tutorial addresses a distributed optimal control problem for the Graetz conduction-convection equation on the domain $\Omega$ shown below:
<img src="data/mesh2.png" width="60%"/>
The problem is characterized by 3 parameters. The first parameter $\mu_0$ represents the Péclet number, which describes the heat transfer between the two domains. The second and third parameters, $\mu_1$ and $\mu_2$, control the parameter dependent observation function $y_d(\boldsymbol{\mu})$ such that:
$$ y_d(\boldsymbol{\mu})=
\begin{cases}
\mu_1 \quad \text{in} \; \hat{\Omega}_1 \\
\mu_2 \quad \text{in} \; \hat{\Omega}_2
\end{cases}
$$
The ranges of the three parameters are the following: $$\mu_0 \in [3,20], \mu_1 \in [0.5,1.5], \mu_2 \in [1.5,2.5]$$
The parameter vector $\boldsymbol{\mu}$ is thus given by $$\boldsymbol{\mu}=(\mu_0,\mu_1,\mu_2)$$ on the parameter domain $$\mathbb{P}=[3,20] \times [0.5,1.5] \times [1.5,2.5].$$
In order to obtain a faster approximation of the optimal control problem, we pursue an optimize-then-discretize approach using the POD-Galerkin method.
2. Parametrized Formulation
Let $y(\boldsymbol{\mu})$, the state function, be the temperature field in the domain $\Omega$ and $u(\boldsymbol{\mu})$, the control function, act as a heat source. The observation domain $\hat{\Omega}$ is defined as: $\hat{\Omega}=\hat{\Omega}_1 \cup \hat{\Omega}_2$.
Consider the following optimal control problem:
$$
\underset{y,u}{min} \; J(y,u;\boldsymbol{\mu}) = \frac{1}{2} \left\lVert y(\boldsymbol{\mu})-y_d(\boldsymbol{\mu})\right\rVert ^2_{L^2(\hat{\Omega})}, \\
\text{s.t.}
\begin{cases}
-\frac{1}{\mu_0}\Delta y(\boldsymbol{\mu}) + x_2(1-x_2)\frac{\partial y(\boldsymbol{\mu})}{\partial x_1} = u(\boldsymbol{\mu}) \quad \text{in} \; \Omega, \\
\frac{1}{\mu_0} \nabla y(\boldsymbol{\mu}) \cdot \boldsymbol{n} = 0 \qquad \qquad \qquad \quad \enspace \; \text{on} \; \Gamma_N, \\
y(\boldsymbol{\mu})=1 \qquad \qquad \qquad \qquad \qquad \enspace \text{on} \; \Gamma_{D1}, \\
y(\boldsymbol{\mu})=2 \qquad \qquad \qquad \qquad \qquad \enspace \text{on} \; \Gamma_{D2}
\end{cases}
$$
The corresponding weak formulation comes from solving for the gradient of the Lagrangian function as detailed in the previous tutorial.
Since this problem is recast in the framework of saddle-point problems, the reduced basis problem must satisfy the inf-sup condition, thus an aggregated space for the state and adjoint variables is defined.
End of explanation
class EllipticOptimalControl(EllipticOptimalControlProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
EllipticOptimalControlProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
yup = TrialFunction(V)
(self.y, self.u, self.p) = split(yup)
zvq = TestFunction(V)
(self.z, self.v, self.q) = split(zvq)
self.dx = Measure("dx")(subdomain_data=subdomains)
self.ds = Measure("ds")(subdomain_data=boundaries)
# Regularization coefficient
self.alpha = 0.01
# Store the velocity expression
self.vel = Expression("x[1] * (1 - x[1])", element=self.V.sub(0).ufl_element())
# Customize linear solver parameters
self._linear_solver_parameters.update({
"linear_solver": "mumps"
})
# Return custom problem name
def name(self):
return "EllipticOptimalControl2POD"
# Return theta multiplicative terms of the affine expansion of the problem.
def compute_theta(self, term):
mu = self.mu
if term in ("a", "a*"):
theta_a0 = 1.0 / mu[0]
theta_a1 = 1.0
return (theta_a0, theta_a1)
elif term in ("c", "c*"):
theta_c0 = 1.0
return (theta_c0,)
elif term == "m":
theta_m0 = 1.0
return (theta_m0,)
elif term == "n":
theta_n0 = self.alpha
return (theta_n0,)
elif term == "f":
theta_f0 = 1.0
return (theta_f0,)
elif term == "g":
theta_g0 = mu[1]
theta_g1 = mu[2]
return (theta_g0, theta_g1)
elif term == "h":
theta_h0 = 0.24 * mu[1]**2 + 0.52 * mu[2]**2
return (theta_h0,)
elif term == "dirichlet_bc_y":
theta_bc0 = 1.
return (theta_bc0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
dx = self.dx
if term == "a":
y = self.y
q = self.q
vel = self.vel
a0 = inner(grad(y), grad(q)) * dx
a1 = vel * y.dx(0) * q * dx
return (a0, a1)
elif term == "a*":
z = self.z
p = self.p
vel = self.vel
as0 = inner(grad(z), grad(p)) * dx
as1 = - vel * p.dx(0) * z * dx
return (as0, as1)
elif term == "c":
u = self.u
q = self.q
c0 = u * q * dx
return (c0,)
elif term == "c*":
v = self.v
p = self.p
cs0 = v * p * dx
return (cs0,)
elif term == "m":
y = self.y
z = self.z
m0 = y * z * dx(1) + y * z * dx(2)
return (m0,)
elif term == "n":
u = self.u
v = self.v
n0 = u * v * dx
return (n0,)
elif term == "f":
q = self.q
f0 = Constant(0.0) * q * dx
return (f0,)
elif term == "g":
z = self.z
g0 = z * dx(1)
g1 = z * dx(2)
return (g0, g1)
elif term == "h":
h0 = 1.0
return (h0,)
elif term == "dirichlet_bc_y":
bc0 = [DirichletBC(self.V.sub(0), Constant(i), self.boundaries, i) for i in (1, 2)]
return (bc0,)
elif term == "dirichlet_bc_p":
bc0 = [DirichletBC(self.V.sub(2), Constant(0.0), self.boundaries, i) for i in (1, 2)]
return (bc0,)
elif term == "inner_product_y":
y = self.y
z = self.z
x0 = inner(grad(y), grad(z)) * dx
return (x0,)
elif term == "inner_product_u":
u = self.u
v = self.v
x0 = u * v * dx
return (x0,)
elif term == "inner_product_p":
p = self.p
q = self.q
x0 = inner(grad(p), grad(q)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
Explanation: 3. Affine Decomposition
For this problem the affine decomposition is straightforward.
End of explanation
mesh = Mesh("data/mesh2.xml")
subdomains = MeshFunction("size_t", mesh, "data/mesh2_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/mesh2_facet_region.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh_2.ipynb notebook.
End of explanation
scalar_element = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
element = MixedElement(scalar_element, scalar_element, scalar_element)
V = FunctionSpace(mesh, element, components=["y", "u", "p"])
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = EllipticOptimalControl(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(3.0, 20.0), (0.5, 1.5), (1.5, 2.5)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the EllipticOptimalControl class
End of explanation
pod_galerkin_method = PODGalerkin(problem)
pod_galerkin_method.set_Nmax(20)
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
lifting_mu = (3.0, 1.0, 2.0)
problem.set_mu(lifting_mu)
pod_galerkin_method.initialize_training_set(100)
reduced_problem = pod_galerkin_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (15.0, 0.6, 1.8)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
print("Reduced output for mu =", online_mu, "is", reduced_problem.compute_output())
plot(reduced_solution, reduced_problem=reduced_problem, component="y")
plot(reduced_solution, reduced_problem=reduced_problem, component="u")
plot(reduced_solution, reduced_problem=reduced_problem, component="p")
Explanation: 4.6. Perform an online solve
End of explanation
pod_galerkin_method.initialize_testing_set(100)
pod_galerkin_method.error_analysis()
Explanation: 4.7. Perform an error analysis
End of explanation
pod_galerkin_method.speedup_analysis()
Explanation: 4.8. Perform a speedup analysis
End of explanation |
11,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SC-4-5 Feature Engineering and Classification
Step1: The strategy, unlike our first attempt, requires a real train/test split in the dataset because we're going to fit an actual model (although a true LOO cross validation is still of course possible). But we need a train_test_split function which is able ot deal with lists of NetworkX objects.
Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
We do not necessarily assume that all of the graphs have the same number of vertices, although if there are marked differences, we would need to handle missing data for those graphs which had many fewer eigenvalues (or restrict our slice of the spectrum to the smallest number of eigenvalues present).
Step2: First Classifier
We're going to be using a gradient boosted classifier, which has some of the best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can get 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
Step3: Finding Optimal Hyperparameters | Python Code:
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cPickle as pickle
from copy import deepcopy
from sklearn.utils import shuffle
import sklearn_mmadsen.graphs as skmg
%matplotlib inline
plt.style.use("fivethirtyeight")
sns.set()
all_graphs = pickle.load(open("train-sc-4-5-cont-graphs.pkl",'r'))
all_labels = pickle.load(open("train-sc-4-5-cont-labels.pkl",'r'))
Explanation: SC-4-5 Feature Engineering and Classification
End of explanation
train_graphs, train_labels, test_graphs, test_labels = skmg.graph_train_test_split(all_graphs, all_labels, test_fraction=0.10)
print "train size: %s" % len(train_graphs)
print "test size: %s" % len(test_graphs)
Explanation: The strategy, unlike our first attempt, requires a real train/test split in the dataset because we're going to fit an actual model (although a true LOO cross validation is still of course possible). But we need a train_test_split function which is able to deal with lists of NetworkX objects.
Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
We do not necessarily assume that all of the graphs have the same number of vertices, although if there are marked differences, we would need to handle missing data for those graphs which had many fewer eigenvalues (or restrict our slice of the spectrum to the smallest number of eigenvalues present).
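A minimal sketch of such a feature builder, assuming skmg.graphs_to_eigenvalue_matrix behaves as described here (sorted Laplacian spectra, truncated or zero-padded to a fixed number of eigenvalues) - the actual implementation in sklearn_mmadsen may differ:
def eigenvalue_matrix(graphs, num_eigenvalues=10):
    # One row per graph: its largest Laplacian eigenvalues, sorted in descending order.
    rows = []
    for g in graphs:
        spectrum = np.sort(nx.laplacian_spectrum(g))[::-1]
        row = np.zeros(num_eigenvalues)
        k = min(num_eigenvalues, len(spectrum))
        row[:k] = spectrum[:k]   # zero-pad graphs with fewer vertices than num_eigenvalues
        rows.append(row)
    return np.vstack(rows)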
End of explanation
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
train_matrix = skmg.graphs_to_eigenvalue_matrix(train_graphs, num_eigenvalues=10)
test_matrix = skmg.graphs_to_eigenvalue_matrix(test_graphs, num_eigenvalues=10)
clf = GradientBoostingClassifier(n_estimators = 250)
clf.fit(train_matrix, train_labels)
pred_label = clf.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
Explanation: First Classifier
We're going to be using a gradient boosted classifier, which has some of the best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can get 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
pipeline = Pipeline([
('clf', GradientBoostingClassifier())
])
params = {
'clf__learning_rate': [5.0,2.0,1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.01],
'clf__n_estimators': [10,25,50,100,250,500]
}
grid_search = GridSearchCV(pipeline, params, n_jobs = -1, verbose = 1)
grid_search.fit(train_matrix, train_labels)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters:")
best_params = grid_search.best_estimator_.get_params()
for param in sorted(params.keys()):
print("param: %s: %r" % (param, best_params[param]))
pred_label = grid_search.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
Explanation: Finding Optimal Hyperparameters
End of explanation |
11,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Passband Luminosity
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: And we'll add a single light curve dataset so that we can see how passband luminosities affect the resulting synthetic light curve model.
Step3: Lastly, just to make things a bit easier, we'll turn off limb-darkening and irradiation (reflection) and use blackbody atmospheres.
Step4: Relevant Parameters
The 'pblum_ref' parameter exists for each component-dataset pair and it determines how the intensities for that star in that passband should be scaled, i.e. by the pblum provided by that component ('self') or coupled to the pblum provided by another component.
By default the passband luminosities are coupled (see below for explanations of coupled vs decoupled), with the passband luminosity being defined by the primary component in the system.
Step5: The 'pblum' parameter is only relevant for each component-dataset pair in which pblum_ref=='self'. This component will then have its intensities scaled such that they match the value provided by pblum. In general, a pblum of 4pi will result in an out-of-eclipse flux of ~1.
Step6: NOTE
Step7: Now note that only a single pblum parameter is visible.
Step8: Let's see how changing the value of pblum affects the computed light curve. By default, pblum is set to be 4 pi, giving a total flux for the primary star of ~1.
Since the secondary star in the default binary is identical to the primary star, we'd expect an out-of-eclipse flux of the binary to be ~2.
Step9: If we now set pblum to be only 2 pi, we should expect the entire light curve to be scaled in half.
Step10: And if we halve the temperature of the secondary star - the resulting light curve changes to the new sum of fluxes, where the primary star dominates since the secondary star flux is reduced by a factor of 16, so we expect a total out-of-eclipse flux of ~0.5 + ~0.5/16 = ~0.53.
Step11: Let us undo our changes before we look at decoupled luminosities.
Step12: Decoupled Luminosities
The luminosities are decoupled when pblums are provided for the individual components. To accomplish this, all 'pblum_ref' parameters should be set to 'self'.
Step13: Now we see that both pblums are available and can have different values.
Step14: If we set these to 4pi, then we'd expect each star to contribute 1.0 in flux units, meaning the baseline of the light curve should be at approximately 2.0
Step15: Now let's make a significant temperature-ratio by making a very cool secondary star. Since the luminosities are decoupled - this temperature change won't affect the resulting light curve very much (compare this to the case above with coupled luminosities). What is happening here is that even though the secondary star is cooler, its luminosity is being rescaled to the same value as the primary star, so the eclipse depth doesn't change (you would see a similar lack-of-effect if you changed the radii).
Step16: In most cases you will not want decoupled luminosities as they can easily break the self-consistency of your model.
Now we'll just undo our changes before we look at accessing model luminosities.
Step17: Accessing Model Luminosities
NEW IN PHOEBE 2.1
Step18: By default this exposes pblums for all component-dataset pairs in the form of a dictionary. Alternatively, you can pass a label or list of labels to component and/or dataset.
Step19: Note that this same logic is applied (at t0) to initialize all passband luminosities within the backend, so does not need to be called before run_compute.
In order to access passband luminosities at times other than t0, you can add a mesh dataset and request the pblum column to be exposed. For stars that have pblum defined (as opposed to coupled to another star in the system), this value should be equivalent to the value of the parameter (at t0, and in simple circular cases will probably be equivalent at all times).
Let's create a mesh dataset at a few times and then access the synthetic luminosities.
Step20: Since the luminosities are passband-dependent, they are stored with the same dataset as the light curve (or RV), but with the mesh method, and are available at each of the times at which a mesh was stored.
Step21: Now let's compare the value of the synthetic luminosities to those of the input pblum
Step22: In this case, since our two stars are identical, the synthetic luminosity of the secondary star should be the same as the primary (and the same as pblum@primary).
Step23: However, if we change the temperature of the secondary star again, since the pblums are coupled, we'd expect the synthetic luminosity of the primary to remain fixed but the secondary to decrease.
Step24: Now, we'll just undo our changes before continuing
Step25: Role of Pblum
Let's now look at the intensities in the mesh to see how they're being scaled under-the-hood.
Step26: 'abs_normal_intensities' are the intensities per triangle in absolute units, i.e. W/m^3.
Step27: The values of 'normal_intensities', however, are significantly smaller (in this case). These are the intensities in relative units which will eventually be integrated to give us flux for a light curve.
Step28: 'normal_intensities' are scaled from 'abs_normal_intensities' so that the computed luminosity matches the prescribed luminosity (pblum).
Here we compute the luminosity by summing over each triangle's intensity in the normal direction, and multiply it by pi to account for blackbody intensity emitted in all directions in the solid angle, and by the area of that triangle. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Passband Luminosity
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: And we'll add a single light curve dataset so that we can see how passband luminosities affect the resulting synthetic light curve model.
End of explanation
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0,0])
b.set_value_all('atm', 'blackbody')
b.set_value('irrad_method', 'none')
Explanation: Lastly, just to make things a bit easier, we'll turn off limb-darkening and irradiation (reflection) and use blackbody atmospheres.
End of explanation
print b['pblum_ref']
print b['pblum_ref@primary']
Explanation: Relevant Parameters
The 'pblum_ref' parameter exists for each component-dataset pair and it determines how the intensities for that star in that passband should be scaled, i.e. by the pblum provided by that component ('self') or coupled to the pblum provided by another component.
By default the passband luminosities are coupled (see below for explanations of coupled vs decoupled), with the passband luminosity being defined by the primary component in the system.
End of explanation
print b['pblum']
Explanation: The 'pblum' parameter is only relevant for each component-dataset pair in which pblum_ref=='self'. This component will then have its intensities scaled such that they match the value provided by pblum. In general, a pblum of 4pi will result in an out-of-eclipse flux of ~1.
End of explanation
b['pblum_ref@primary'] = 'self'
b['pblum_ref@secondary'] = 'primary'
Explanation: NOTE: other parameters also affect flux-levels, including limb darkening and distance
Coupled Luminosities
Passband luminosities are considered coupled when a single pblum value is provided, while the passband luminosity of the other component(s) is scaled by the same factor. To accomplish this, ONE pblum_ref in the system must be set as 'self' and ALL OTHER pblum_ref parameters must refer to that component. This is the default case, set explicitly by:
End of explanation
print b['pblum']
Explanation: Now note that only a single pblum parameter is visible.
End of explanation
b.run_compute()
afig, mplfig = b.plot(show=True)
Explanation: Let's see how changing the value of pblum affects the computed light curve. By default, pblum is set to be 4 pi, giving a total flux for the primary star of ~1.
Since the secondary star in the default binary is identical to the primary star, we'd expect an out-of-eclipse flux of the binary to be ~2.
End of explanation
b['pblum@primary'] = 2 * np.pi
b.run_compute()
afig, mplfig = b.plot(show=True)
Explanation: If we now set pblum to be only 2 pi, we should expect the entire light curve to be scaled in half.
End of explanation
b['teff@secondary'] = 0.5 * b.get_value('teff@primary')
print b['teff']
b.run_compute()
afig, mplfig = b.plot(show=True)
Explanation: And if we halve the temperature of the secondary star - the resulting light curve changes to the new sum of fluxes, where the primary star dominates since the secondary star flux is reduced by a factor of 16, so we expect a total out-of-eclipse flux of ~0.5 + ~0.5/16 = ~0.53.
End of explanation
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
Explanation: Let us undo our changes before we look at decoupled luminosities.
End of explanation
b.set_value_all('pblum_ref', 'self')
Explanation: Decoupled Luminosities
The luminosities are decoupled when pblums are provided for the individual components. To accomplish this, all 'pblum_ref' parameters should be set to 'self'.
End of explanation
print b['pblum']
Explanation: Now we see that both pblums are available and can have different values.
End of explanation
b.set_value_all('pblum', 4*np.pi)
b.run_compute()
afig, mplfig = b.plot(show=True)
Explanation: If we set these to 4pi, then we'd expect each star to contribute 1.0 in flux units, meaning the baseline of the light curve should be at approximately 2.0
End of explanation
print b['teff']
b['teff@secondary'] = 3000
b.run_compute()
afig, mplfig = b.plot(show=True)
Explanation: Now let's make a significant temperature-ratio by making a very cool secondary star. Since the luminosities are decoupled - this temperature change won't affect the resulting light curve very much (compare this to the case above with coupled luminosities). What is happening here is that even though the secondary star is cooler, its luminosity is being rescaled to the same value as the primary star, so the eclipse depth doesn't change (you would see a similar lack-of-effect if you changed the radii).
End of explanation
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
b['pblum_ref@primary'] = 'self'
b['pblum_ref@secondary'] = 'primary'
Explanation: In most cases you will not want decoupled luminosities as they can easily break the self-consistency of your model.
Now we'll just undo our changes before we look at accessing model luminosities.
End of explanation
print b.compute_pblums()
Explanation: Accessing Model Luminosities
NEW IN PHOEBE 2.1: Passband luminosities at t0@system per-star (including following all coupling logic) can be computed and exposed on the fly by calling compute_pblums.
End of explanation
print b.compute_pblums(dataset='lc01', component='primary')
Explanation: By default this exposes pblums for all component-dataset pairs in the form of a dictionary. Alternatively, you can pass a label or list of labels to component and/or dataset.
End of explanation
b.add_dataset('mesh', times=np.linspace(0,1,5), dataset='mesh01', columns=['areas', 'pblum@lc01', 'ldint@lc01', 'ptfarea@lc01', 'abs_normal_intensities@lc01', 'normal_intensities@lc01'])
b.run_compute()
Explanation: Note that this same logic is applied (at t0) to initialize all passband luminosities within the backend, so does not need to be called before run_compute.
In order to access passband luminosities at times other than t0, you can add a mesh dataset and request the pblum column to be exposed. For stars that have pblum defined (as opposed to coupled to another star in the system), this value should be equivalent to the value of the parameter (at t0, and in simple circular cases will probably be equivalent at all times).
Let's create a mesh dataset at a few times and then access the synthetic luminosities.
End of explanation
print b.filter(qualifier='pblum', context='model').twigs
Explanation: Since the luminosities are passband-dependent, they are stored with the same dataset as the light curve (or RV), but with the mesh method, and are available at each of the times at which a mesh was stored.
End of explanation
t0 = b.get_value('t0@system')
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value('pblum@primary@dataset')
Explanation: Now let's compare the value of the synthetic luminosities to those of the input pblum
End of explanation
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value(qualifier='pblum', time=t0, component='secondary', kind='mesh', context='model')
Explanation: In this case, since our two stars are identical, the synthetic luminosity of the secondary star should be the same as the primary (and the same as pblum@primary).
End of explanation
b['teff@secondary@component'] = 3000
print b.compute_pblums()
b.run_compute()
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value(qualifier='pblum', time=t0, component='secondary', kind='mesh', context='model')
Explanation: However, if we change the temperature of the secondary star again, since the pblums are coupled, we'd expect the synthetic luminosity of the primary to remain fixed but the secondary to decrease.
End of explanation
b.set_value_all('teff@component', 6000)
Explanation: Now, we'll just undo our changes before continuing
End of explanation
areas = b.get_value(qualifier='areas', dataset='mesh01', time=t0, component='primary', unit='m^2')
ldint = b.get_value(qualifier='ldint', component='primary', time=t0)
ptfarea = b.get_value(qualifier='ptfarea', component='primary', time=t0)
abs_normal_intensities = b.get_value(qualifier='abs_normal_intensities', dataset='lc01', time=t0, component='primary')
normal_intensities = b.get_value(qualifier='normal_intensities', dataset='lc01', time=t0, component='primary')
Explanation: Role of Pblum
Let's now look at the intensities in the mesh to see how they're being scaled under-the-hood.
End of explanation
np.median(abs_normal_intensities)
Explanation: 'abs_normal_intensities' are the intensities per triangle in absolute units, i.e. W/m^3.
End of explanation
np.median(normal_intensities)
Explanation: The values of 'normal_intensities', however, are significantly smaller (in this case). These are the intensities in relative units which will eventually be integrated to give us flux for a light curve.
End of explanation
pblum = b.get_value(qualifier='pblum', component='primary', context='dataset')
print np.sum(normal_intensities * ldint * np.pi * areas) * ptfarea, pblum
Explanation: 'normal_intensities' are scaled from 'abs_normal_intensities' so that the computed luminosity matches the prescribed luminosity (pblum).
Here we compute the luminosity by summing over each triangle's intensity in the normal direction, and multiply it by pi to account for blackbody intensity emitted in all directions in the solid angle, and by the area of that triangle.
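In formula form, matching the code cell above (treating ldint per element as returned from the mesh - an assumption about how the column is stored):
$$L_\mathrm{pb} \approx \mathrm{ptfarea} \times \sum_i I_{\perp,i} \, \mathrm{ldint}_i \, \pi \, A_i \approx \mathrm{pblum}$$
where $I_{\perp,i}$ are the normal_intensities and $A_i$ the triangle areas.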
End of explanation |
11,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
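As an illustration only (hypothetical selections, not a description of this model; whether repeated calls accumulate values for a 0.N property is an assumption about the pyesdoc API), the enumeration could be filled in as:
DOC.set_value("water")
DOC.set_value("energy")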
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
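Since the cardinality is 0.N, the PROPERTY VALUE(S) convention above suggests recording each selected choice with its own set_value call; the two choices picked here are illustrative only.
# Hypothetical multi-valued answer -- one call per selected choice
DOC.set_value("vegetation type")
DOC.set_value("vegetation state")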
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
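For a single-valued ENUM (cardinality 1.1) the answer is exactly one of the listed choices; the pick below is illustrative only.
# Hypothetical choice from the valid list above
DOC.set_value("Explicit diffusion")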
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
11,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Distribution Analysis of the data
Now that we have familiarity with the basic characteristics, let's look at the distribution of various variables, starting with the continuous variables
Distribution analysis of continuous variables using describe()
Step1: Distribution analysis of categorical variables using value_counts()
Step2: Converting the categorical predictor to numeric using Label Encoder from the sklearn library
Step3: Data Visualization of various predictors for this study
Step4: Distributions of observations within categories
At a certain point, the categorical scatterplot approach becomes limited in the information it can provide about the distribution of values within each category. There are several ways to summarize this information in ways that facilitate easy comparisons across the category levels.
Boxplots
This kind of plot shows the three quartile values of the distribution along with extreme values. The "whiskers" extend to points that lie within 1.5 IQRs of the lower and upper quartile, and then observations that fall outside this range are displayed independently. Importantly, this means that each value in the boxplot corresponds to an actual observation in the data.
Step5: Violinplots
A different approach is a violinplot(), which combines a boxplot with the kernel density estimation procedure.
Step6: Statistical distribution within categories
Often, rather than showing the distribution within each category, you might want to show the central tendency of the values.
Barplots
Step7: A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable. In seaborn, itโs easy to do so with the countplot() function
Step8: Point plots
An alternative style for visualizing the same information is offered by the pointplot() function. This function also encodes the value of the estimate with height on the other axis, but rather than show a full bar it just plots the point estimate and confidence interval. Additionally, pointplot connects points from the same hue category. This makes it easy to see how the main relationship is changing as a function of a second variable | Python Code:
sub1.describe()
Explanation: Distribution Analysis of the data
Now that we have familiarity with the basic characteristics, let's look at the distribution of various variables, starting with the continuous variables
Distribution analysis of continuous variables using describe()
End of explanation
sub1['extraction_type_class'].value_counts()
sub1['payment_type'].value_counts()
sub1['quality_group'].value_counts()
sub1['quantity_group'].value_counts()
sub1['waterpoint_type_group'].value_counts()
sub1['water_quality'].value_counts()
sub1['source_type'].value_counts()
Explanation: Distribution analysis of categorical variables using value_counts()
End of explanation
from sklearn.preprocessing import LabelEncoder
var_mod = ['extraction_type_class','payment_type','quality_group','quantity_group','waterpoint_type_group','water_quality','source_type']
le = LabelEncoder()
for i in var_mod:
    sub1[i] = le.fit_transform(sub1[i])
sub1.dtypes
Explanation: Converting the categorical predictor to numeric using Label Encoder from the sklearn library
End of explanation
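A hedged alternative to the loop above (not part of the original analysis): keeping one fitted encoder per column preserves the ability to map the numeric codes back to the original labels.
# One LabelEncoder per column so inverse_transform stays available
encoders = {}
for i in var_mod:
    encoders[i] = LabelEncoder()
    sub1[i] = encoders[i].fit_transform(sub1[i])
# e.g. recover the original category labels of one column
encoders['payment_type'].inverse_transform(sub1['payment_type'])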
%matplotlib inline
sub1['permit'].hist(bins=10)
t1=pd.crosstab(sub2['water_quality'],sub2['source_type'])
t1.plot(kind='bar', stacked=True, grid=False, legend=True, title="Water quality based on type of water source")
t2=pd.crosstab(sub2['source_type'],sub2['payment_type'])
t2.plot(kind='bar', stacked=True, grid=False, legend=True, title="Water quality and types of payment")
%matplotlib inline
sns.violinplot(x=sub1.extraction_type_class, y=sub1.source_type)
%matplotlib inline
sns.pointplot(x="extraction_type_class", y="water_quality", data=sub1)
%matplotlib inline
sns.violinplot(x="waterpoint_type_group", y="source_type", hue="water_quality", data=sub1)
Explanation: Data Visualization of various predictors for this study
End of explanation
sns.boxplot(x="source_type", y="payment_type",data=sub1)
Explanation: Distributions of observations within categories
At a certain point, the categorical scatterplot approach becomes limited in the information it can provide about the distribution of values within each category. There are several ways to summarize this information in ways that facilitate easy comparisons across the category levels.
Boxplots
This kind of plot shows the three quartile values of the distribution along with extreme values. The "whiskers" extend to points that lie within 1.5 IQRs of the lower and upper quartile, and then observations that fall outside this range are displayed independently. Importantly, this means that each value in the boxplot corresponds to an actual observation in the data.
End of explanation
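As a rough sketch of what the whiskers mean (the column choice is only for illustration), the 1.5 IQR bounds can be computed by hand:
# Quartiles and the 1.5*IQR whisker bounds for one column
q1 = sub1['payment_type'].quantile(0.25)
q3 = sub1['payment_type'].quantile(0.75)
iqr = q3 - q1
q1 - 1.5 * iqr, q3 + 1.5 * iqr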
sns.violinplot(x="extraction_type_class", y="source_type", scale="count", data=sub1)
sns.violinplot(x="extraction_type_class", y="source_type", split=True, data=sub1)
Explanation: Violinplots
A different approach is a violinplot(), which combines a boxplot with the kernel density estimation procedure.
End of explanation
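A hedged variant of the violinplot above: split violins need a two-level hue variable, and using the permit column for that purpose is an assumption made only for this sketch.
# Split violins -- one half of each violin per level of a two-level hue
sns.violinplot(x="extraction_type_class", y="source_type", hue="permit", split=True, data=sub1)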
sns.barplot(x="extraction_type_class", y="source_type", data=sub1)
sns.barplot(x="quantity_group", y="quality_group", data=sub1)
Explanation: Statistical distribution within categories
Often, rather than showing the distribution within each category, you might want to show the central tendency of the values.
Barplots
End of explanation
sns.countplot(x="quality_group", data=data)
sns.countplot(y="payment_type", data=sub1)
Explanation: A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable. In seaborn, it's easy to do so with the countplot() function
End of explanation
sns.pointplot(x="extraction_type_class", y="waterpoint_type_group", data=sub1)
sns.pointplot(x="extraction_type_class", y="water_quality", markers=["^", "o"], linestyles=["-", "--"],data=sub1)
Explanation: Point plots
An alternative style for visualizing the same information is offered by the pointplot() function. This function also encodes the value of the estimate with height on the other axis, but rather than show a full bar it just plots the point estimate and confidence interval. Additionally, pointplot connects points from the same hue category. This makes it easy to see how the main relationship is changing as a function of a second variable
End of explanation |
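A hedged sketch adding a hue variable so the connected-lines behaviour is visible; the choice of permit as the hue column is an assumption for illustration only.
# pointplot with hue: one line of point estimates per hue level
sns.pointplot(x="quantity_group", y="source_type", hue="permit", data=sub1)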
11,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: ipython shell
Tab Completion and History Search is Great
Start typing and use the 'tab' key for auto complete. Can use this on python functions, modules, variables, files, and more...
iPython stores history. You can search it (ctrl-r) or you can use the up and down arrows.
Run a file
Run it and get access to the functions and modules inside
(notice I used 'tab' completion). Try using the 'up' arrow and running it again.
Here's something else useful
Step2: Ok, backup, how do I interact with the notebook and what is it?
First step, the notebook is made up of cells. These cells can be of different types. If you create a cell (click the '+' sign on the menu bar) then you can use the pull down to make it either
code
Step3: So, you exectued this by hitting the 'play' button in the tool bar or you used 'shift-enter'. Some other ways
Step4: My favs...
Step5: MUCHS INFOS
Step6: Magics for running code under other interpreters
IPython has a %%script cell magic, which lets you run a cell in a subprocess of any interpreter on your system, such as
Step7: Exercise
Step8: Hints
Step9: Running Shell Commands
There are some magics for some shell commands (like ls and pwd - this is really great on a Windows system by the way) but you can also run arbitrary system commands with a '!'.
Step10: Managing the IPython Kernel
Code is run in a separate process called the IPython Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the "Stop" button in the toolbar above.
Step11: If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via
ctypes to segfault the Python interpreter
Step12: Side note on versions
There can be (and probably are) different python versions and ipython versions. This is normal. Don't Panic. Everybody got their towel?
Step13: I promised you latex!
You just need to surround equations in markdown with '$' signs.
$y = x^{2}$
$\frac{dN}{dE} = \frac{N_{\text{peak}}}{E_{\text{peak}}} (E/E_{\text{peak}})^{\gamma} (e^{1 - E/E_{\text{peak}}})^{\gamma+2}$
Note that it's using MathJax for the rendering so if you're offline, you might not get latex.
Running a remote file!
Note
Step14: Debugging
iPython has a powerful debugger. Let's see how it works a bit
Step15: Lots of useful features in the python debugger (and could probably do with a seperate lecture). We'll save that for another day...
nbconvert
You can convert notebooks into lots of different formats with the nbconvert command (type 'ipython nbconvert' for all of the options). Example
Step18: The %cython magic
Probably the most important magic is the %cython magic. The %%cython magic manages everything using temporary files in the ~/.ipython/cython/ directory. All of the symbols in the Cython module are imported automatically by the magic.
Cython is a way of running C code inside IPython. Sometimes a C function can be much faster than the equivalent function in Python.
%%writefile hello.py
#!/usr/bin/env python
def printHello():
print "Hello World"
print "File Loaded"
Explanation: Note: This is basically a grab-bag of things...
Advanced iPython
iPython: interactive Python
Many different ways to work with Python:
type 'python' from the command line
run a python script/program from the command line ('python my_prog.py')
iPython adds functionallity and interactivity to python that makes it more useful in your day-to-day life.
It is an interactive shell for the Python programming language that offers enhanced introspection, additional shell syntax, tab completion and rich history.
Fernando Pรฉrez, Brian E. Granger, IPython: A System for Interactive Scientific Computing, Computing in Science and Engineering, vol. 9, no. 3, pp. 21-29, May/June 2007, doi:10.1109/MCSE.2007.53. URL: http://ipython.org
python shell
Go out of the notebook and play with the python shell. Show some of the limitations.
run a python script
Go out of the notebook and create a python script to run.
hello.py:
End of explanation
cat mystuff.ipynb
Explanation: ipython shell
Tab Completion and History Search is Great
Start typing and use the 'tab' key for auto complete. Can use this on python functions, modules, variables, files, and more...
iPython stores history. You can search it (ctrl-r) or you can use the up and down arrows.
Run a file
Run it and get access to the functions and modules inside
(notice I used 'tab' completion). Try using the 'up' arrow and running it again.
Here's something else useful:
(what did this do...)
Note that this is a 'magic command' - I'll talk about this in a bit.
The four most helpful commands
<table>
<thead valign="bottom">
<tr class="row-odd"><th class="head">command</th>
<th class="head">description</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td>?</td>
<td>Introduction and overview of IPython's features.</td>
</tr>
<tr class="row-odd"><td>%quickref</td>
<td>Quick reference.</td>
</tr>
<tr class="row-even"><td>help</td>
<td>Python's own help system.</td>
</tr>
<tr class="row-odd"><td>object?</td>
<td>Details about 'object', use 'object??' for extra details.</td>
</tr>
</tbody>
</table>
The notebook (ipython notebook or jupyter)
How does this work... Let's look at what it says about itself (from http://ipython.org/ipython-doc/3/notebook/notebook.html)
Introduction
The notebook extends the console-based approach to interactive computing in a qualitatively new direction, providing a web-based application suitable for capturing the whole computation process: developing, documenting, and executing code, as well as communicating the results. The IPython notebook combines two components:
A web application: a browser-based tool for interactive authoring of documents which combine explanatory text, mathematics, computations and their rich media output.
Notebook documents: a representation of all content visible in the web application, including inputs and outputs of the computations, explanatory text, mathematics, images, and rich media representations of objects.
Main features of the web application
In-browser editing for code, with automatic syntax highlighting, indentation, and tab completion/introspection.
The ability to execute code from the browser, with the results of computations attached to the code which generated them.
Displaying the result of computation using rich media representations, such as HTML, LaTeX, PNG, SVG, etc. For example, publication-quality figures rendered by the matplotlib library,
can be included inline.
In-browser editing for rich text using the Markdown markup language, which can provide commentary for the code, is not limited to plain text.
The ability to easily include mathematical notation within markdown cells using LaTeX, and rendered natively by MathJax.
What is a notebook:
It's just a JSON formatted text file. Let's look at the really simple one we just created from the iPython interpreter.
End of explanation
2+4
Explanation: Ok, backup, how do I interact with the notebook and what is it?
First step, the notebook is made up of cells. These cells can be of different types. If you create a cell (click the '+' sign on the menu bar) then you can use the pull down to make it either
code: actual python code you want to execute
markdown: notes in markdown format
raw: raw text (like code you want to display like the json code above)
heading: you can make a cell a heading
Let's play with the four types below:
Code:
End of explanation
%timeit range(1000)
%%timeit x = range(10000)
max(x)
%lsmagic
Explanation: So, you executed this by hitting the 'play' button in the toolbar or you used 'shift-enter'. Some other ways:
Shift-enter: run cell, go to next cell
Ctrl-enter: run cell in place
Alt-enter: run cell, insert below
Markdown
Fancy Markdown Cell
code code code
Bullet 1
Bullet 2
numbered
numbered
Some verbose words
Markdown Reference
Raw text
Can I do this quickly?
Yep - take a look at the 'Keyboard Shortcuts' menu.
First of all, there are two 'modes': 'command' and 'edit'. When you're in a cell, you're in 'edit mode' and when you're out of the cell you're in 'command mode' You go into 'command mode' by hitting 'esc' and into edit by hitting 'return'. Try it a couple times. Move up and down through cells with the arrow keys when in 'command mode'.
go through the keyboard shortcuts
My Favs:
* r,m,y in command mode
* โZ : undo
* d: delete
* (also, Ctrl-e and Ctrl-a work in a cell; if you live in unix, you'll understand why this is awesome)
Once you start getting the shortcuts down, you're crazy productive.
Exercise: Copy and Paste is Neat
Try copy and pasting our for loop code from the iPython console into a cell (only use the keyboard):
The menubar and toolbar
Let's go over all of these functions and talk about what they do...
It's like Magic (functions)
IPython has a set of predefined "magic functions" that you can call with a command line style syntax. There are two kinds of magics, line-oriented and cell-oriented. Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes. Cell magics are prefixed with a double %%, and they are functions that get as an argument not only the rest of the line, but also the lines below it in a separate argument.
Examples:
You've already seen a few magic functions above (%run and %notebook). Here's some others.
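If you want to see how a magic is wired up, here is a toy sketch of registering your own line magic (the name shout is made up for this example, and it has to be run inside IPython):
```python
# Toy example (not from the original notebook): register a custom line magic.
# Run this inside IPython/Jupyter, then call it with:  %shout hello ipython
from IPython.core.magic import register_line_magic

@register_line_magic
def shout(line):
    """Return the rest of the line upper-cased."""
    return line.upper()
```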
End of explanation
ls
%matplotlib inline
Explanation: My favs...
End of explanation
%%capture capt
from __future__ import print_function
import sys
print('Hello stdout')
print('and stderr', file=sys.stderr)
capt.stdout, capt.stderr
capt.show()
Explanation: MUCHS INFOS: https://ipython.org/ipython-doc/dev/interactive/magics.html
Some others to try: %edit, %capture
End of explanation
%%script python
import sys
print 'hello from Python %s' % sys.version
%%bash
echo "hello from $BASH"
Explanation: Magics for running code under other interpreters
IPython has a %%script cell magic, which lets you run a cell in a subprocess of any interpreter on your system, such as: bash, ruby, perl, zsh, R, etc.
It can even be a script of your own, which expects input on stdin.
To use it, simply pass a path or shell command to the program you want to run on the %%script line, and the rest of the cell will be run by that script, and stdout/err from the subprocess are captured and displayed.
End of explanation
%%script ./lnum.py
my first line
my second
more
Explanation: Exercise: write your own script that numbers input lines
Write a file, called lnum.py, such that the following cell works as shown (hint: don't forget about the executable bit!):
End of explanation
a = 3
b = 4
a + b
a*b
a - b
_
___
_49
Out[62]
_i
In[50]
Explanation: Hints:
Useful function: sys.stdin.readlines()
Another useful function: enumerate()
You could use the notebook to query what these do...
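If you get stuck, here is one possible sketch of lnum.py (your formatting and starting number may differ; remember to chmod +x it):
```python
#!/usr/bin/env python
# One possible lnum.py: prefix each line read from stdin with its number
import sys

for i, line in enumerate(sys.stdin.readlines()):
    sys.stdout.write('%d: %s' % (i, line))
```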
Out and In
You can access the input and output of previous cells.
End of explanation
!python --version
!ping www.google.com
Explanation: Running Shell Commands
There are some magics for some shell commands (like ls and pwd - this is really great on a Windows system, by the way) but you can also run arbitrary system commands with a '!'.
End of explanation
import time
time.sleep(10)
Explanation: Managing the IPython Kernel
Code is run in a separate process called the IPython Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the "Stop" button in the toolbar above.
End of explanation
import sys
from ctypes import CDLL
# This will crash a Linux or Mac system; equivalent calls can be made on Windows
dll = 'dylib' if sys.platform == 'darwin' else 'so.6'
libc = CDLL("libc.%s" % dll)
libc.time(-1) # BOOM!!
Explanation: If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via
ctypes to segfault the Python interpreter:
End of explanation
!python --version
!ipython --version
Explanation: Side note on versions
There can be (and probably are) different python versions and ipython versions. This is normal. Don't Panic. Everybody got their towel?
End of explanation
%load http://matplotlib.sourceforge.net/mpl_examples/pylab_examples/integral_demo.py
Explanation: I promised you latex!
You just need to surround equations in markdown with '$' signs.
$y = x^{2}$
$\frac{dN}{dE} = \frac{N_{\text peak}}{E_{\text peak}} (E/E_{\text peak})^{\gamma} (e^{1 - E/E_{\text peak}})^{\gamma+2}$
Note that it's using MathJax for the rendering so if you're offline, you might not get latex.
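You can also build LaTeX output from code using IPython's display tools, for example:
```python
# Rendering a LaTeX string programmatically (any valid LaTeX math works here)
from IPython.display import Math, display

display(Math(r'\frac{dN}{dE} = \frac{N_{peak}}{E_{peak}} (E/E_{peak})^{\gamma}'))
```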
Running a remote file!
Note: do this after the plotting bit.
End of explanation
%pdb
Explanation: Debugging
iPython has a powerful debugger. Let's see how it works a bit:
When iPython encounters an exception (in this case a ZeroDivisionError) it'll drop us into a pdb session if we use the %debug magic. Commands:
? for "help"
? s for "help for command s"
l for "some more context"
s for "step into"
n for "step over"
c for "continue to next breakpoint"
You can also turn on automatic debugging.
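A throwaway cell like the one below (made up just for this demo) is an easy way to land in the debugger once %pdb is switched on:
```python
# With %pdb on, running this drops you straight into pdb at the ZeroDivisionError
def buggy(x):
    return 1.0 / (x - x)   # always divides by zero

buggy(3)
```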
End of explanation
!ipython nbconvert mystuff.ipynb --to pdf
Explanation: Lots of useful features in the python debugger (and could probably do with a separate lecture). We'll save that for another day...
nbconvert
You can convert notebooks into lots of different formats with the nbconvert command (type 'ipython nbconvert' for all of the options). Example:
End of explanation
#Note that I had to install cython to get this to work.
# try doing 'conda update cython' if you get an error
%load_ext Cython
%%cython
cimport numpy
cpdef cysum(numpy.ndarray[double] A):
    '''Compute the sum of an array'''
cdef double a=0
for i in range(A.shape[0]):
a += A[i]
return a
def pysum(A):
    '''Compute the sum of an array'''
a = 0
for i in range(A.shape[0]):
a += A[i]
return a
import numpy as np
for sz in (100, 1000, 10000):
A = np.random.random(sz)
print("==>Python %i" % sz, end=' ')
%timeit pysum(A)
print("==>np.sum %i" % sz, end=' ')
%timeit A.sum()
print("==>Cython %i" % sz, end=' ')
%timeit cysum(A)
Explanation: The %cython magic
Probably the most important magic is the %cython magic. The %%cython magic manages everything using temporary files in the ~/.ipython/cython/ directory. All of the symbols in the Cython module are imported automatically by the magic.
cython is a way of running C code inside IPython. Sometimes a C function can be much faster than the equivalent function in Python.
End of explanation |
11,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using a CFSv2 forecast
CFSv2 is a seasonal forecast system, used for analysing past climate and also making seasonal, up to 9-month, forecasts. Here we give a brief example on how to use Planet OS API to merge 9-month forecasts started at different initial times, into a single ensemble forecast.
Ensemble forecasting is a traditional technique in medium range (up to 10 days) weather forecasts, seasonal forecasts and climate modelling. By changing initial conditions or model parameters, a range of forecasts is created, which differ from each other slightly, due to the chaotic nature of fluid dynamics (which weather modelling is a subset of). For weather forecasting, the ensemble is usually created by small changes in initial conditions, but for seasonal forecast, it is much easier to just take real initial conditions every 6-hours. Here we are going to show, first how to merge the different dates into a single plot with the help of python pandas library, and in addition we show that even 6-hour changes in initial conditions can lead to large variability in long range forecasts.
If you have more interest in Planet OS API, please refer to our official documentation.
Please also note that the API_client python routine, used in this notebook, is still experimental and will change in the future, so take it just as a guidance using the API, and not as an official tool.
Step1: The API needs a file APIKEY with your API key in the work folder. We initialize a datahub and dataset objects.
Step2: In order for the automatic location selection to work, add your custom location to the API_client.python.lib.predef_locations file.
Step3: Here we clean the table just a bit and create time based index.
Step4: Next, we resample the data to 1-month totals.
Step5: Finally, we are visualizing the monthly precipitation for each different forecast, in a single plot. | Python Code:
%matplotlib notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from API_client.python import datahub
from API_client.python.lib import dataset
from API_client.python.lib import variables
Explanation: Using a CFSv2 forecast
CFSv2 is a seasonal forecast system, used for analysing past climate and also making seasonal, up to 9-month, forecasts. Here we give a brief example on how to use Planet OS API to merge 9-month forecasts started at different initial times, into a single ensemble forecast.
Ensemble forecasting is a traditional technique in medium range (up to 10 days) weather forecasts, seasonal forecasts and climate modelling. By changing initial conditions or model parameters, a range of forecasts is created, which differ from each other slightly, due to the chaotic nature of fluid dynamics (which weather modelling is a subset of). For weather forecasting, the ensemble is usually created by small changes in initial conditions, but for seasonal forecast, it is much easier to just take real initial conditions every 6-hours. Here we are going to show, first how to merge the different dates into a single plot with the help of python pandas library, and in addition we show that even 6-hour changes in initial conditions can lead to large variability in long range forecasts.
If you have more interest in Planet OS API, please refer to our official documentation.
Please also note that the API_client python routine, used in this notebook, is still experimental and will change in the future, so take it just as a guidance using the API, and not as an official tool.
End of explanation
dh = datahub.datahub(server='api.planetos.com',version='v1')
ds = dataset.dataset('ncep_cfsv2', dh, debug=False)
ds.vars=variables.variables(ds.variables(), {'reftimes':ds.reftimes,'timesteps':ds.timesteps},ds)
Explanation: The API needs a file APIKEY with your API key in the work folder. We initialize a datahub and dataset objects.
End of explanation
for locat in ['Võru']:
ds.vars.Convective_Precipitation_Rate_surface.get_values(count=1000, location=locat, reftime='2018-04-20T18:00:00',
reftime_end='2018-05-02T18:00:00')
ds.vars.Maximum_temperature_height_above_ground.get_values(count=1000, location=locat, reftime='2018-04-20T18:00:00',
reftime_end='2018-05-02T18:00:00')
## uncomment following line to see full pandas table
## ds.vars.Convective_Precipitation_Rate_surface.values['Võru']
Explanation: In order for the automatic location selection to work, add your custom location to the API_client.python.lib.predef_locations file.
End of explanation
ddd = ds.vars.Convective_Precipitation_Rate_surface.values['Võru'][['reftime','time','Convective_Precipitation_Rate_surface']]
dd_test=ddd.set_index('time')
Explanation: Here we clean the table just a bit and create time based index.
End of explanation
reft_unique = ds.vars.Convective_Precipitation_Rate_surface.values['Võru']['reftime'].unique()
nf = []
for reft in reft_unique:
abc = dd_test[dd_test.reftime==reft].resample('M').sum()
abc['Convective_Precipitation_Rate_surface'+'_'+reft.astype(str)] = \
abc['Convective_Precipitation_Rate_surface']*6*3600
del abc['Convective_Precipitation_Rate_surface']
nf.append(abc)
nf2=pd.concat(nf,axis=1)
# uncomment to see full pandas table
nf2
Explanation: Next, we resample the data to 1-month totals.
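The core of that step is just pandas resampling on a time index; here is a tiny self-contained illustration with synthetic numbers (not the CFSv2 output):
```python
# Toy version of the monthly-total resampling (synthetic data, assumed shapes)
import numpy as np
import pandas as pd

idx = pd.date_range('2018-04-01', periods=240, freq='6H')
precip_rate = pd.Series(np.random.rand(240), index=idx)
# turn a 6-hourly rate into an accumulated amount, then sum per calendar month
monthly_total = (precip_rate * 6 * 3600).resample('M').sum()
print(monthly_total)
```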
End of explanation
fig=plt.figure(figsize=(10,8))
nf2.transpose().boxplot()
plt.ylabel('Monthly precipitation mm')
fig.autofmt_xdate()
plt.show()
Explanation: Finally, we are visualizing the monthly precipitation for each different forecast, in a single plot.
End of explanation |
11,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Network - Statistical Encoding - Microsoft Malware
There aren't any examples of using a neural network to model Microsoft Malware, so I thought I'd post one. Also in this kernel, I show statistical one-hot-encoding where only boolean variables that are idependently statistically significant are created.
Load Train.csv
Step1: Statistically Encode Variables
All four variables in the Python variable list FE will get frequency encoded and all thirty-nine variables in list OHE will get statistically one-hot-encoded. In total, forty-three variables are imported from the training csv while thirty-nine were ignored.
Among all our category variables, there are a combined 211,562 values! So we can't one-hot-encode all. (Note that this is without Census_OEMModelIdentifier's 175,366 or Census_SystemVolumeTotalCapacity's 536,849) We will use a trick from statistics. First we'll assume we have a random sample. (Which we don't actually have, but let's pretend.) Then for each value, we will test the following hypotheses
$$H_0
Step2: Example - Census_OEMModelIdentifier
Below is variable Census_OEMModelIdentifier. Observe how NAN is treated like a category value and that it has consistently had the lowest HasDetections rate all of year 2018. Also notice how value=245824 has consistently been high. Finally note that value=188345 and 248045 are high and low respectively in August and September but earlier in the year their positions were reversed! What will their positions be in the test set's October and November computers??
Build and Train Network
We will a build a 3 layer fully connected network with 100 neurons on each hidden layer. We will use ReLU activation, Batch Normalization, 40% Dropout, Adam Optimizer, and Decaying Learning Rate. Unfortunately we don't have an AUC loss function, so we will use Cross Entrophy instead. After each epoch, we will call a custom Keras callback to display the current AUC and continually save the best model.
Step3: Predict Test and Submit to Kaggle
Even after deleting the training data, our network still needs lot of our available RAM, we'll need to load in test.csv by chunks and predict by chunks. Click 'see code' button to see how this is done. | Python Code:
# IMPORT LIBRARIES
import pandas as pd, numpy as np, os, gc
# LOAD AND FREQUENCY-ENCODE
FE = ['EngineVersion','AppVersion','AvSigVersion','Census_OSVersion']
# LOAD AND ONE-HOT-ENCODE
OHE = [ 'RtpStateBitfield','IsSxsPassiveMode','DefaultBrowsersIdentifier',
'AVProductStatesIdentifier','AVProductsInstalled', 'AVProductsEnabled',
'CountryIdentifier', 'CityIdentifier',
'GeoNameIdentifier', 'LocaleEnglishNameIdentifier',
'Processor', 'OsBuild', 'OsSuite',
'SmartScreen','Census_MDC2FormFactor',
'Census_OEMNameIdentifier',
'Census_ProcessorCoreCount',
'Census_ProcessorModelIdentifier',
'Census_PrimaryDiskTotalCapacity', 'Census_PrimaryDiskTypeName',
'Census_HasOpticalDiskDrive',
'Census_TotalPhysicalRAM', 'Census_ChassisTypeName',
'Census_InternalPrimaryDiagonalDisplaySizeInInches',
'Census_InternalPrimaryDisplayResolutionHorizontal',
'Census_InternalPrimaryDisplayResolutionVertical',
'Census_PowerPlatformRoleName', 'Census_InternalBatteryType',
'Census_InternalBatteryNumberOfCharges',
'Census_OSEdition', 'Census_OSInstallLanguageIdentifier',
'Census_GenuineStateName','Census_ActivationChannel',
'Census_FirmwareManufacturerIdentifier',
'Census_IsTouchEnabled', 'Census_IsPenCapable',
'Census_IsAlwaysOnAlwaysConnectedCapable', 'Wdft_IsGamer',
'Wdft_RegionIdentifier']
# LOAD ALL AS CATEGORIES
dtypes = {}
for x in FE+OHE: dtypes[x] = 'category'
dtypes['MachineIdentifier'] = 'str'
dtypes['HasDetections'] = 'int8'
# LOAD CSV FILE
df_train = pd.read_csv('../input/train.csv', usecols=dtypes.keys(), dtype=dtypes)
print ('Loaded',len(df_train),'rows of TRAIN.CSV!')
# DOWNSAMPLE
sm = 2000000
df_train = df_train.sample(sm)
print ('Only using',sm,'rows to train and validate')
x=gc.collect()
Explanation: Neural Network - Statistical Encoding - Microsoft Malware
There aren't any examples of using a neural network to model Microsoft Malware, so I thought I'd post one. Also in this kernel, I show statistical one-hot-encoding where only boolean variables that are independently statistically significant are created.
Load Train.csv
End of explanation
import math
# CHECK FOR NAN
def nan_check(x):
if isinstance(x,float):
if math.isnan(x):
return True
return False
# FREQUENCY ENCODING
def encode_FE(df,col,verbose=1):
d = df[col].value_counts(dropna=False)
n = col+"_FE"
df[n] = df[col].map(d)/d.max()
if verbose==1:
print('FE encoded',col)
return [n]
# ONE-HOT-ENCODE ALL CATEGORY VALUES THAT COMPRISE MORE THAN
# "FILTER" PERCENT OF TOTAL DATA AND HAS SIGNIFICANCE GREATER THAN "ZSCORE"
def encode_OHE(df, col, filter, zscore, tar='HasDetections', m=0.5, verbose=1):
cv = df[col].value_counts(dropna=False)
cvd = cv.to_dict()
vals = len(cv)
th = filter * len(df)
sd = zscore * 0.5/ math.sqrt(th)
#print(sd)
n = []; ct = 0; d = {}
for x in cv.index:
try:
if cv[x]<th: break
sd = zscore * 0.5/ math.sqrt(cv[x])
except:
if cvd[x]<th: break
sd = zscore * 0.5/ math.sqrt(cvd[x])
if nan_check(x): r = df[df[col].isna()][tar].mean()
else: r = df[df[col]==x][tar].mean()
if abs(r-m)>sd:
nm = col+'_BE_'+str(x)
if nan_check(x): df[nm] = (df[col].isna()).astype('int8')
else: df[nm] = (df[col]==x).astype('int8')
n.append(nm)
d[x] = 1
ct += 1
if (ct+1)>=vals: break
if verbose==1:
print('OHE encoded',col,'- Created',len(d),'booleans')
return [n,d]
# ONE-HOT-ENCODING from dictionary
def encode_OHE_test(df,col,dt):
n = []
for x in dt:
n += encode_BE(df,col,x)
return n
# BOOLEAN ENCODING
def encode_BE(df,col,val):
n = col+"_BE_"+str(val)
if nan_check(val):
df[n] = df[col].isna()
else:
df[n] = df[col]==val
df[n] = df[n].astype('int8')
return [n]
cols = []; dd = []
# ENCODE NEW
for x in FE:
cols += encode_FE(df_train,x)
for x in OHE:
tmp = encode_OHE(df_train,x,0.005,5)
cols += tmp[0]; dd.append(tmp[1])
print('Encoded',len(cols),'new variables')
# REMOVE OLD
for x in FE+OHE:
del df_train[x]
print('Removed original',len(FE+OHE),'variables')
x = gc.collect()
Explanation: Statistically Encode Variables
All four variables in the Python variable list FE will get frequency encoded and all thirty-nine variables in list OHE will get statistically one-hot-encoded. In total, forty-three variables are imported from the training csv while thirty-nine were ignored.
Among all our category variables, there are a combined 211,562 values! So we can't one-hot-encode all. (Note that this is without Census_OEMModelIdentifier's 175,366 or Census_SystemVolumeTotalCapacity's 536,849) We will use a trick from statistics. First we'll assume we have a random sample. (Which we don't actually have, but let's pretend.) Then for each value, we will test the following hypotheses
$$H_0: \text{Prob(HasDetections=1 given value is present)} = 0.5 $$
$$H_A: \text{Prob(HasDetections=1 given value is present)} \ne 0.5$$
The test statistic z-score equals \( \hat{p} \), the observed HasDetections rate given value is present, minus 0.5 divided by the standard deviation of \( \hat{p} \). The Central Limit Theorem tells us
$$\text{z-score} = \frac{\hat{p}-0.5}{SD(\hat{p})} = 2 (\hat{p} - 0.5)\sqrt{n} $$
where \(n\) is the number of occurences of the value. If the absolute value of \(z\) is greater than 2.0, we are 95% confident that Prob(HasDetections=1 given value is present) is not equal 0.5 and we will include a boolean for this value in our model. Actually, we'll use a \(z\) threshold of 5.0 and require \( 10^{-7}n>0.005 \). This adds 350 new boolean variables (instead of naively one-hot-encoding 211,562!).
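As a quick sanity check of that rule, here is a hypothetical calculation (both numbers below are made up):
```python
# Hypothetical value counts plugged into the z-score rule described above
import math

n = 12000       # assumed number of rows containing this category value
p_hat = 0.53    # assumed observed HasDetections rate for that value
z = 2 * (p_hat - 0.5) * math.sqrt(n)
# keep the boolean if it is significant (|z| > 5) and covers > 0.5% of the 2M sample
keep = (abs(z) > 5) and (n > 0.005 * 2000000)
print(z, keep)  # roughly 6.6, True
```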
## Example - Census_FirmwareManufacturerIdentifier
In the plots below, the dotted lines use the right y-axis and solid lines/bars use the left. The top plot below shows 20 values of variable Census_FirmwareManufacturerIdentifier. Notice that I consider NAN a value. Each of these values contains over 0.5% of the data. And all the variables together contain 97% of the data. Value=93 has a HasDetections rate of 52.5% while value=803 has a HasDetections rate of 35.4%. Their z-scores are \(22.2 = 2\times(0.5253-0.5)\times\sqrt{192481} \text{ }\) and \(-71.3 = 2\times(0.3535-0.5)\times\sqrt{59145}\text{ }\) respectively! The probability that value=93 and value=803 have a HasDetections rate of 50% and what we are observing is due to chance is close to nothing. Additionally from the bottom plot, you see that these two values have consistently been high and low throughout all of the year 2018. We can trust that this trend will continue into the test set's October and November computers.
Python Code
To see the Python encoding functions, click 'see code' to the right.
End of explanation
from keras import callbacks
from sklearn.metrics import roc_auc_score
class printAUC(callbacks.Callback):
def __init__(self, X_train, y_train):
super(printAUC, self).__init__()
self.bestAUC = 0
self.X_train = X_train
self.y_train = y_train
def on_epoch_end(self, epoch, logs={}):
pred = self.model.predict(np.array(self.X_train))
auc = roc_auc_score(self.y_train, pred)
print("Train AUC: " + str(auc))
pred = self.model.predict(self.validation_data[0])
auc = roc_auc_score(self.validation_data[1], pred)
print ("Validation AUC: " + str(auc))
if (self.bestAUC < auc) :
self.bestAUC = auc
self.model.save("bestNet.h5", overwrite=True)
return
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation
from keras.callbacks import LearningRateScheduler
from keras.optimizers import Adam
#SPLIT TRAIN AND VALIDATION SET
X_train, X_val, Y_train, Y_val = train_test_split(
df_train[cols], df_train['HasDetections'], test_size = 0.5)
# BUILD MODEL
model = Sequential()
model.add(Dense(100,input_dim=len(cols)))
model.add(Dropout(0.4))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(100))
model.add(Dropout(0.4))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=Adam(lr=0.01), loss="binary_crossentropy", metrics=["accuracy"])
annealer = LearningRateScheduler(lambda x: 1e-2 * 0.95 ** x)
# TRAIN MODEL
model.fit(X_train,Y_train, batch_size=32, epochs = 20, callbacks=[annealer,
printAUC(X_train, Y_train)], validation_data = (X_val,Y_val), verbose=2)
Explanation: Example - Census_OEMModelIdentifier
Below is variable Census_OEMModelIdentifier. Observe how NAN is treated like a category value and that it has consistently had the lowest HasDetections rate all of year 2018. Also notice how value=245824 has consistently been high. Finally note that value=188345 and 248045 are high and low respectively in August and September but earlier in the year their positions were reversed! What will their positions be in the test set's October and November computers??
Build and Train Network
We will build a 3 layer fully connected network with 100 neurons on each hidden layer. We will use ReLU activation, Batch Normalization, 40% Dropout, Adam Optimizer, and Decaying Learning Rate. Unfortunately we don't have an AUC loss function, so we will use Cross Entropy instead. After each epoch, we will call a custom Keras callback to display the current AUC and continually save the best model.
End of explanation
del df_train
del X_train, X_val, Y_train, Y_val
x = gc.collect()
# LOAD BEST SAVED NET
from keras.models import load_model
model = load_model('bestNet.h5')
pred = np.zeros((7853253,1))
id = 1
chunksize = 2000000
for df_test in pd.read_csv('../input/test.csv',
chunksize = chunksize, usecols=list(dtypes.keys())[0:-1], dtype=dtypes):
print ('Loaded',len(df_test),'rows of TEST.CSV!')
# ENCODE TEST
cols = []
for x in FE:
cols += encode_FE(df_test,x,verbose=0)
for x in range(len(OHE)):
cols += encode_OHE_test(df_test,OHE[x],dd[x])
# PREDICT TEST
end = (id)*chunksize
if end>7853253: end = 7853253
pred[(id-1)*chunksize:end] = model.predict_proba(df_test[cols])
print(' encoded and predicted part',id)
id += 1
# SUBMIT TO KAGGLE
df_test = pd.read_csv('../input/test.csv', usecols=['MachineIdentifier'])
df_test['HasDetections'] = pred
df_test.to_csv('submission.csv', index=False)
Explanation: Predict Test and Submit to Kaggle
Even after deleting the training data, our network still needs a lot of our available RAM, so we'll need to load in test.csv by chunks and predict by chunks. Click 'see code' button to see how this is done.
End of explanation |
11,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Libraries
Step1: EMUstack
Step2: Spectra plot
Step3: Triangulation field plot | Python Code:
# libraries
import numpy as np
import sys
sys.path.append("../backend/")
%matplotlib inline
import matplotlib.pylab as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import objects
import materials
# import plotting
from stack import *
#parallel
import concurrent.futures
%matplotlib inline
Explanation: Libraries
End of explanation
# light parameters
wl_1 = 300
wl_2 = 800
n_wl = 128
# Set up light objects
wavelengths = np.linspace(wl_1,wl_2, n_wl)
light_list = [objects.Light(wl, max_order_PWs = 2,theta=0.0,phi=0.0) for wl in wavelengths]
# nanodisk array r and pitch in nm
nd_r = 100
nd_p = 600
nd_h = 100
# defining the layers: period must be consistent throughout simulation!!!
NHs = objects.NanoStruct('2D_array', nd_p, 2.0*nd_r, height_nm = nd_h,
inclusion_a = materials.Au, background = materials.Air, loss = True,
inc_shape='circle',
plotting_fields=False,plot_real=1,
make_mesh_now = True, force_mesh = True, lc_bkg = 0.12, lc2= 5.0, lc3= 3.0,plt_msh=True)#lc_bkg = 0.08, lc2= 5.0)
superstrate = objects.ThinFilm(period = nd_p, height_nm = 'semi_inf',
material = materials.Air, loss = False)
substrate = objects.ThinFilm(period = nd_p, height_nm = 'semi_inf',
material = materials.Air, loss = False)
# EMUstack Function
def simulate_stack(light):
# evaluate each layer individually
sim_NHs = NHs.calc_modes(light)
sim_superstrate = superstrate.calc_modes(light)
sim_substrate = substrate.calc_modes(light)
# build the stack solution
stackSub = Stack((sim_substrate, sim_NHs, sim_superstrate))
stackSub.calc_scat(pol = 'TM')
return stackSub
%%time
# computation
with concurrent.futures.ProcessPoolExecutor() as executor:
stacks_list = list(executor.map(simulate_stack, light_list))
Explanation: EMUstack: Au Nanodisk array with interpolators
We show the potential of interpolators with a simple calculation. We calculate the Transmission and Reflection spectra of an Au Nanodisk Array with pitch nd_p=600 nm, disk radius nd_r=100 nm and height nd_h=100 nm. Then we use the interpolators to plot all the field components at the system resonance within the notebook.
End of explanation
# spectra
a_list = []
t_list = []
r_list = []
for stack in stacks_list:
a_list.extend(stack.a_list)
t_list.extend(stack.t_list)
r_list.extend(stack.r_list)
layers_steps = len(stacks_list[0].layers) - 1
a_tot = []
t_tot = []
r_tot = []
for i in range(len(wavelengths)):
a_tot.append(float(a_list[layers_steps-1+(i*layers_steps)]))
t_tot.append(float(t_list[layers_steps-1+(i*layers_steps)]))
r_tot.append(float(r_list[i]))
# T and R spectra
plt.figure(figsize=(15,10))
plt.plot(wavelengths,np.array(r_tot),'k',
wavelengths,np.array(t_tot),'b',linewidth = 2.0);
f_size=25;
# labels
plt.xlabel("Wavelength (nm)",fontsize = f_size);
plt.ylabel("T,R",fontsize = f_size);
# ticks
plt.xticks(fontsize=f_size-10);
plt.yticks(fontsize=f_size-10);
# legend
plt.legend( ["R","T"],fontsize = f_size, loc='center left',fancybox=True);
Explanation: Spectra plot
End of explanation
# triangular interpolation computation
ReEx,ImEx,ReEy,ImEy,ReEz,ImEz,AbsE = plotting.fields_interpolator_in_plane(stacks_list[np.array(r_tot).argmax()],lay_interest=1,z_value=0.1)
# field mapping
n_points=500
v_x=np.zeros(n_points**2)
v_y=np.zeros(n_points**2)
i=0
x_min=0.0;x_max=1.0
y_min=-1.0;y_max=0.0
for x in np.linspace(x_min,x_max,n_points):
for y in np.linspace(y_min,y_max,n_points):
v_x[i] = x
v_y[i] = y
i+=1
v_x = np.array(v_x)
v_y = np.array(v_y)
# interpolated fields
m_ReEx = ReEx(v_x,v_y).reshape(n_points,n_points)
m_ReEy = ReEy(v_x,v_y).reshape(n_points,n_points)
m_ReEz = ReEz(v_x,v_y).reshape(n_points,n_points)
m_ImEx = ImEx(v_x,v_y).reshape(n_points,n_points)
m_ImEy = ImEy(v_x,v_y).reshape(n_points,n_points)
m_ImEz = ImEz(v_x,v_y).reshape(n_points,n_points)
m_AbsE = AbsE(v_x,v_y).reshape(n_points,n_points)
v_plots = [m_ReEx,m_ReEy,m_ReEz,m_ImEx,m_ImEy,m_ImEz,m_AbsE]
v_labels = ["ReEx","ReEy","ReEz","ImEx","ImEy","ImEz","AbsE"]
# field plots
plt.figure(figsize=(13,13))
for i_p,plot in enumerate(v_plots):
ax = plt.subplot(3,3,i_p+1)
im = plt.imshow(plot.T,cmap='jet');
# no ticks
plt.xticks([])
plt.yticks([])
# titles
plt.title(v_labels[i_p],fontsize=f_size)
# colorbar
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
cbar = plt.colorbar(im, cax=cax)
cbar.ax.tick_params(labelsize=f_size-10)
plt.tight_layout(1)
Explanation: Triangulation field plot
End of explanation |
11,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch. 4 - A deeper network & Overfitting
In this chapter we will build a neural network with a hidden layer to fit a more complex function. First, what makes a neural network 'deep'? The number of layers. In a neural network we can stack layers on top of each other. The more layers, the deeper the network. In the last chapters, we built a 1 layer neural network (the input layer does not count), which is the same as a logistic regressor. In this chapter we will add a so called 'hidden layer'. A hidden layer is a layer that sits between the input and the output.
But before we do anything, let's quickly load some libraries needed for the code below
Step1: Forward propagation through a 2 layer network
Propagating through a 2 layer network is very similar to propagating through a 1 layer network. In the first step we multiply the input with the weights of the first layer
Step2: Backward propagation through a 2 layer neural network
Backward propagation through a 2 layer neural network is also very similar to a 1 layer neural net. First we calculate the derivative of the loss function
Step3: Forward backward overview
To get a better overview what is happening in our new, deeper network, let's look at an overview. In this overview, I have replaced $tanh(x)$ with $g(x)$ and the derivative $tanh'(x)$ with $g'(x)$. $g(x)$ is a general placeholder for an activation in this overview. It could be tanh, but it could also be ReLU.
Revisiting the problem
As in the last chapter, we would like to train a classifier that approximates a more complex function. Let's look at the problem again
Step4: These two moons are to be seperated. To do so we first need to define some more helper functions to complete our model
Step5: Next to the loss we will keep track of a new metric in this chapter. The accuracy. That is the percentage of examples our model classified correctly. Or more formally
Step6: Parameter initialization works exactly like it did for the smaller logistic regressor. Pay attention to the matrix sizes of the layers.
Step7: Updating the parameters works the same way as it did with the smaller regressor.
Step8: After we have predefined all functions, the training routine of a larger network looks the same as the routine of a smaller network.
Step9: Solving the problem with a bigger network
Now that we have created a bigger network, we will train it. Note how our bigger network gained a new hyper parameter
Step10: The signal and the noise
As you can see, our bigger network has no problems fitting this more complex function. It misses only a few dots. This is normal and not problematic, because our dataset includes noise. Noise are datapoints with wrong values that do not correspond to the actual distribution. Much like on the radio, where some sounds are part of the program (the signal) and some sounds are just errors in the transmission (the noise). In real live situations we encounter noise all the time. While we have to make sure that our models incorporate all the complexity of the signal, we want to avoid fitting it to noise. Let's see what happens if we turn up the noise on our dataset
Step11: This already looks significantly more messy, but the underlying distribution is still the same. The data still shows two moon shapes, although they are now much harder to see. We will now fit our model to this more noisy data.
Step12: As you can see, out model has a significantly harder time fitting this noisy distribution. In fact it seems like it does not find the actual descision boundary at all, but tries to approximate it in a way that fits as many training set points as possible. In the next experiment we will increase the hidden layer size to model a more complex function to this noisy dataset.
Step13: In the output from training you can see that the bigger model archieves a better accuracy and lower loss on the training set. However if you look at the decision boundary, you can see that it carves in many wired shapes just to accomodate a few dots. This kind of behavior is called overfitting. The model does too well on the traings set and failes to find generalizable findings. We can show this very easily by generating a second dataset. This dataset is generated with the exact same method as the first and follows the same general distribution. We will use our model to make a prediction for this new dataset it has not seen in training. This new dataset will be called the dev set, as we use it not to train but to tweak hyper parameters like the layer size on it. | Python Code:
# Package imports
# Matplotlib is a matlab like plotting library
import matplotlib
import matplotlib.pyplot as plt
# Numpy handles matrix operations
import numpy as np
# SciKitLearn is a useful machine learning utilities library
import sklearn
# The sklearn dataset module helps generating datasets
import sklearn.datasets
import sklearn.linear_model
# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
# Just some helper functions we moved over from the last chapter
# sigmoid function
def sigmoid(x):
'''
Calculates the sigmoid activation of a given input x
See: https://en.wikipedia.org/wiki/Sigmoid_function
'''
return 1/(1+np.exp(-x))
#Log Loss function
def log_loss(y,y_hat):
'''
Calculates the logistic loss between a prediction y_hat and the labels y
See: http://wiki.fast.ai/index.php/Log_Loss
We need to clip values that get too close to zero to avoid zeroing out.
Zeroing out is when a number gets so small that the computer replaces it with 0.
Therefore, we clip numbers to a minimum value.
'''
minval = 0.000000000001
N = y.shape[0]
l = -1/N * np.sum(y * np.log(y_hat.clip(min=minval)) + (1-y) * np.log((1-y_hat).clip(min=minval)))
return l
# Log loss derivative
def log_loss_derivative(y,y_hat):
'''
Calculates the gradient (derivative) of the log loss between point y and y_hat
See: https://stats.stackexchange.com/questions/219241/gradient-for-logistic-loss-function
'''
return (y_hat-y)
Explanation: Ch. 4 - A deeper network & Overfitting
In this chapter we will build a neural network with a hidden layer to fit a more complex function. First, what makes a neural network 'deep'? The number of layers. In a neural network we can stack layers on top of each other. The more layers, the deeper the network. In the last chapters, we built a 1 layer neural network (the input layer does not count), which is the same as a logistic regressor. In this chapter we will add a so called 'hidden layer'. A hidden layer is a layer that sits between the input and the output.
But before we do anything, let's quickly load some libraries needed for the code below
End of explanation
def forward_prop(model,a0):
'''
Forward propagates through the model, stores results in cache.
See: https://stats.stackexchange.com/questions/147954/neural-network-forward-propagation
A0 is the activation at layer zero, it is the same as X
'''
# Load parameters from model
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Linear step
z1 = a0.dot(W1) + b1
# First activation function
a1 = np.tanh(z1)
# Second linear step
z2 = a1.dot(W2) + b2
# Second activation function
a2 = sigmoid(z2)
cache = {'a0':a0,'z1':z1,'a1':a1,'z1':z1,'a2':a2}
return cache
Explanation: Forward propagation through a 2 layer network
Propagating through a 2 layer network is very similar to propagating through a 1 layer network. In the first step we multiply the input with the weights of the first layer:
$$z_1 = A_0.W_1 + b_1$$
To obtain the activation $A_1$ we then again pass $z_1$ through an activation function. This time it is a hyperbolic tangent or tanh function. tanh works a lot like sigmoid except that it can output negative values, too. There has been a lot of research on different activation functions for hidden layer. The most commonly used one today is ReLU, we will use tanh in this chapter since it is still a very useful function and makes explaining some broader concepts easier.
The tanh function is
$$tanh(x) = \frac{e^{2x}-1}{e^{2x}+1}$$
So
$$A_1 = tanh(z_1)$$
This activation is then the basis for the calculation of $z_2$
$$z_2 = A_1.W_2 + b_2$$
The output $A_2$ is then again calculated with the sigmoid function
$$A_2 = \sigma(z_2)$$
In python code it looks like this:
End of explanation
def tanh_derivative(x):
'''
Calculates the derivative of the tanh function that is used as the first activation function
See: https://socratic.org/questions/what-is-the-derivative-of-tanh-x
'''
return (1 - np.power(x, 2))
def backward_prop(model,cache,y):
'''
Backward propagates through the model to calculate gradients.
Stores gradients in grads dictionary.
See: https://en.wikipedia.org/wiki/Backpropagation
'''
# Load parameters from model
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Load forward propagation results
a0,a1, a2 = cache['a0'],cache['a1'],cache['a2']
# Backpropagation
# Calculate loss derivative with respect to output
dz2 = log_loss_derivative(y=y,y_hat=a2)
# Calculate loss derivative with respect to second layer weights
dW2 = (a1.T).dot(dz2)
# Calculate loss derivative with respect to second layer bias
db2 = np.sum(dz2, axis=0, keepdims=True)
# Calculate loss derivative with respect to first layer
dz1 = dz2.dot(W2.T) * tanh_derivative(a1)
# Calculate loss derivative with respect to first layer weights
dW1 = np.dot(a0.T, dz1)
# Calculate loss derivative with respect to first layer bias
db1 = np.sum(dz1, axis=0)
# Store gradients
grads = {'dW2':dW2,'db2':db2,'dW1':dW1,'db1':db1}
return grads
Explanation: Backward propagation through a 2 layer neural network
Backward propagation through a 2 layer neural network is also very similar to a 1 layer neural net. First we calculate the derivative of the loss function:
$$dz_2 = (A_2 - y)$$
Then we calculate the derivative of the loss function with respect to the weights $W_2$ and bias $b_2$
$$dW_2 = \frac{1}{m}A_1^T.dz_2$$
$$db_2 = \frac{1}{m}\sum dz_2$$
Now comes the tricky part. The derivative $dz_1$ is the derivative of the activation function multipled with the dot product of $dz_2$ and $W_2$.
$$dz_1 = dz_2.W_2^T * tanh'(A1)$$
This is a result of the chain rule being applied over the computational graph. $dW_1$ and $db_1$ are then again calculated as usual.
$$dW_1 = \frac{1}{m}A_0^T.dz_1$$
$$db_1 = \frac{1}{m}\sum dz_1$$
The derivative of the tanh function is
$$tanh'(x) = 1 - x^2$$
While we simply used numpys built in tanh function in the forward pass, we now implemented our own function for the derivative. So here is the backpropagation in python:
End of explanation
# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates the contour plot below.
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole gid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y.flatten(), cmap=plt.cm.Spectral)
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.15)
y = y.reshape(200,1)
plt.scatter(X[:,0], X[:,1], s=40, c=y.flatten(), cmap=plt.cm.Spectral)
Explanation: Forward backward overview
To get a better overview of what is happening in our new, deeper network, let's look at an overview. In this overview, I have replaced $tanh(x)$ with $g(x)$ and the derivative $tanh'(x)$ with $g'(x)$. $g(x)$ is a general placeholder for an activation in this overview. It could be tanh, but it could also be ReLU.
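For example, if we swapped tanh for ReLU, the $g(x)$ / $g'(x)$ pair would look like the sketch below. We won't use it in this chapter (we stick with tanh); it is just to show what another choice of $g$ would mean:
```python
# Sketch of ReLU and its derivative as an alternative g(x)/g'(x) pair
import numpy as np

def relu(z):
    return np.maximum(0, z)

def relu_derivative(z):
    return (z > 0).astype(float)
```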
Revisiting the problem
As in the last chapter, we would like to train a classifier that approximates a more complex function. Let's look at the problem again:
End of explanation
def predict(model, x):
'''
Predicts y_hat as 1 or 0 for a given input X
'''
# Do forward pass
c = forward_prop(model,x)
#get y_hat
y_hat = c['a2']
# Turn values to either 1 or 0
y_hat[y_hat > 0.5] = 1
y_hat[y_hat < 0.5] = 0
return y_hat
Explanation: These two moons are to be separated. To do so we first need to define some more helper functions to complete our model
End of explanation
def calc_accuracy(model,x,y):
'''
Calculates the accuracy of the model given an input x and a correct output y.
The accuracy is the percentage of examples our model classified correctly
'''
# Get total number of examples
m = y.shape[0]
# Do a prediction with the model
pred = predict(model,x)
# Ensure prediction and truth vector y have the same shape
pred = pred.reshape(y.shape)
# Calculate the number of wrong examples
error = np.sum(np.abs(pred-y))
# Calculate accuracy
return (m - error)/m * 100
Explanation: Next to the loss we will keep track of a new metric in this chapter. The accuracy. That is the percentage of examples our model classified correctly. Or more formally:
$$Accuracy = \frac{\text{Correct Predictions}}{\text{Total Number of Examples}}$$
There are different methods to evaluate a neural network but accuracy is the most common one. In python code it is calculated like this:
End of explanation
def initialize_parameters(nn_input_dim,nn_hdim,nn_output_dim):
'''
Initializes weights with random number between -1 and 1
Initializes bias with 0
Assigns weights and parameters to model
'''
# First layer weights
W1 = 2 *np.random.randn(nn_input_dim, nn_hdim) - 1
# First layer bias
b1 = np.zeros((1, nn_hdim))
# Second layer weights
W2 = 2 * np.random.randn(nn_hdim, nn_output_dim) - 1
# Second layer bias
b2 = np.zeros((1, nn_output_dim))
# Package and return model
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
return model
Explanation: Parameter initialization works exactly like it did for the smaller logistic regressor. Pay attention to the matrix sizes of the layers.
End of explanation
def update_parameters(model,grads,learning_rate):
'''
    Updates parameters according to the gradient descent algorithm
See: https://en.wikipedia.org/wiki/Gradient_descent
'''
# Load parameters
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Update parameters
W1 -= learning_rate * grads['dW1']
b1 -= learning_rate * grads['db1']
W2 -= learning_rate * grads['dW2']
b2 -= learning_rate * grads['db2']
# Store and return parameters
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
return model
Explanation: Updating the parameters works the same way as it did with the smaller regressor.
End of explanation
def train(model,X_,y_,learning_rate, num_passes=20000, print_loss=False):
# Gradient descent. For each batch...
for i in range(0, num_passes):
# Forward propagation
cache = forward_prop(model,X_)
#a1, probs = cache['a1'],cache['a2']
# Backpropagation
grads = backward_prop(model,cache,y)
# Gradient descent parameter update
# Assign new parameters to the model
model = update_parameters(model=model,grads=grads,learning_rate=learning_rate)
        # Print loss & accuracy every 100 iterations
if print_loss and i % 100 == 0:
y_hat = cache['a2']
print('Loss after iteration',i,':',log_loss(y,y_hat))
print('Accuracy after iteration',i,':',calc_accuracy(model,X_,y_),'%')
return model
Explanation: After we have predefined all functions, the training routine of a larger network looks the same as the routine of a smaller network.
End of explanation
# Hyper parameters
hiden_layer_size = 3
# I picked this value because it showed good results in my experiments
learning_rate = 0.01
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
# This is what we return at the end
model = initialize_parameters(nn_input_dim=2, nn_hdim= hiden_layer_size, nn_output_dim= 1)
model = train(model,X,y,learning_rate=learning_rate,num_passes=1000,print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model,x))
plt.title("Decision Boundary for hidden layer size 3")
Explanation: Solving the problem with a bigger network
Now that we have created a bigger network, we will train it. Note how our bigger network gained a new hyper parameter: the size of the hidden layer is not given by the input or the output as the sizes of the other layers are. We can choose it as we like. We will experiment with this new hyper parameter later; for now we will use a hidden layer size of 3
End of explanation
# Now with more noise
# Generate a dataset and plot it
np.random.seed(0)
# The data generator alows us to regulate the noise level
X, y = sklearn.datasets.make_moons(200, noise=0.3)
y = y.reshape(200,1)
plt.scatter(X[:,0], X[:,1], s=40, c=y.flatten(), cmap=plt.cm.Spectral)
Explanation: The signal and the noise
As you can see, our bigger network has no problems fitting this more complex function. It misses only a few dots. This is normal and not problematic, because our dataset includes noise. Noise consists of datapoints with wrong values that do not correspond to the actual distribution. Much like on the radio, where some sounds are part of the program (the signal) and some sounds are just errors in the transmission (the noise). In real-life situations we encounter noise all the time. While we have to make sure that our models incorporate all the complexity of the signal, we want to avoid fitting it to noise. Let's see what happens if we turn up the noise on our dataset
End of explanation
# Hyper parameters
hiden_layer_size = 3
# I picked this value because it showed good results in my experiments
learning_rate = 0.01
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
# This is what we return at the end
model = initialize_parameters(nn_input_dim=2, nn_hdim= hiden_layer_size, nn_output_dim= 1)
model = train(model,X,y,learning_rate=learning_rate,num_passes=1000,print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model,x))
plt.title("Decision Boundary for hidden layer size 3")
Explanation: This already looks significantly messier, but the underlying distribution is still the same. The data still shows two moon shapes, although they are now much harder to see. We will now fit our model to this noisier data.
End of explanation
# Hyper parameters
hiden_layer_size = 500
# I picked this value because it showed good results in my experiments
learning_rate = 0.01
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
# This is what we return at the end
model = initialize_parameters(nn_input_dim=2, nn_hdim= hiden_layer_size, nn_output_dim= 1)
model = train(model,X,y,learning_rate=learning_rate,num_passes=1000,print_loss=True)
# Plot the decision boundary
# This might take a little while as our model is very big now
plot_decision_boundary(lambda x: predict(model,x))
plt.title("Decision Boundary for hidden layer size 500")
Explanation: As you can see, our model has a significantly harder time fitting this noisy distribution. In fact it seems like it does not find the actual decision boundary at all, but tries to approximate it in a way that fits as many training set points as possible. In the next experiment we will increase the hidden layer size to fit a more complex function to this noisy dataset.
End of explanation
# Generate a dev dataset and plot it
np.random.seed(1)
# The data generator alows us to regulate the noise level
X_dev, y_dev = sklearn.datasets.make_moons(200, noise=0.5)
y_dev = y_dev.reshape(200,1)
plt.scatter(X_dev[:,0], X_dev[:,1], s=40, c=y_dev.flatten(), cmap=plt.cm.Spectral)
calc_accuracy(model=model,x=X_dev,y=y_dev)
Explanation: In the output from training you can see that the bigger model achieves a better accuracy and lower loss on the training set. However if you look at the decision boundary, you can see that it carves out many weird shapes just to accommodate a few dots. This kind of behavior is called overfitting. The model does too well on the training set and fails to find generalizable findings. We can show this very easily by generating a second dataset. This dataset is generated with the exact same method as the first and follows the same general distribution. We will use our model to make a prediction for this new dataset it has not seen in training. This new dataset will be called the dev set, as we use it not to train but to tweak hyper parameters like the layer size on it.
End of explanation |
11,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solution
Step1: Print out the wind_speed and sst variables to check yourself.
Step2: 2. Create a function to calculate the heat flux
Wind speed and temperature should be the required input arguments
Make the constants ($\rho$, $c_p$, etc) to be keyword (aka optional) arguments with default values
Use return statement to return the output
You've already forgotten it, but the formula is $Q = \rho c_p C_H (u_{atm} - u_{sea}) (T_{sea} - T_{atm}) $
Step3: 3. In a loop, calculate the heat flux with wind speed and temperature as inputs
First, create an empty list heat_flux of $Q$ values
Then, write a loop
Every iteration
Step4: Print out heat_flux variable to check that the values are sensible (no pun intended).
Step5: 4. Open a new text file for writing
Now, you need to open a file. Explore the built-in function open()
Step6: The recommended way of writing/reading files is using the context statement with
Step7: Use a text editor of your choice (or Jupyter!) to check the contents of the file. | Python Code:
## Your code
wind_speed = list(range(0,20,2))
sst = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
Explanation: Solution: build a simple program (1 h)
By doing this exercise you will apply Python basics that we learned today: loops, lists, functions, strings. In addition, you will try to write data to a text file.
Synopsis
Create data of wind and sea surface temperature within some range
Functionalise the equation for the heat flux
In a loop calculate the heat flux with wind speed and temperature as inputs
Open a new text file for writing
Loop over the function output and write the data to the file
Hint: use string formatting to specify number of decimal places
Equation
The bulk formula for the sea-to-air heat flux is
$Q = \rho c_p C_H (u_{atm} - u_{sea}) (T_{sea} - T_{atm}) $
where
* $Q$ is the sensible heat flux,
* $\rho$ = 1.2 $kg~m^{โ3}$ is the density of air at sea level,
* $c_p$ = 1004.5 $J kg^{-1} K^{-1} $ is the specific heat capacity,
* $C_H$ = 1.2$\times 10^{-3}$ is the exchange coefficient for heat
* $u_{sea}$ = 0.5 $m~s^{โ1}$ is the ocean surface velocity,
* $u_{atm}$ is the wind speed at 10 m above the sea surface,
* $T_{sea}$ is the sea surface temperature (SST), and
* $T_{atm}$ = 17 $^\circ C$ is the air temperature at 10 m above the sea surface.
1. Create data of within the following range
wind speed: 0-20 $m~s^{โ1}$, every 2 $m~s^{โ1}$
SST: 5-15 $^\circ C$, every 1 $^\circ C$
Hint: use range() function and wrap it in a list() function.
If you want to create lists of arbitrary values (e.g. non-integer), use my_list = [ ] notation.
End of explanation
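If you need non-integer steps, note that range() only works with integers; a list comprehension is one simple alternative, shown here purely as an illustration.
```python
sst_fine = [5 + 0.5 * i for i in range(21)]  # 5.0, 5.5, ..., 15.0
```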
print(wind_speed)
sst
Explanation: Print out the wind_speed and sst variables to check yourself.
End of explanation
def calc_heat_flux(u_atm, t_sea, rho=1.2, c_p=1004.5, c_h=1.2e-3, u_sea=0.5, t_atm=17):
q = rho * c_p * c_h * (u_atm - u_sea) * (t_sea - t_atm)
return q
Explanation: 2. Create a function to calculate the heat flux
Wind speed and temperature should be the required input arguments
Make the constants ($\rho$, $c_p$, etc) to be keyword (aka optional) arguments with default values
Use return statement to return the output
You've already forgotten it, but the formula is $Q = \rho c_p C_H (u_{atm} - u_{sea}) (T_{sea} - T_{atm}) $
End of explanation
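A quick sanity check, just as an illustration: with a 10 m/s wind over 10 °C water and 17 °C air, the sea is colder than the air, so the sensible heat flux should come out negative.
```python
print(calc_heat_flux(10, 10))  # expect a negative value, since T_sea < T_atm
```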
heat_flux = []
for u, t in zip(wind_speed, sst):
q = calc_heat_flux(u, t)
heat_flux.append(q)
Explanation: 3. In a loop, calculate the heat flux with wind speed and temperature as inputs
First, create an empty list heat_flux of $Q$ values
Then, write a loop
Every iteration: after $Q$ is computed, append it to the heat_flux list
End of explanation
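For reference, the same computation can be written as a single list comprehension; this is purely a style choice and gives the same result as the loop above.
```python
heat_flux_alt = [calc_heat_flux(u, t) for u, t in zip(wind_speed, sst)]
```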
heat_flux
Explanation: Print out heat_flux variable to check that the values are sensible (no pun intended).
End of explanation
# open?
Explanation: 4. Open a new text file for writing
Now, you need to open a file. Explore the built-in function open():
End of explanation
# with open('heat_flux_data.txt', 'w') as f:
# for h in heat_flux:
# f.write('{:3.1f}\n'.format(h))
Explanation: The recommended way of writing/reading files is to use the with context-manager statement:
```python
example
with open('super_descriptive_file_name', mode='r') as your_file:
your_file.read()
```
Some commonly used I/O modes:
mode='w' means that we opened file for writing. This mode overwrites any existing data.
mode='r' - open a file for reading
mode='a' - open a file for appending data
mode='x' - open for exclusive creation, failing if the file already exists
mode='b' - binary mode
5. Loop over the function output and write the data to the file
Open a file named heat_flux_data.txt for writing
Use with statement
Inside the with code block, write a loop to iterate through the heat_flux values and write each of them on a new line
Instead of read() as in the example above, use write() method
Note: write() method needs string type input
You can convert numeric values to string type using str() function or, even better, format() method
Add "\n" character to the string to indicate a line break
End of explanation
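If you prefer to inspect the file from Python rather than the shell command below, here is a minimal sketch (left commented out, matching the commented-out writer above):
```python
# with open('heat_flux_data.txt', mode='r') as f:
#     print(f.read())
```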
# !cat heat_flux_data.txt
Explanation: Use a text editor of your choice (or Jupyter!) to check the contents of the file.
End of explanation |
11,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Lesson
Step2: Project 1
Step3: Transforming Text into Numbers
Step4: Project 2
Step5: Project 3
Step6: Training the network
Step7: Run | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory
End of explanation
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
Explanation: Project 1: Quick Theory Validation
End of explanation
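As a quick sanity check on the ratios above (a small aside; the exact numbers depend on the data), a clearly positive and a clearly negative word should land on opposite sides of zero:
```python
print(pos_neg_ratios['excellent'])  # expected to be well above 0
print(pos_neg_ratios['terrible'])   # expected to be well below 0
```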
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: Transforming Text into Numbers
End of explanation
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
Explanation: Project 2: Creating the Input/Output Data
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
class NeuralNetwork(object):
def __init__(self, reviews, hidden_nodes, output_nodes, learning_rate):
self.pre_process_data(reviews)
# Set number of nodes in input, hidden and output layers.
self.input_nodes = self.layer_0.shape[1] # one input unit per word in the vocabulary
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
#self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, inputs_list, targets_list):
self.update_input_layer(inputs_list)
# Convert inputs list to 2d array
inputs = np.array(self.layer_0, ndmin=2).T # column vector of word counts, shape (vocab_size, 1)
targets = np.array(self.get_target_for_label(targets_list), ndmin=2)
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer
hidden_outputs = hidden_inputs # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# Backpropagated error (the hidden layer is linear, so its gradient is 1)
hidden_errors = np.dot(output_errors.T, self.weights_hidden_to_output)
hidden_grad = 1 # hidden layer gradients
hidden_error_term = hidden_grad * hidden_errors
# Update the weights; the outer products keep the update shapes aligned with the weight matrices
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T) # shape (output_nodes, hidden_nodes)
self.weights_input_to_hidden += self.lr * np.dot(hidden_error_term.T, inputs.T) # shape (hidden_nodes, input_nodes)
def run(self, inputs_list):
# Run a forward pass through the network
self.update_input_layer(inputs_list) # inputs_list is a raw review string, as in train()
inputs = np.array(self.layer_0, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = hidden_inputs # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def pre_process_data(self, reviews):
total_counts=Counter()
for i in range(len(reviews)):
for word in reviews[i].split(" "):
total_counts[word] += 1
self.vocab = set(total_counts.keys())
vocab_size = len(self.vocab)
self.word2index = {}
for i,word in enumerate(self.vocab):
self.word2index[word] = i
self.layer_0 = np.zeros((1,vocab_size))
list(self.layer_0)
def update_input_layer(self, review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self, label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Project 3: Building a Neural Network
Start with your neural network from the last chapter
3 layer neural network
no non-linearity in hidden layer
use our functions to create the training data
create a "pre_process_data" function to create vocabulary for our training data generating functions
modify "train" to train over the entire corpus
Where to Get Help if You Need it
Re-watch previous week's Udacity Lectures
Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 1500
learning_rate = 0.01
hidden_nodes = 8
output_nodes = 1
network = NeuralNetwork(reviews, hidden_nodes, output_nodes, learning_rate)
# Note: this notebook does not build a separate validation split, so only the training loss is tracked.
losses = {'train':[]}
for e in range(epochs):
squared_errors = []
for record, target in zip(reviews, labels):
network.train(record, target)
prediction = network.run(record)[0][0]
squared_errors.append((network.get_target_for_label(target) - prediction) ** 2)
# Printing out the training progress once per epoch
train_loss = np.mean(squared_errors)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5])
losses['train'].append(train_loss)
plt.plot(losses['train'], label='Training loss')
plt.legend()
plt.ylim(ymax=1)
Explanation: Training the network
End of explanation
print(network.run(reviews[0])) # raw network output for the first review
Explanation: Run
End of explanation |
11,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute cross-talk functions for LCMV beamformers
Visualise cross-talk functions at one vertex for LCMV beamformers computed
with different data covariance matrices, which affects their cross-talk
functions.
Step1: Compute LCMV filters with different data covariance matrices
Step2: Compute resolution matrices for the two LCMV beamformers
Step3: Visualise | Python Code:
# Author: Olaf Hauk <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.beamformer import make_lcmv, make_lcmv_resolution_matrix
from mne.minimum_norm import get_cross_talk
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects/'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif'
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
# Read raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# only pick good EEG/MEG sensors
raw.info['bads'] += ['EEG 053'] # bads + 1 more
picks = mne.pick_types(raw.info, meg=True, eeg=True, exclude='bads')
# Find events
events = mne.find_events(raw)
# event_id = {'aud/l': 1, 'aud/r': 2, 'vis/l': 3, 'vis/r': 4}
event_id = {'vis/l': 3, 'vis/r': 4}
tmin, tmax = -.2, .25 # epoch duration
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
picks=picks, baseline=(-.2, 0.), preload=True)
# covariance matrix for pre-stimulus interval
tmin, tmax = -.2, 0.
cov_pre = mne.compute_covariance(epochs, tmin=tmin, tmax=tmax,
method='empirical')
# covariance matrix for post-stimulus interval (around main evoked responses)
tmin, tmax = 0.05, .25
cov_post = mne.compute_covariance(epochs, tmin=tmin, tmax=tmax,
method='empirical')
# read forward solution
forward = mne.read_forward_solution(fname_fwd)
# use forward operator with fixed source orientations
forward = mne.convert_forward_solution(forward, surf_ori=True,
force_fixed=True)
# read noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# get valid measurement info
raw = raw.pick_types(meg=True, eeg=True, exclude='bads')
info = raw.info
# regularize noise covariance (we used 'empirical' above)
noise_cov = mne.cov.regularize(noise_cov, info, mag=0.1, grad=0.1,
eeg=0.1, rank='info')
Explanation: Compute cross-talk functions for LCMV beamformers
Visualise cross-talk functions at one vertex for LCMV beamformers computed
with different data covariance matrices, which affect their cross-talk
functions.
End of explanation
# compute LCMV beamformer filters for pre-stimulus interval
filters_pre = make_lcmv(info, forward, cov_pre, reg=0.05,
noise_cov=noise_cov,
pick_ori=None, rank=None,
weight_norm=None,
reduce_rank=False,
verbose=False)
# compute LCMV beamformer filters for post-stimulus interval
filters_post = make_lcmv(info, forward, cov_post, reg=0.05,
noise_cov=noise_cov,
pick_ori=None, rank=None,
weight_norm=None,
reduce_rank=False,
verbose=False)
Explanation: Compute LCMV filters with different data covariance matrices
End of explanation
rm_pre = make_lcmv_resolution_matrix(filters_pre, forward, info)
rm_post = make_lcmv_resolution_matrix(filters_post, forward, info)
# compute cross-talk functions (CTFs) for one target vertex
sources = [3000]
stc_pre = get_cross_talk(rm_pre, forward['src'], sources, norm=True)
stc_post = get_cross_talk(rm_post, forward['src'], sources, norm=True)
Explanation: Compute resolution matrices for the two LCMV beamformers
End of explanation
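Before plotting, it can be useful to quantify how similar the two cross-talk functions are. The sketch below is an illustrative aside and assumes each returned source estimate holds a single column for the one selected source:
```python
import numpy as np
ctf_pre = np.abs(stc_pre.data[:, 0])
ctf_post = np.abs(stc_post.data[:, 0])
print('CTF correlation (pre vs. post):', np.corrcoef(ctf_pre, ctf_post)[0, 1])
```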
vertno_lh = forward['src'][0]['vertno'] # vertex of selected source
verttrue = [vertno_lh[sources[0]]] # pick one vertex
brain_pre = stc_pre.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir,
figure=1, clim=dict(kind='value', lims=(0, .2, .4)))
brain_pre.add_text(0.1, 0.9, 'LCMV beamformer with pre-stimulus\ndata '
'covariance matrix', 'title', font_size=16)
brain_post = stc_post.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir,
figure=2, clim=dict(kind='value', lims=(0, .2, .4)))
brain_post.add_text(0.1, 0.9, 'LCMV beamformer with post-stimulus\ndata '
'covariance matrix', 'title', font_size=16)
# mark true source location for CTFs
brain_pre.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh',
color='green')
brain_post.add_foci(verttrue, coords_as_verts=True, scale_factor=1.,
hemi='lh', color='green')
Explanation: Visualise
End of explanation |
11,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming Language Correlation
This sample notebook demonstrates working with GitHub activity, which has been made possible via the publicly accessible GitHub Timeline BigQuery dataset via the BigQuery Sample Tables.
Here is the question that this notebook tackles
Step1: Understanding the GitHub Timeline
We're going to work with the GitHub Archive project data. It contains all GitHub events (commits, pushes, forks, watches, etc.) along with metadata about the events (e.g., user, time, place). The schema and sample data will help us to further understand this dataset.
Step2: The GitHub timeline is a large dataset. A quick lookup of table metadata gives us the row count.
Step3: With over 290 million events, it is important to be able to sample the data. The sample method allows us to sample tables or queries.
Step4: Querying the Data
The first step in our analysis to correlate languages is retrieving the appropriate slice of data.
We'll need to retrieve the list of PushEvents from the timeline. This is a large list of events, and there are several ways to get a more manageable resultset
Step5: Checking the Results
Step6: Analyzing the Data
The next step is to integrate the BigQuery SQL queries with the analysis capabilities provided by Python and pandas. The query defined earlier can easily be materialized into a pandas dataframe.
Step7: Great! We've successfully populated a pandas dataframe with our dataset. Let's dig into our dataset a further using the dataframe to see if our data makes sense.
Step8: Let's see who is the most polyglot user of the mix.
Step9: Reshaping the Data
So far, our results have multiple rows for each user -- specifically, one per language. The next step is to pivot that data, so that we have one row per user, and one column per language. The resulting matrix will be extremely sparse. We'll just fill in 0 (no pushes) for user/language pairs that have no data.
Pandas offers a built-in pivot() method, which helps here.
Step10: Now, compute the correlation for each pair of languages (again, built into the pandas library).
Step11: Visualizing the Results
The correlation table, above, contains the results, but isn't very telling. A plot will make the data speak a lot louder, and highlight the highly correlated languages, as well as the highly uncorrelated languages. | Python Code:
import google.datalab.bigquery as bq
import matplotlib.pyplot as plot
import numpy as np
import pandas as pd
Explanation: Programming Language Correlation
This sample notebook demonstrates working with GitHub activity, which has been made possible via the publicly accessible GitHub Timeline BigQuery dataset via the BigQuery Sample Tables.
Here is the question that this notebook tackles: "How likely are you to program in X, if you program in Y?" For example, this might be an input into an repository exploration/recommendation/search tool to personalize the results based on your own contributions.
It is based on an example published at http://datahackermd.com/2013/language-use-on-github/. It counts pushes or commits made by all users across all repositories on GitHub and their associated repository languages to determine the correlation between languages.
Related Links:
Google BigQuery
BigQuery SQL reference
Python Pandas for data analysis
Python matplotlib for data visualization
End of explanation
%%bq tables describe --name "publicdata.samples.github_timeline"
Explanation: Understanding the GitHub Timeline
We're going to work with the GitHub Archive project data. It contains all GitHub events (commits, pushes, forks, watches, etc.) along with metadata about the events (e.g., user, time, place). The schema and sample data will help us to further understand this dataset.
End of explanation
table = bq.Table('publicdata.samples.github_timeline')
table.metadata.rows
Explanation: The GitHub timeline is a large dataset. A quick lookup of table metadata gives us the row count.
End of explanation
bq.Query.from_table(table).execute(sampling=bq.Sampling.default(
fields=['repository_name',
'repository_language',
'created_at',
'type'])).result()
Explanation: With over 290 million events, it is important to be able to sample the data. The sample method allows us to sample tables or queries.
End of explanation
%%bq query --name popular_languages
SELECT repository_language AS language, COUNT(repository_language) as pushes
FROM `publicdata.samples.github_timeline`
WHERE type = 'PushEvent'
AND repository_language != ''
AND CAST(created_at AS TIMESTAMP) >= TIMESTAMP("2012-01-01")
AND CAST(created_at AS TIMESTAMP) < TIMESTAMP("2013-01-01")
GROUP BY language
ORDER BY pushes DESC
LIMIT 25
%%bq query --name pushes --subqueries popular_languages
SELECT timeline.actor AS user,
timeline.repository_language AS language,
COUNT(timeline.repository_language) AS push_count
FROM `publicdata.samples.github_timeline` AS timeline
JOIN popular_languages AS languages
ON timeline.repository_language = languages.language
WHERE type = 'PushEvent'
AND CAST(created_at AS TIMESTAMP) >= TIMESTAMP("2012-01-01")
AND CAST(created_at AS TIMESTAMP) < TIMESTAMP("2013-01-01")
GROUP BY user, language
%%bq query --name pushes_sample --subqueries popular_languages pushes
SELECT user, language, push_count
FROM pushes
WHERE MOD(ABS(FARM_FINGERPRINT(user)), 100) < 5
ORDER BY push_count DESC
Explanation: Querying the Data
The first step in our analysis to correlate languages is retrieving the appropriate slice of data.
We'll need to retrieve the list of PushEvents from the timeline. This is a large list of events, and there are several ways to get a more manageable resultset:
Limiting the analysis to the top 25 languages (from an otherwise long list of languages that simply add noise).
Limiting the analysis to just pushes made during 1 year time window; we will use 2012.
Further sampling to get a small, but still interesting sample set to further analyze for correlation.
End of explanation
popular_languages.execute().result()
query = pushes_sample.execute()
query.result()
Explanation: Checking the Results
End of explanation
df = query.result().to_dataframe()
Explanation: Analyzing the Data
The next step is to integrate the BigQuery SQL queries with the analysis capabilities provided by Python and pandas. The query defined earlier can easily be materialized into a pandas dataframe.
End of explanation
df[:10]
summary = df['user'].describe()
print('DataFrame contains %d rows with %d unique users' % (summary['count'], summary['unique']))
Explanation: Great! We've successfully populated a pandas dataframe with our dataset. Let's dig into our dataset a bit further using the dataframe to see if our data makes sense.
End of explanation
print('%s has contributions in %d languages' % (summary['top'], summary['freq']))
df[df['user'] == summary['top']]
Explanation: Let's see who is the most polyglot user of the mix.
End of explanation
dfp = df.pivot(index = 'user', columns = 'language', values = 'push_count').fillna(0)
dfp
Explanation: Reshaping the Data
So far, our results have multiple rows for each user -- specifically, one per language. The next step is to pivot that data, so that we have one row per user, and one column per language. The resulting matrix will be extremely sparse. We'll just fill in 0 (no pushes) for user/language pairs that have no data.
Pandas offers a built-in pivot() method, which helps here.
End of explanation
corr = dfp.corr(method = 'spearman')
corr
Explanation: Now, compute the correlation for each pair of languages (again, built into the pandas library).
End of explanation
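Before plotting, a small follow-up sketch (using only the corr DataFrame computed above; the indexing shown is one reasonable way, not the only one) can pull out the most and least correlated language pairs directly:
```python
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)  # keep each language pair once
pairs = corr.where(mask).stack().sort_values()
print(pairs.tail(5))  # most positively correlated language pairs
print(pairs.head(5))  # least (or most negatively) correlated pairs
```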
# Plotting helper function
def plot_correlation(data):
min_value = 0
max_value = 0
for i in range(len(data.columns)):
for j in range(len(data.columns)):
if i != j:
min_value = min(min_value, data.iloc[i, j])
max_value = max(max_value, data.iloc[i, j])
span = max(abs(min_value), abs(max_value))
span = round(span + .05, 1)
items = data.columns.tolist()
ticks = np.arange(0.5, len(items) + 0.5)
plot.figure(figsize = (11, 7))
plot.pcolor(data.values, cmap = 'RdBu', vmin = -span, vmax = span)
plot.colorbar().set_label('correlation')
plot.xticks(ticks, items, rotation = 'vertical')
plot.yticks(ticks, items)
plot.show()
plot_correlation(corr)
Explanation: Visualizing the Results
The correlation table, above, contains the results, but isn't very telling. A plot will make the data speak a lot louder, and highlight the highly correlated languages, as well as the highly uncorrelated languages.
End of explanation |
11,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Latent Dirichlet Allocation for Text Data
In this assignment you will
apply standard preprocessing techniques on Wikipedia text data
use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model
explore and interpret the results, including topic keywords and topic assignments for documents
Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.
With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document.
Note to Amazon EC2 users
Step1: In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be.
Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps
Step2: Model fitting and interpretation
In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module.
Note
Step3: GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
Step4: It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will
get the top words in each topic and use these to identify topic themes
predict topic distributions for some example documents
compare the quality of LDA "nearest neighbors" to the NN output from the first assignment
understand the role of model hyperparameters alpha and gamma
Load a fitted topic model
The method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization.
It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model.
We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.
Step5: Identifying topic themes by top words
We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA.
In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word
Step6: We propose the following themes for each topic
Step7: Measuring the importance of top words
We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.
We'll do this with two visualizations of the weights for the top words in each topic
Step8: In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!
Next we plot the total weight assigned by each topic to its top 10 words
Step9: Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.
Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all.
Topic distributions for some example documents
As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.
We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition.
Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama
Step10: To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document
Step11: Quiz Question
Step12: Next we add the TF-IDF document representations
Step13: For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model
Step14: Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist
Step15: Notice that that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents.
With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example
Step16: Changing the hyperparameter alpha
Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.
Step17: Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.
Quiz Question
Step18: Quiz Question
Step19: From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary.
Quiz Question
Step20: Quiz Question | Python Code:
import graphlab as gl
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(gl.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
# import wiki data
wiki = gl.SFrame('people_wiki.gl/')
wiki
Explanation: Latent Dirichlet Allocation for Text Data
In this assignment you will
apply standard preprocessing techniques on Wikipedia text data
use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model
explore and interpret the results, including topic keywords and topic assignments for documents
Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.
With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Text Data Preprocessing
We'll start by importing our familiar Wikipedia dataset.
The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.
End of explanation
wiki_docs = gl.text_analytics.count_words(wiki['text'])
wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)
Explanation: In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be.
Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from GraphLab Create:
End of explanation
topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)
Explanation: Model fitting and interpretation
In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module.
Note: This may take several minutes to run.
End of explanation
topic_model
Explanation: GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
End of explanation
topic_model = gl.load_model('topic_models/lda_assignment_topic_model')
Explanation: It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will
get the top words in each topic and use these to identify topic themes
predict topic distributions for some example documents
compare the quality of LDA "nearest neighbors" to the NN output from the first assignment
understand the role of model hyperparameters alpha and gamma
Load a fitted topic model
The method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization.
It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model.
We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.
End of explanation
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]
sum(topic_model.get_topics(topic_ids=[2], num_words=50)['score'])
Explanation: Identifying topic themes by top words
We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA.
In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme and that all the topics are relatively distinct.
We can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic.
Quiz Question: Identify the top 3 most probable words for the first topic.
Quiz Question: What is the sum of the probabilities assigned to the top 50 words in the 3rd topic?
Let's look at the top 10 words for each topic to see if we can identify any themes:
End of explanation
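For the first quiz question, one way to read off just the top three words of the first topic is sketched below; it assumes the default get_topics() output exposes a 'word' column alongside the 'score' column used above.
```python
topic_model.get_topics(topic_ids=[0], num_words=3)['word']
```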
themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \
'art and publishing','Business','international athletics','Great Britain and Australia','international music']
Explanation: We propose the following themes for each topic:
topic 0: Science and research
topic 1: Team sports
topic 2: Music, TV, and film
topic 3: American college and politics
topic 4: General politics
topic 5: Art and publishing
topic 6: Business
topic 7: International athletics
topic 8: Great Britain and Australia
topic 9: International music
We'll save these themes for later:
End of explanation
for i in range(10):
plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])
plt.xlabel('Word rank')
plt.ylabel('Probability')
plt.title('Probabilities of Top 100 Words in each Topic')
Explanation: Measuring the importance of top words
We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.
We'll do this with two visualizations of the weights for the top words in each topic:
- the weights of the top 100 words, sorted by the size
- the total weight of the top 10 words
Here's a plot for the top 100 words by weight in each topic:
End of explanation
top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)]
ind = np.arange(10)
width = 0.5
fig, ax = plt.subplots()
ax.bar(ind-(width/2),top_probs,width)
ax.set_xticks(ind)
plt.xlabel('Topic')
plt.ylabel('Probability')
plt.title('Total Probability of Top 10 Words in each Topic')
plt.xlim(-0.5,9.5)
plt.ylim(0,0.15)
plt.show()
Explanation: In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!
Next we plot the total weight assigned by each topic to its top 10 words:
End of explanation
obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])
pred1 = topic_model.predict(obama, output_type='probability')
pred2 = topic_model.predict(obama, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))
Explanation: Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.
Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all.
Topic distributions for some example documents
As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.
We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition.
Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama:
End of explanation
def average_predictions(model, test_document, num_trials=100):
avg_preds = np.zeros((model.num_topics))
for i in range(num_trials):
avg_preds += model.predict(test_document, output_type='probability')[0]
avg_preds = avg_preds/num_trials
result = gl.SFrame({'topics':themes, 'average predictions':avg_preds})
result = result.sort('average predictions', ascending=False)
return result
print average_predictions(topic_model, obama, 100)
bush = gl.SArray([wiki_docs[int(np.where(wiki['name']=='George W. Bush')[0])]])
pred11 = topic_model.predict(bush, output_type='probability')
pred22 = topic_model.predict(bush, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred11[0], 'predictions (second draw)':pred22[0]}))
print average_predictions(topic_model, bush, 100)
ger = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Steven Gerrard')[0])]])
pred111 = topic_model.predict(ger, output_type='probability')
pred222 = topic_model.predict(ger, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred111[0], 'predictions (second draw)':pred222[0]}))
print average_predictions(topic_model, ger, 100)
Explanation: To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:
End of explanation
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
Explanation: Quiz Question: What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions.
Quiz Question: What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions.
Comparing LDA to nearest neighbors for document retrieval
So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations.
In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment.
We'll start by creating the LDA topic distribution representation for each document:
End of explanation
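As a hedged aside (the exact SFrame indexing may vary between GraphLab Create versions), the cosine distance that the nearest-neighbors models below rely on can also be computed by hand from two articles' LDA vectors:
```python
v1 = np.array(wiki[wiki['name'] == 'Barack Obama']['lda'][0])
v2 = np.array(wiki[wiki['name'] == 'George W. Bush']['lda'][0])
print(1 - np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```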
wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])
wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])
Explanation: Next we add the TF-IDF document representations:
End of explanation
model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],
method='brute_force', distance='cosine')
Explanation: For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model:
End of explanation
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
k5000 = model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)
k5000[k5000['reference_label'] == 'Mariano Rivera']
l5000 = model_lda_rep.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)
l5000[l5000['reference_label'] == 'Mariano Rivera']
Explanation: Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:
End of explanation
tpm_low_alpha = gl.load_model('topic_models/lda_low_alpha')
tpm_high_alpha = gl.load_model('topic_models/lda_high_alpha')
Explanation: Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents.
With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada.
Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies.
Quiz Question: Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.)
Quiz Question: Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.)
Understanding the role of LDA model hyperparameters
Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic.
In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words.
Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model.
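As a rough numerical intuition only (a toy smoothing example, not the model's actual update rule), adding a pseudo-count alpha to a document's raw topic counts before normalising shows how a larger alpha flattens the resulting topic distribution:
import numpy as np
raw_counts = np.array([9.0, 1.0, 0.0, 0.0, 0.0])  # hypothetical per-topic counts for one document
for alpha in (0.1, 1.0, 50.0):
    smoothed = (raw_counts + alpha) / (raw_counts + alpha).sum()
    print(alpha, np.round(smoothed, 3))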
Quiz Question: What was the value of alpha used to fit our original topic model?
Quiz Question: What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words.
We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model:
- tpm_low_alpha, a model trained with alpha = 1 and default gamma
- tpm_high_alpha, a model trained with alpha = 50 and default gamma
End of explanation
a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1]
b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1]
c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1]
ind = np.arange(len(a))
width = 0.3
def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab):
fig = plt.figure()
ax = fig.add_subplot(111)
b1 = ax.bar(ind, a, width, color='lightskyblue')
b2 = ax.bar(ind+width, b, width, color='lightcoral')
b3 = ax.bar(ind+(2*width), c, width, color='gold')
ax.set_xticks(ind+width)
ax.set_xticklabels(range(10))
ax.set_ylabel(ylab)
ax.set_xlabel(xlab)
ax.set_ylim(0,ylim)
ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param])
plt.tight_layout()
param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha',
xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article')
pk = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]])
pk1 = tpm_low_alpha.predict(pk, output_type='probability')
pk2 = tpm_low_alpha.predict(pk, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pk1[0], 'predictions (second draw)':pk2[0]}))
print average_predictions(tpm_low_alpha, pk, 100)
Explanation: Changing the hyperparameter alpha
Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.
End of explanation
pk = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]])
pk1 = tpm_high_alpha.predict(pk, output_type='probability')
pk2 = tpm_high_alpha.predict(pk, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pk1[0], 'predictions (second draw)':pk2[0]}))
print average_predictions(tpm_high_alpha, pk, 100)
Explanation: Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.
Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the low alpha model? Use the average results from 100 topic predictions.
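A hedged sketch of the counting step (assuming average_predictions returns the 10 averaged topic probabilities in an array-like object; adapt to its actual return type):
avg = np.array(average_predictions(tpm_low_alpha, pk, 100))
print(np.sum((avg > 0.3) | (avg < 0.05)))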
End of explanation
del tpm_low_alpha
del tpm_high_alpha
tpm_low_gamma = gl.load_model('topic_models/lda_low_gamma')
tpm_high_gamma = gl.load_model('topic_models/lda_high_gamma')
a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
ind = np.arange(len(a_top))  # one bar position per topic; avoids relying on the stale `a` from the alpha section
width = 0.3
param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',
xlab='Topics (sorted by weight of top 100 words)',
ylab='Total Probability of Top 100 Words')
param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',
xlab='Topics (sorted by weight of bottom 1000 words)',
ylab='Total Probability of Bottom 1000 Words')
Explanation: Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the high alpha model? Use the average results from 100 topic predictions.
Changing the hyperparameter gamma
Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models.
Now we will consider the following two models:
- tpm_low_gamma, a model trained with gamma = 0.02 and default alpha
- tpm_high_gamma, a model trained with gamma = 0.5 and default alpha
End of explanation
sum([len(tpm_low_gamma.get_topics(topic_ids=[i], num_words=5000, cdf_cutoff = 0.5)['score']) for i in range(10)])/10.0
Explanation: From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary.
Quiz Question: For each topic of the low gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument).
End of explanation
sum([len(tpm_high_gamma.get_topics(topic_ids=[i], num_words=5000, cdf_cutoff = 0.5)) for i in range(10)])/10.0
tpm_high_gamma.get_topics(topic_ids=[1],num_words=1000, cdf_cutoff = 0.5)
Explanation: Quiz Question: For each topic of the high gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument).
End of explanation |
11,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
cosmo_derived
This plugin calculates the following "derived" cosmological quantities
Step1: To use cosmo_derived, first call the set_params function to set all the cosmological parameters, then call the other functions which will subsequently use those values. Here's the signature for set_params
Step2: Plotting $H(z)$,
Step3: Here's the angular size of sound horizon at Planck's best-fit $z_*$ (Table 2, Planck XVI). The number for $\theta_s$ in that same table is $0.0104136$, or a difference of $0.09 \sigma$. This is likely due to differences in numerical values for physical constants that were used, or numerical integration error.
Step4: We can also use this plugin to convert $\theta$ to $H_0$. Here $\theta$ refers to $\theta_{\rm MC}$ which uses the Hu & Sugiyama fitting formula for $z_{\rm drag}$.
Step5: This plugin is written in Cython and is highly optimizied, so its pretty fast. | Python Code:
from cosmoslik import *
cosmo_derived = get_plugin('models.cosmo_derived')()
Explanation: cosmo_derived
This plugin calculates the following "derived" cosmological quantities:
* $H(z)$
* $D_A(z)$
* $r_s(z)$
* $\theta_s(z)$
* $z_{\rm drag}$ (Hu & Sugiyama fitting formula)
It can also be used as a $\theta$ to $H_0$ converter.
Credit: Lloyd Knox, code adapted by Marius Millea
End of explanation
help(cosmo_derived.set_params)
cosmo_derived.set_params(H0=67.04, ombh2=0.022032, omch2=0.12038, omk=0, mnu=0.06, massive_neutrinos=1, massless_neutrinos=2.046)
Explanation: To use cosmo_derived, first call the set_params function to set all the cosmological parameters, then call the other functions which will subsequently use those values. Here's the signature for set_params:
End of explanation
z=logspace(-2,6)
loglog(z,list(map(cosmo_derived.Hubble,z)))
xlabel(r'$z$',size=16)
ylabel(r'$H(z) \, [\rm km/s/Mpc]$',size=16);
Explanation: Plotting $H(z)$,
End of explanation
z_star = 1090.48
cosmo_derived.theta_s(z_star)
Explanation: Here's the angular size of sound horizon at Planck's best-fit $z_*$ (Table 2, Planck XVI). The number for $\theta_s$ in that same table is $0.0104136$, or a difference of $0.09 \sigma$. This is likely due to differences in numerical values for physical constants that were used, or numerical integration error.
End of explanation
cosmo_derived.theta2hubble(0.0104)
Explanation: We can also use this plugin to convert $\theta$ to $H_0$. Here $\theta$ refers to $\theta_{\rm MC}$ which uses the Hu & Sugiyama fitting formula for $z_{\rm drag}$.
End of explanation
%%timeit
cosmo_derived.theta_s(z_star)
%%timeit
cosmo_derived.theta2hubble(0.0104)
Explanation: This plugin is written in Cython and is highly optimized, so it's pretty fast.
End of explanation |
11,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Delaunay
Here, we'll perform various analysis by constructing graphs and measure properties of those graphs to learn more about the data
Step1: We'll start with just looking at analysis in euclidian space, then thinking about weighing by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (z-axis in brain) through cortical layers, an interesting thing to do would be compare properties of the graphs on each layer (ie how does graph connectivity vary as we move through layers).
Let's start by triangulating our data. We'll use Delaunay on each y layer first. Putting our data in the right format for doing graph analysis...
Step2: Now that our data is in the right format, we'll create 52 delaunay graphs. Then we'll perform analyses on these graphs. A simple but useful metric would be to analyze edge length distributions in each layer.
Step3: We're going to need a method to get edge lengths from 2D centroid pairs
Step4: Realizing after all this that simply location is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the "centroids" are no different
Step5: There is no distance between the two. Therefore it is perhaps more useful to consider a graph that considers node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spacial location and density similarity.
Drawing Graphs
First we look at the default networkx graph plotting
Step6: This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.
Self Loops
Step7: Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some though to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.
The answer to this question is very simple. In a RAG, there are no self-loops by definition. Self loops are edges that form a connection between a node and itself.
<img src="../../docs/figures/selfloop.png" width="100">
To see whether the graphs are formed properly, let's look at an adjacency lists
Step8: Compare that to the test data
Step9: X-Layers
Step10: We can see here the number of edges is low in that area that does not have many synapses. It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here | Python Code:
import csv
from scipy.stats import kurtosis
from scipy.stats import skew
from scipy.spatial import Delaunay
import numpy as np
import math
import skimage
import matplotlib.pyplot as plt
import seaborn as sns
from skimage import future
import networkx as nx
from ragGen import *
%matplotlib inline
sns.set_color_codes("pastel")
from scipy.signal import argrelextrema
# Read in the data
data = open('../../data/data.csv', 'r').readlines()
fieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']
reader = csv.reader(data)
reader.next()
rows = [[int(col) for col in row] for row in reader]
# These will come in handy later
sorted_x = sorted(list(set([r[0] for r in rows])))
sorted_y = sorted(list(set([r[1] for r in rows])))
sorted_z = sorted(list(set([r[2] for r in rows])))
Explanation: Delaunay
Here, we'll perform various analyses by constructing graphs and measuring properties of those graphs to learn more about the data
End of explanation
a = np.array(rows)
b = np.delete(a, np.s_[3::],1)
# Separate layers - have to do some wonky stuff to get this to work
b = sorted(b, key=lambda e: e[1])
b = np.array([v.tolist() for v in b])
b = np.split(b, np.where(np.diff(b[:,1]))[0]+1)
Explanation: We'll start with analysis in Euclidean space only, then think about weighting by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (the z-axis in the brain) through cortical layers, an interesting thing to do would be to compare properties of the graphs on each layer (i.e., how does graph connectivity vary as we move through layers?).
Let's start by triangulating our data. We'll use Delaunay on each y layer first. Putting our data in the right format for doing graph analysis...
End of explanation
graphs = []
centroid_list = []
for layer in b:
centroids = np.array(layer)
# get rid of the y value - not relevant anymore
centroids = np.delete(centroids, 1, 1)
centroid_list.append(centroids)
graph = Delaunay(centroids)
graphs.append(graph)
Explanation: Now that our data is in the right format, we'll create 52 Delaunay graphs. Then we'll perform analyses on these graphs. A simple but useful metric would be to analyze the edge length distribution in each layer.
End of explanation
def get_d_edge_length(edge):
(x1, y1), (x2, y2) = edge
return math.sqrt((x2-x1)**2 + (y2-y1)**2)
edge_length_list = [[]]
tri_area_list = [[]]
for i, del_graph in enumerate(graphs):
    tri_areas = []
    edge_lengths = []
    triangles = []
    layer_centroids = centroid_list[i]  # use this layer's centroids, not the last ones computed above
    for t in layer_centroids[del_graph.simplices]:
        triangles.append(t)
        p1, p2, p3 = [tuple(map(int, list(v))) for v in t]  # renamed to avoid shadowing the outer variable b
        edge_lengths.append(get_d_edge_length((p1, p2)))
        edge_lengths.append(get_d_edge_length((p1, p3)))
        edge_lengths.append(get_d_edge_length((p2, p3)))
        try:
            # Triangle is assumed to come from a geometry library (e.g. sympy.geometry); it is not imported in this excerpt
            tri_areas.append(float(Triangle(p1, p2, p3).area))
        except:
            continue
    edge_length_list.append(edge_lengths)
    tri_area_list.append(tri_areas)
Explanation: We're going to need a method to get edge lengths from 2D centroid pairs
End of explanation
np.subtract(centroid_list[0], centroid_list[1])
Explanation: Realizing after all this that simply location is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the "centroids" are no different:
End of explanation
real_volume = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
real_volume[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
nx_graphs = []
for delaunay in graphs:
    # build a networkx graph from the Delaunay triangulation by adding one edge per simplex side
    G = nx.Graph()
    for simplex in delaunay.simplices:
        i, j, k = simplex
        G.add_edges_from([(i, j), (i, k), (j, k)])
    nx_graphs.append(G)
for G in nx_graphs:
    plt.figure()
    nx.draw(G, node_size=100)
Explanation: There is no distance between the two. Therefore it is perhaps more useful to consider a graph that considers node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spatial location and density similarity.
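One purely illustrative sketch of such a hybrid similarity (an assumption of ours, not part of the original analysis: coords holds voxel coordinates and counts a matching array of synapse counts) is to blend Euclidean distance with the difference in synapse counts when weighting an edge:
def hybrid_edge_weight(i, j, coords, counts, alpha=0.5):
    # small alpha emphasises density similarity, large alpha emphasises spatial proximity
    spatial = np.linalg.norm(np.asarray(coords[i], dtype=float) - np.asarray(coords[j], dtype=float))
    density = abs(float(counts[i]) - float(counts[j]))
    return alpha * spatial + (1.0 - alpha) * density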
Drawing Graphs
First we look at the default networkx graph plotting:
End of explanation
num_self_loops = []
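# NOTE: y_rags is assumed to hold one skimage RAG per y-layer (built the same way as x_rags further below); it is not defined in this excerpt.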
for rag in y_rags:
num_self_loops.append(rag.number_of_selfloops())
num_self_loops
Explanation: This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.
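A minimal sketch of that improvement (assuming the node indices of nx_graphs[0] line up with the rows of centroid_list[0]):
pos = {node: tuple(centroid_list[0][node]) for node in nx_graphs[0].nodes()}
plt.figure()
nx.draw(nx_graphs[0], pos=pos, node_size=20)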
Self Loops
End of explanation
# y_rags[0].adjacency_list()
Explanation: Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some thought to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.
The answer to this question is very simple. In a RAG, there are no self-loops by definition. Self loops are edges that form a connection between a node and itself.
<img src="../../docs/figures/selfloop.png" width="100">
To see whether the graphs are formed properly, let's look at an adjacency list:
End of explanation
# Test Data
test = np.array([[1,2],[3,4]])
test_rag = skimage.future.graph.RAG(test)
test_rag.adjacency_list()
Explanation: Compare that to the test data:
End of explanation
real_volume_x = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
real_volume_x[ sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
x_rags = []
count = 0;
for layer in real_volume_x:
count = count + 1
x_rags.append(skimage.future.graph.RAG(layer))
num_edges_x = []
for rag in x_rags:
num_edges_x.append(rag.number_of_edges())
sns.barplot(x=range(len(num_edges_x)), y=num_edges_x)
sns.plt.show()
Explanation: X-Layers
End of explanation
plt.imshow(np.amax(real_volume, axis=2), interpolation='nearest')
plt.show()
# edge_length_list[3]
# tri_area_list[3]
# triangles
# Note for future
# del_features['d_edge_length_mean'] = np.mean(edge_lengths)
# del_features['d_edge_length_std'] = np.std(edge_lengths)
# del_features['d_edge_length_skew'] = scipy.stats.skew(edge_lengths)
# del_features['d_edge_length_kurtosis'] = scipy.stats.kurtosis(edge_lengths)
Explanation: We can see here the number of edges is low in that area that does not have many synapses. It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here:
End of explanation |
11,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing Data for DRQN
We take the data from data generator and save them into traces of (s,a,r,sp) tuples.
Each trajectory corresponds to a trace.
If trajectory has length n, then trace will have length n-1. (since we need the next state sp)
Step1: Creating a DRQN model and training it
Step2: Testing the model
Step3: Final Test Function
Step4: General Workflow
1. Create Data Set
Step5: 2. Create Model and Train
Step6: 3. Test Model in "real world" and calculate post test scores | Python Code:
data = d_utils.load_data(filename="../synthetic_data/test-n10000-l3-random.pickle")
dqn_data = d_utils.preprocess_data_for_dqn(data, reward_model="dense")
# Single Trace
print (dqn_data[0])
# First tuple in a trace
s,a,r,sp = dqn_data[0][0]
print (s)
print (a)
print (r)
print (sp)
# Last tuple
s,a,r,sp = dqn_data[0][-1]
print (s)
print (a)
print (r)
print (sp)
dqn_data_train, dqn_data_test = train_test_split(dqn_data, test_size=0.2)
Explanation: Preprocessing Data for DRQN
We take the data from the data generator and save it into traces of (s, a, r, sp) tuples.
Each trajectory corresponds to a trace.
If a trajectory has length n, then its trace will have length n-1, since we need the next state sp; for example, a length-3 trajectory yields the two tuples (s0, a0, r0, s1) and (s1, a1, r1, s2).
End of explanation
model_id = "test_model_drqn"
# Create the model object
model = drqn.DRQNModel(model_id, timesteps=2)
# Initialize trainer object inside the model
model.init_trainer()
# Creating training and validation data
train_buffer = ExperienceBuffer()
train_buffer.buffer = dqn_data_train
train_buffer.buffer_sz = len(train_buffer.buffer)
val_buffer = ExperienceBuffer()
val_buffer.buffer = dqn_data_test
val_buffer.buffer_sz = len(val_buffer.buffer)
# train the model (uses the previously initialized trainer object)
date_time_string = datetime.datetime.now().strftime("%m-%d-%Y_%H-%M-%S")
run_id = "{}".format(date_time_string)
model.train(train_buffer, val_buffer, n_epoch=2,
run_id=run_id, load_checkpoint=True)
# init evaluator of the model
model.init_evaluator()
# Create inputs (states / observations so far) to use for predictions
from drqn import stack_batch
train_batch = train_buffer.sample_in_order(4)
# make sure that batches are over multiple timesteps, should be of shape (batch_sz, n_timesteps, ?)
s_batch_train = stack_batch(train_batch[:, :, 0]) # current states
# Use model to predict next action
actions, q_vals = model.predict(s_batch_train, last_timestep_only=True)
q_vals
actions
# if we want to predict on data with a different number of timesteps than we trained on,
# create a new model but using the same checkpoint
eval_model = drqn.DRQNModel(model_id, timesteps=10)
eval_model.init_evaluator()
# now the internal RNN will be unrolled over 10 timesteps.
# You can still pass in inputs that have fewer than 10, in which case remaining timesteps will be padded.
eval_model.predict(s_batch_train, last_timestep_only=True)
Explanation: Creating a DRQN model and training it
End of explanation
from drqn_tests import *
n_trajectories = 10
n_concepts = 5
horizon = 6
model_id = "test_model_drqn"
from simple_mdp import create_custom_dependency
dgraph = create_custom_dependency()
test_model = drqn.DRQNModel(model_id=model_id, timesteps=horizon)
test_model.init_evaluator()
learn_prob = 0.15
student = st.Student(n=n_concepts, p_trans_satisfied=learn_prob, p_trans_not_satisfied=0.0, p_get_ex_correct_if_concepts_learned=1.0)
k = test_drqn_single(dgraph, student, horizon, test_model, DEBUG=True)
k
test_drqn_chunk(n_trajectories, dgraph, student, model_id, horizon)
Explanation: Testing the model
End of explanation
test_drqn(model_id=model_id)
Explanation: Final Test Function:
End of explanation
n_concepts = 4
use_student2 = True
student2_str = '2' if use_student2 else ''
learn_prob = 0.15
lp_str = '-lp{}'.format(int(learn_prob*100)) if not use_student2 else ''
n_students = 100000
seqlen = 7
filter_mastery = False
filter_str = '' if not filter_mastery else '-filtered'
policy = 'random'
filename = 'test{}-n{}-l{}{}-{}{}.pickle'.format(student2_str, n_students, seqlen,
lp_str, policy, filter_str)
#concept_tree = sm.create_custom_dependency()
concept_tree = cdg.ConceptDependencyGraph()
concept_tree.init_default_tree(n_concepts)
if not use_student2:
test_student = st.Student(n=n_concepts,p_trans_satisfied=learn_prob, p_trans_not_satisfied=0.0, p_get_ex_correct_if_concepts_learned=1.0)
else:
test_student = st.Student2(n_concepts)
print(filename)
print ("Initializing synthetic data sets...")
dg.generate_data(concept_tree, student=test_student, n_students=n_students, filter_mastery=filter_mastery, seqlen=seqlen, policy=policy, filename="{}{}".format(dg.SYN_DATA_DIR, filename))
print ("Data generation completed. ")
data = d_utils.load_data(filename="../synthetic_data/{}".format(filename))
dqn_data = d_utils.preprocess_data_for_dqn(data, reward_model="dense")
dqn_data_train, dqn_data_test = train_test_split(dqn_data, test_size=0.2)
# Creating training and validation data
train_buffer = ExperienceBuffer()
train_buffer.buffer = dqn_data_train
train_buffer.buffer_sz = len(train_buffer.buffer)
val_buffer = ExperienceBuffer()
val_buffer.buffer = dqn_data_test
val_buffer.buffer_sz = len(val_buffer.buffer)
Explanation: General Workflow
1. Create Data Set
End of explanation
model_id = "test2_model_drqn_mid"
model = drqn.DRQNModel(model_id, timesteps=seqlen-1)
model.init_trainer()
# train the model (uses the previously initialized trainer object)
date_time_string = datetime.datetime.now().strftime("%m-%d-%Y_%H-%M-%S")
run_id = "{}".format(date_time_string)
model.train(train_buffer, val_buffer, n_epoch=32,
run_id=run_id, load_checkpoint=True)
Explanation: 2. Create Model and Train
End of explanation
test_drqn(model_id=model_id)
Explanation: 3. Test Model in "real world" and calculate post test scores
End of explanation |
11,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Science Tutorial 01 @ Data Science Society
้ฃ้ ้่ซ(Kaoru Nasuno)/ ๆฑไบฌๅคงๅญฆ(The University of Tokyo)
ใใผใฟใตใคใจใณในใฎๅบ็ค็ใชในใญใซใ่บซใซใคใใ็บใฎใใฅใผใใชใขใซใงใใ
KaggleใฎใณใณใใใฃใทใงใณใงใใRECRUIT Challenge, Coupon Purchase Predictionใฎใใผใฟใปใใใ้กๆใจใใฆใ
ใใผใฟใตใคใจใณในใฎๅบ็ค็ใชในใญใซใซ่งฆใ๏ผ็่งฃใฎๅๅฐใ้คใใใจใ็ฎ็ใจใใพใใ
(้ซใไบๆธฌ็ฒพๅบฆใๅบใใใจใ็ฎ็ใงใฏใชใใงใ)
ใพใ ใๆธใใใใงใใฆใ่ฆๆใซๅใใใฆ่ชคใใฎไฟฎๆญฃใๅ ็ญใใใฆใใไบๅฎใงใใไฝใใๆฐใฅใใฎ็นใใใใฐใ้ฃ็ตก้ ใใพใใจๅนธใใงใใ
ๅฏพ่ฑกใใผใฟ
RECRUIT Challenge, Coupon Purchase Predictionใฎใใผใฟใปใใใ
ใฆใผใถ็ป้ฒใๅฉ็จ่ฆ็ดใซๅๆใใฆใใฆใณใญใผใใใฆใใ ใใใ
https
Step1: ใขใธใฅใผใซใฎimportใๅคๆฐใฎๅๆๅ
ๆฌกใซใใใฎใใฅใผใใชใขใซใงๅฉ็จใใใขใธใฅใผใซใฎimportใไธ้จใฎๅคๆฐใฎๅๆๅใ่กใใพใใ
python
%matplotlib inline
ใฏ ipython notebookใซ็นๆใฎใใธใใฏใณใใณใใจใใใใฎใงใใ
pythonใฎๆๆณใจ็ฐใชใใพใใใmatplotlibใจใใ็ปๅใๆ็ปใใใฉใคใใฉใชใฎๅบๅ็ตๆใใใฉใฆใถไธใซ่กจ็คบใใใใใใซ่จญๅฎใใใใฎใงใใ
(ใใใงใฏใใใพใใชใ็จๅบฆใซ่ใใฆใใ ใใใ)
Step2: 2. ใใผใฟใใผในใธใฎใใผใฟใฎๆ ผ็ด
ใใผใฟใใผในใจใฏ
ใใผใฟใใผในใจใฏใ่ฒใ
ใชใใผใฟใฎ็ฎ็ใใผในใงใฎ็ฎก็ใใๅน็็ใชใใผใฟๅ็
ง๏ผๆค็ดขใๅฏ่ฝใซใใใใฎใงใใ
ใใผใฟใใผในใฎไธญใซใฏ่คๆฐใฎใใผใใซใใใใพใใ
ใใผใใซใฏใกใใใฉในใใฌใใใทใผใใฎใใใซใชใฃใฆใใฆใใใใใใฎๅใซๅๅใใใใ1่กใ1ใคใฎใใผใฟใจใชใใคใกใผใธใงใใ
ใใผใฟใฎๆ ผ็ด
ใใผใฟใฎๆ ผ็ดใฎๆตใใฏๅคงใพใใซใ
1. ใใผใใซใฎไฝๆ
2. ใใผใใซใธใฎใคใณใตใผใ
3. errorใwarningใฎ็ขบ่ช
ใฎ3ใคใฎในใใใใจใชใใพใใ
kaggleใฎใใผใธใซใใผใใซใฎๅฎ็พฉใๆธใใฆใใใฎใงใใใใงใฏใใใฎ้ใใซไฝๆใใพใใ
ใพใใฏใuser_listใฎใใผใใซไฝๆใฏใจใชใจๅฎ่กใงใใ
MySQLใฎCREATE TABLEๆงๆใซใคใใฆใฏใhttp
Step3: ๆฌกใซใใใผใฟใฎใคใณใตใผใใงใใ
csvใใกใคใซใชใฉใdumpใใใใใกใคใซใใMySQLใซใคใณใตใผใใใๅ ดๅใซใฏLOAD DATA INFILEๆงๆใๅฉ็จใใพใใ
LOAD DATA INFILEๆงๆใซใคใใฆใฏใ http
Step4: ใใผใใซใฎไฝๆใซๅฉ็จใใCREATE TABLEๆใซใฏใ
ใใผใใซใฎๅใฎๅฎ็พฉใงใฏใชใใใคใณใใใฏในใจๅผใฐใใใใฎใฎๅฎ็พฉใๅซใพใใฆใใพใใ
ใคใณใใใฏในใจใฏใใผใฟใฎๆค็ดขใ้ซ้ๅใใใใฎใงใใ
PRIMARY KEY
ใใผใใซๅ
ใงuniqueใงใใใคใๆค็ดขใใใซใฉใ ใซไปไธใใใ
ไพใใฐใuser_listใใผใใซใฎuser_id_hashใฏๅฝ่ฉฒใใผใใซใงใใฆใใผใฏใงใใ๏ผใใคใใฆใผใถใฎๆค็ดขใซใใ็จใใใใใPRIMARY KEYใไปไธใใฆใใใๆนใ่ฏใใ
INDEX
ใใผใใซๅ
ใงuniqueใงใฏใชใใใๆค็ดขใใใซใฉใ ใซไปไธใใใไพใใฐใใฆใผใถใๆงๅฅใๅนด้ฝขใซๅฟใใฆๆค็ดขใป้่จใใฆใๅฒๅใ่ฆใใๅ ดๅใซใฏใsex_idใageใชใฉใฎใซใฉใ ใซไปไธใใฆใใใๆนใ่ฏใใ
TODO
MYSQL้ขๆฐใชใฉใฎ่ชฌๆใฎๅ ็ญใ
Exercise
ไธ่จใฎไปใฎใใกใคใซใซใคใใฆใๅๆงใซใใผใใซใไฝๆใใใใผใฟใใคใณใตใผใใใฆใใ ใใใ
- prefecture_locations.csv
- coupon_area_train.csv, coupon_area_test.csv
- coupon_detail_train.csv
- coupon_visit_train.csv
- coupon_list_train.csv, coupon_list_test.csv
ๅฎ่ฃ
ไพ
prefecture_locations.csv
Step5: ๅฎ่กใใใจใใใใใใฎใฌใณใผใใงWarningใ็บ็ใใพใใใ
ใใผใฟใใผในใซๅฑ้ใใใใฌใณใผใใฎlongitudeใ็ขบ่ชใใใจๆญฃใใๅฑ้ใใใฆใใใใใใใใงใฏWarningใฏ็ก่ฆใใพใใ
(็ขบ่ชใใฆใใใพใใใใ่กๆซใฎๆน่กใณใผใใWarningใฎๅๅ ใใใใใพใใใใใ)
- coupon_area_train.csv, coupon_area_test.csv
Step6: coupon_detail_train.csv
Step7: coupon_visit_train.csv
ใใฎใใกใคใซใฏใฌใณใผใๆฐใไธ็ชๅคใใใคใณใตใผใใๅฎไบใใใพใงๅฐใๆ้ใใใใใพใใ
Step8: coupon_list_train.csv, coupon_list_test.csv
2ใคใจใWarningใๅบใพใใใ
ๆฅๆใฎๅคใๆญฃใใใชใใใใซ็บ็ใใฆใใใ ใใชใฎใงใ็ก่ฆใใพใใ
ไธ่จใฎใฏใจใชใฎ
SQL
SET validperiod=IF(@validperiod = 'NA', Null, @validperiod)
ใๅคใใจใใใใใNULLใๅ
ฅใฃใฆใปใใ็ฎๆใซ0ใๅ
ฅใฃใฆใใพใใใใใใใงใฏใNULLใใฏใใใใใซๅคๆใใพใใ | Python Code:
# TODO: You Must Change the setting bellow
MYSQL = {
'user': 'root',
'passwd': '',
'db': 'coupon_purchase',
'host': '127.0.0.1',
'port': 3306,
'local_infile': True,
'charset': 'utf8',
}
DATA_DIR = '/home/nasuno/recruit_kaggle_datasets' # Do not use Japanese (multi-byte) characters in the directory name.
OUTPUTS_DIR = '/home/nasuno/recruit_kaggle/outputs' # Directory where prediction results and similar outputs are saved.
Explanation: Data Science Tutorial 01 @ Data Science Society
Kaoru Nasuno / The University of Tokyo
This is a tutorial for acquiring basic data science skills.
Using the dataset of the Kaggle competition RECRUIT Challenge, Coupon Purchase Prediction as the subject matter,
the goal is to touch on fundamental data science skills and build a foundation for understanding them.
(The goal is not to achieve high prediction accuracy.)
This material is still a work in progress; errors will be corrected and sections added in response to feedback. If you notice anything, please get in touch.
Target data
The dataset of the RECRUIT Challenge, Coupon Purchase Prediction.
Please register as a user, agree to the terms of use, and download it.
https://www.kaggle.com/c/coupon-purchase-prediction/data
How to proceed
First, copy and paste all of the code and confirm that it runs without errors.
If you get errors at this stage, the cause is most likely something unrelated to understanding the program itself,
such as an incomplete environment setup or missing parameter settings.
Once you have confirmed that everything runs, a recommended approach is to rewrite things one by one and work out how each piece behaves.
Table of contents
<span style="color: #FF0000;">Preparation</span>
<span style="color: #FF0000;">Loading the data into the database</span>
Clarifying the modelling target
Building and validating a predictive model with machine learning
Understanding the data and improving the predictive model
We will work through 1 and 2 here. For 3 onwards, please refer to Lecture 02 and later.
dependencies
Mac users:
bash
brew update;
pip install ipython;
pip install ipython[notebook];
brew install mariadb;
pip install MySQL-python;
pip install scikit-learn;
If mysql is not running, start the mysql process with the command below.
bash
mysqld_safe;
Please also install Sequel Pro ( http://www.sequelpro.com/ ), one of the MySQL clients.
1. Preparation
Creating the database
This tutorial uses MySQL (MariaDB), a relational database.
Here the database is named coupon_purchase; if you have not created it yet, run the following command in a terminal.
bash
echo 'CREATE DATABASE coupon_purchase; ' |mysql -uroot
If you have set a password for the root user, use
bash
echo 'CREATE DATABASE coupon_purchase; ' |mysql -uroot -pyourpassword
instead.
If you are running locally, you should then be able to access the database from Sequel Pro with settings like the ones below.
(If you have not set a MySQL password, leave the password field blank.)
<img src="files/sequel_pro.png" width="400px;"/>
Run everything below inside an IPython notebook.
You can start the IPython notebook by running the following command in a terminal.
bash
ipython notebook;
Once started, the IPython notebook opens in your browser.
Clicking New >> python2 (or New Notebook) creates a new Python notebook.
Setting the parameters
Specify the MySQL parameters such as the user name and password.
In most cases it should work if you just change user and passwd.
Also set the path of the directory containing the nine csv files you downloaded and unpacked.
(coupon_area_test.csv, coupon_list_test.csv, prefecture_locations.csv, coupon_area_train.csv, coupon_list_train.csv, sample_submission.csv, coupon_detail_train.csv, coupon_visit_train.csv, user_list.csv)
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import MySQLdb
import numpy
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split
from sklearn.metrics import f1_score, accuracy_score
from sklearn.linear_model import LogisticRegression
from datetime import datetime, timedelta
from itertools import product
# Random Seed
rng = numpy.random.RandomState(1234)
dbcon = MySQLdb.connect(**MYSQL)
dbcur = dbcon.cursor()
Explanation: Importing modules and initialising variables
Next, we import the modules used in this tutorial and initialise some of the variables.
python
%matplotlib inline
is a magic command specific to the IPython notebook.
It is not standard Python syntax; it configures the output of matplotlib, a plotting library, to be displayed in the browser.
(For now, just treat it as boilerplate.)
End of explanation
dbcur.execute('''DROP TABLE IF EXISTS user_list;''') # ใใฅใผใใชใขใซใฎไพฟๅฎไธใไธๅบฆๅ้คใใพใใ
query = '''
CREATE TABLE IF NOT EXISTS user_list (
reg_date DATETIME,
sex_id VARCHAR(1),
age INT,
withdraw_date DATETIME,
pref_name VARCHAR(15),
user_id_hash VARCHAR(32),
PRIMARY KEY(user_id_hash),
INDEX(reg_date),
INDEX(sex_id),
INDEX(age),
INDEX(withdraw_date),
INDEX(pref_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
'''
dbcur.execute(query)
Explanation: 2. Loading the data into the database
What is a database?
A database is something that lets you manage a variety of data in a purpose-oriented way
and reference/search that data efficiently.
A database contains multiple tables.
A table is much like a spreadsheet: each column has a name, and each row corresponds to one record.
Loading the data
Roughly speaking, loading the data consists of three steps:
1. creating the table
2. inserting into the table
3. checking for errors and warnings.
The table definitions are given on the Kaggle page, so here we create the tables exactly as defined there.
First, the query that creates the user_list table, and its execution.
For the MySQL CREATE TABLE syntax, see http://dev.mysql.com/doc/refman/5.6/ja/create-table.html .
End of explanation
csv_path = DATA_DIR + '/user_list.csv'
query = '''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE user_list
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(reg_date, sex_id, age,@withdraw_date, pref_name, user_id_hash)
SET
withdraw_date = IF(CHAR_LENGTH(@withdraw_date) != 19 , '9999-12-31 23:59:59', STR_TO_DATE(@withdraw_date, "%Y-%m-%d %H:%i:%s"))
;
'''
dbcur.execute(query)
Explanation: Next comes inserting the data.
To insert data into MySQL from dumped files such as csv files, use the LOAD DATA INFILE syntax.
For the LOAD DATA INFILE syntax, see http://dev.mysql.com/doc/refman/5.6/ja/load-data.html .
End of explanation
### prefecture_locations
csv_path = DATA_DIR + '/prefecture_locations.csv'
dbcur.execute('''DROP TABLE IF EXISTS prefecture_locations;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS prefecture_locations (
pref_name VARCHAR(15),
PRIMARY KEY(pref_name),
prefectual_office VARCHAR(15),
latitude DOUBLE,
longitude DOUBLE
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE prefecture_locations
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(pref_name, prefectual_office, latitude, longitude)
;
''')
Explanation: The CREATE TABLE statement used to create the table contains
not only the column definitions but also definitions of what are called indexes.
An index is something that speeds up data retrieval.
PRIMARY KEY
Assigned to a column that is unique within the table and that is used for lookups.
For example, user_id_hash in the user_list table is unique in that table and is frequently used to look up users, so it is better to give it a PRIMARY KEY.
INDEX
Assigned to a column that is not unique within the table but that is searched on. For example, if you want to search and aggregate users by sex or age and look at the proportions, it is better to put an index on columns such as sex_id and age.
TODO
Add explanations of MySQL functions and so on.
Exercise
For each of the other files below, create a table and insert the data in the same way.
- prefecture_locations.csv
- coupon_area_train.csv, coupon_area_test.csv
- coupon_detail_train.csv
- coupon_visit_train.csv
- coupon_list_train.csv, coupon_list_test.csv
Implementation example
prefecture_locations.csv
End of explanation
### coupon_area_train
csv_path = DATA_DIR + '/coupon_area_train.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_area_train;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_area_train (
small_area_name VARCHAR(32),
pref_name VARCHAR(15),
coupon_id_hash VARCHAR(32),
INDEX(coupon_id_hash),
INDEX(pref_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_area_train
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(small_area_name,pref_name,coupon_id_hash)
;
''')
### coupon_area_test
csv_path = DATA_DIR + '/coupon_area_test.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_area_test;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_area_test (
small_area_name VARCHAR(32),
pref_name VARCHAR(15),
coupon_id_hash VARCHAR(32),
INDEX(coupon_id_hash),
INDEX(pref_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_area_test
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(small_area_name,pref_name,coupon_id_hash)
;
''')
Explanation: When this runs, warnings are raised for some of the records.
If you check the longitude values of the records loaded into the database, they were loaded correctly, so we ignore the warnings here.
(We have not verified it, but the line-ending newline characters may be the cause of the warnings.)
- coupon_area_train.csv, coupon_area_test.csv
End of explanation
### coupon_detail_train
csv_path = DATA_DIR + '/coupon_detail_train.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_detail_train;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_detail_train (
item_count INT,
i_date DATETIME,
small_area_name VARCHAR(32),
purchaseid_hash VARCHAR(32),
user_id_hash VARCHAR(32),
coupon_id_hash VARCHAR(32),
INDEX(coupon_id_hash)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_detail_train
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(item_count, i_date, small_area_name, purchaseid_hash, user_id_hash, coupon_id_hash)
;
''')
Explanation: coupon_detail_train.csv
End of explanation
### coupon_visit_train
csv_path = DATA_DIR + '/coupon_visit_train.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_visit_train;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_visit_train (
purchase_flg INT,
i_date DATETIME,
page_serial INT,
referrer_hash VARCHAR(128),
view_coupon_id_hash VARCHAR(128),
user_id_hash VARCHAR(32),
session_id_hash VARCHAR(128),
purchaseid_hash VARCHAR(32),
INDEX(user_id_hash, i_date),
INDEX(i_date, user_id_hash),
INDEX(view_coupon_id_hash),
INDEX(purchaseid_hash),
INDEX(purchase_flg)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_visit_train
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(purchase_flg,i_date,page_serial,referrer_hash,view_coupon_id_hash,user_id_hash,session_id_hash,purchaseid_hash)
;
''')
Explanation: coupon_visit_train.csv
This file has the largest number of records, so the insert takes a little while to complete.
End of explanation
### coupon_list_train
csv_path = DATA_DIR + '/coupon_list_train.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_list_train;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_list_train (
capsule_text VARCHAR(20),
genre_name VARCHAR(50),
price_rate INT,
catalog_price INT,
discount_price INT,
dispfrom DATETIME,
dispend DATETIME,
dispperiod INT,
validfrom DATE,
validend DATE,
validperiod INT,
usable_date_mon VARCHAR(7),
usable_date_tue VARCHAR(7),
usable_date_wed VARCHAR(7),
usable_date_thu VARCHAR(7),
usable_date_fri VARCHAR(7),
usable_date_sat VARCHAR(7),
usable_date_sun VARCHAR(7),
usable_date_holiday VARCHAR(7),
usable_date_before_holiday VARCHAR(7),
large_area_name VARCHAR(30),
ken_name VARCHAR(8),
small_area_name VARCHAR(30),
coupon_id_hash VARCHAR(32),
PRIMARY KEY(coupon_id_hash),
INDEX(ken_name),
INDEX(genre_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_list_train
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(capsule_text,genre_name,price_rate,catalog_price,discount_price,dispfrom,dispend,dispperiod,validfrom,validend,@validperiod,usable_date_mon,usable_date_tue,usable_date_wed,usable_date_thu,usable_date_fri,usable_date_sat,usable_date_sun,usable_date_holiday,usable_date_before_holiday,large_area_name,ken_name,small_area_name,coupon_id_hash)
SET validperiod=IF(@validperiod = 'NA', Null, @validperiod)
;
''')
### coupon_list_test
csv_path = DATA_DIR + '/coupon_list_test.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_list_test;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_list_test (
capsule_text VARCHAR(20),
genre_name VARCHAR(50),
price_rate INT,
catalog_price INT,
discount_price INT,
dispfrom DATETIME,
dispend DATETIME,
dispperiod INT,
validfrom DATE,
validend DATE,
validperiod INT,
usable_date_mon VARCHAR(7),
usable_date_tue VARCHAR(7),
usable_date_wed VARCHAR(7),
usable_date_thu VARCHAR(7),
usable_date_fri VARCHAR(7),
usable_date_sat VARCHAR(7),
usable_date_sun VARCHAR(7),
usable_date_holiday VARCHAR(7),
usable_date_before_holiday VARCHAR(7),
large_area_name VARCHAR(30),
ken_name VARCHAR(8),
small_area_name VARCHAR(30),
coupon_id_hash VARCHAR(32),
PRIMARY KEY(coupon_id_hash),
INDEX(ken_name),
INDEX(genre_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_list_test
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(capsule_text,genre_name,price_rate,catalog_price,discount_price,dispfrom,dispend,dispperiod,validfrom,validend,@validperiod,usable_date_mon,usable_date_tue,usable_date_wed,usable_date_thu,usable_date_fri,usable_date_sat,usable_date_sun,usable_date_holiday,usable_date_before_holiday,large_area_name,ken_name,small_area_name,coupon_id_hash)
SET validperiod=IF(@validperiod = 'NA', Null, @validperiod)
;
''')
Explanation: coupon_list_train.csv, coupon_list_test.csv
Both of these produce warnings.
They occur only because some of the datetime values are not valid, so we ignore them.
If you remove the
SQL
SET validperiod=IF(@validperiod = 'NA', Null, @validperiod)
part of the query above, 0 ends up in the places where NULL should be, so here we convert the 'NA' values to NULL explicitly from the start.
End of explanation |
11,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 3
Due Date
Step1: Problem 2
Step2: Problem 3
This problem is related to the Lecture 4 exercises.
1. Open the languages.txt file. This file contains all the languages that students listed as their primary language in the course survey.
2. Load the language strings from the file into a list.
3. Use the Counter method from the collections library to count the number of occurrences of each element of the list.
+ NOTE
Step3: Problem 4
In chemical kinetics, the reaction rate coefficient for a given reaction depends on the temperature of the system. The functional relationship between the reaction rate coefficient and temperature is given by the Arrhenius rate
Step4: Problem 5
Using numpy arrays, plot $k\left(T\right)$ for $T\in\left(0, 5000\right]$ for three different sets of parameters $\left{A, b, E\right}$. Make sure all three lines are on the same figure and be sure to label each line. You may use the function from Problem 2. You may want to play with the parameters a little bit to get some nice curves but you won't lose points for ugly curves either (as long as they're correct!). | Python Code:
%%bash
cd /tmp
rm -rf playground
git clone https://github.com/crystalzhaizhai/playground.git
%%bash
cd /tmp/playground
git pull origin mybranch1
ls
%%bash
cd /tmp/playground
git status
%%bash
cd /tmp/playground
git reset --hard origin/master
ls
%%bash
cd /tmp/playground
git status
Explanation: Homework 3
Due Date: Tuesday, September 20th at 11:59 PM
Problem 1: Git and recovering from a mistake
You will do this problem in the Jupyter notebook so I can see your output. Once again, you will work with your playground repository.
NOTE: At the beginning of each cell, you MUST type %%bash. If you don't do that then you will not be able to work with the necessary bash commands.
Follow the following steps for this problem:
First cell:
Type cd /tmp to enter the temporary directory
git clone url_to_your_playground_repo
Second cell:
Go into your local playground directory (cd /tmp/playground)
Type git pull origin mybranch1
ls
Third cell:
Go into your local playground directory (cd /tmp/playground)
Type git status
Fourth cell:
Go into your local playground directory (cd /tmp/playground)
Type git reset --hard origin/master
ls
Fifth cell:
Go into your local playground directory (cd /tmp/playground)
Type git status
The whole point of this problem was to show you how to get your local repo back to an earlier state. In this exercise, you accidentally merged something to master that you didn't want. Rather than starting to delete things all over the place, you can simply reset your HEAD to a previous commit.
End of explanation
%%bash
cd /tmp/playground
cat .git/config
%%bash
cd /tmp/playground
git remote add course https://github.com/IACS-CS-207/playground.git
cat .git/config
%%bash
cd /tmp/playground
git fetch course master
%%bash
cd /tmp/playground
git checkout course/master -- README.md
cat README.md
%%bash
cd /tmp/playground
git add .
git commit -m "playgroundchange" -a
git status
git push
Explanation: Problem 2: Git and checking out a single file
Sometimes you don't want to merge an entire branch from the upstream but just one file from it. There is a direct use case for such a situation. Suppose I've made an error in this homework (or a lecture) and want to correct it. I fix the mistake in the upstream repo. In the meantime you have edited some other files and you really don't want to manually ignore my older copies of those files. Rather, you want to fix just one file from this new branch. This is how you do it.
As usual, be sure to type in %%bash before you write any bash commands in a cell.
Note: The steps below assume that you have already cloned the playground repo in this notebook.
First cell:
Go into the playground repo and fetch the changes from the master branch of the course remote.
Second cell:
git checkout course/master -- README.md. The -- means that README.md is a file (as opposed to a branch).
cat README.md. This just looks at the updated file.
Third cell:
git status
Commit the changes to your local repo with an appropriate commit message.
git status
Push the changes to your remote repo.
End of explanation
with open("../../lectures/L4/languages.txt","r") as f:
primary_course=f.read().split()
from collections import Counter
course_count=Counter(primary_course)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x_coords=np.arange(len(course_count))
total=np.sum(course_count.values())
freqs=course_count.values()
plt.xticks(x_coords,course_count.keys())
plt.bar(x_coords,freqs)
Explanation: Problem 3
This problem is related to the Lecture 4 exercises.
1. Open the languages.txt file. This file contains all the languages that students listed as their primary language in the course survey.
2. Load the language strings from the file into a list.
3. Use the Counter method from the collections library to count the number of occurrences of each element of the list.
+ NOTE: It is not necessary to use the most_common() method here.
4. Create a bar plot to display the frequency of each language. Be sure to label the x-axis!
+ Remember, to create plots in the notebook you must put the line %matplotlib inline at the beginning of your notebook.
+ Be sure to import matplotlib: import matplotlib.pyplot as plt.
+ To generate the bar plot write plt.bar(x_coords, freqs). You need to define x_coords and freqs.
+ Hint: You may want to use the numpy arange function to create x_coords. Remember, x_coords is the x-axis and it should have points for each distinct language.
+ Hint: To get freqs, you may want to use the values() method on your result from step 3. That is, freqs = result_from_3.values().
+ Hint: To label the x-axis you should use plt.xticks(x_coords, labels) where labels can be accessed through the keys() method on your result from step 3.
End of explanation
def kinetics(p,T, R=8.314):
import numpy as np
if len(p)<3:
print("Error! Less than 3 parameters")
return()
try:
k=p[0]*(T**p[1])*np.exp(-p[2]/(R*T))
return k
except ZeroDivisionError:
print("Error! Divided by 0")
return()
kinetics([1,2],0)
Explanation: Problem 4
In chemical kinetics, the reaction rate coefficient for a given reaction depends on the temperature of the system. The functional relationship between the reaction rate coefficient and temperature is given by the Arrhenius rate:
\begin{align}
k\left(T\right) = A T^{b}\exp\left(-\frac{E}{RT}\right)
\end{align}
where $A$, $b$, and $E$ are parameters, $R = 8.314 \dfrac{\textrm{J}}{\textrm{mol} \textrm{ K}}$ is the universal gas constant, and $T$ is the temperature.
Write a function which returns $k\left(T\right)$ given $A$, $b$, $E$, and $T$. Here are a few requirements:
* The function should test for exceptions where necessary.
* Pass the parameters $A$, $b$, and $E$ in as a list.
* Make $R$ a keyword argument to the function.
End of explanation
TT=np.arange(1,5000)
plt.plot(TT,kinetics([3,6,2],TT),'r')
plt.plot(TT,kinetics([4,5,6],TT),'g')
plt.plot(TT,kinetics([6,5,4],TT),'b')
plt.legend(["A,b,E=[3,6,2]","A,b,E=[4,5,6]","A,b,E=[6,5,4]"])
plt.xlabel("T")
plt.ylabel("k")
plt.title("kinetics")
%%bash
git add "HW3_final.ipynb"
git commit -m "HW3" -a
git status
git remote
git push origin master
Explanation: Problem 5
Using numpy arrays, plot $k\left(T\right)$ for $T\in\left(0, 5000\right]$ for three different sets of parameters $\left{A, b, E\right}$. Make sure all three lines are on the same figure and be sure to label each line. You may use the function from Problem 2. You may want to play with the parameters a little bit to get some nice curves but you won't lose points for ugly curves either (as long as they're correct!).
End of explanation |
11,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running pyqz II
First things first, let's start by importing pyqz and pyqz_plots.
Step1: D) Using custom MAPPINGS grids
While pyqz ships with a default set of HII region simulations from MAPPINGS, some (all!) users might be interested in using pyqz with their own specific sets of MAPPINGS simulations. pyqz was designed to be compatible with the grids generated from the awk script provided alongside MAPPINGS.
<b>If one uses the awk script to create new MAPPINGS grids</b>, the resulting .csv file must be placed inside pyqz.pyqzm.pyqz_grid_dir. The filename must match what the function pyqz.pyqzt.get_MVphotogrid_fn() expects for your given set of parameters, e.g. should you have run a MAPPINGS model for $\log$(P/k)=6.7, plane-parralel HII regions and $\kappa=10$, the resulting grid name for the .csv file must be
Step2: <b>If one does not use the awk script to generate the custom MAPPINGS grid</b>, then <i>just</i> make sure your model grid matches the format of existing model grids located in pyqz.pyqzm.pyqz_grid_dir ...
E) Resampling the original MAPPINGS grids
By default, 2 times resampled MAPPINGS grids are shipped with pyqz. These are generated using the function pyqz.pyqz_tools.resample_MVphotogrid(), which is straightforward to use
Step3: More densely resampled grids can then easily be created by varying the sampling keyword.
F) Projected 3-D line ratio diagrams
pyqz does support 2-D line ratio diagrams constructed from 3 sets of line ratios (i.e. 3-D line ratio diagrams projected to a given 2-D plane). For example, the diagnostic introduced in Dopita+ (2016) is | Python Code:
%matplotlib inline
import pyqz
import pyqz.pyqz_plots as pyqzp
Explanation: Running pyqz II
First things first, let's start by importing pyqz and pyqz_plots.
End of explanation
fn = pyqz.pyqzt.get_MVphotogrid_fn(Pk=6.7, calibs='GCZO', kappa =10, struct='pp')
print fn.split('/')[-1]
Explanation: D) Using custom MAPPINGS grids
While pyqz ships with a default set of HII region simulations from MAPPINGS, some (all!) users might be interested in using pyqz with their own specific sets of MAPPINGS simulations. pyqz was designed to be compatible with the grids generated from the awk script provided alongside MAPPINGS.
<b>If one uses the awk script to create new MAPPINGS grids</b>, the resulting .csv file must be placed inside pyqz.pyqzm.pyqz_grid_dir. The filename must match what the function pyqz.pyqzt.get_MVphotogrid_fn() expects for your given set of parameters, e.g. should you have run a MAPPINGS model for $\log$(P/k)=6.7, plane-parralel HII regions and $\kappa=10$, the resulting grid name for the .csv file must be:
End of explanation
grid_fn = pyqz.pyqzt.get_MVphotogrid_fn(Pk=5.0,struct='sph', kappa='inf')
pyqz.pyqzt.resample_MVphotogrid(grid_fn, sampling=2)
Explanation: <b>If one does not use the awk script to generate the custom MAPPINGS grid</b>, then <i>just</i> make sure your model grid matches the format of existing model grids located in pyqz.pyqzm.pyqz_grid_dir ...
E) Resampling the original MAPPINGS grids
By default, 2 times resampled MAPPINGS grids are shipped with pyqz. These are generated using the function pyqz.pyqz_tools.resample_MVphotogrid(), which is straightforward to use:
End of explanation
pyqzp.plot_grid('[NII]/[SII]+;[NII]/Ha;[OIII]/Hb',
coeffs = [[1.0,0.264,0.0],[0.242,-0.910,0.342]],
struct='pp',
sampling=1)
Explanation: More densely resampled grids can then easily be created by varying the sampling keyword.
F) Projected 3-D line ratio diagrams
pyqz does support 2-D line ratio diagrams constructed from 3 sets of line ratios (i.e. 3-D line ratio diagrams projected to a given 2-D plane). For example, the diagnostic introduced in Dopita+ (2016) is:
End of explanation |
11,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<H1>Migration velocity</H1>
<P> To compute the velocity of the trajectories of several particles, we generated a file with the 3D coordinates (Position X, Position Y and Position Z) acquired every 10 minutes.
Step1: <H2>Show basic file information</H2>
Step3: <H2>Compute euclidian distances </H2>
Step4: <H2>Velocities</H2>
<P>This is simply the distance if sampling time is constant </P>
Step5: <H2>Particle information</H2>
Step6: <H2>Show normalized speeds</H2>
Step7: <H2>Fourier transform</H2> | Python Code:
%pylab inline
import pandas as pd
# read CSV file in pandas
mydf = pd.read_csv('.data/Julie_R1_Bef_S4_cell123_Position.csv', skiprows=2)
mydf.head()
Explanation: <H1>Migration velocity</H1>
<P> To compute the velocity of the trajectories of several particles, we generated a file with the 3D coordinates (Position X, Position Y and Position Z) acquired every 10 minutes.
End of explanation
# get basic information
print('Number of samples %d'%len(mydf))
print('Number of particles = %d'%len(mydf['TrackID'].unique()))
print('Distance units = %s'%mydf['Unit'][0])
# get TrackIDs
TrackID = mydf['TrackID'].unique()
# select only locations, sampling points and TrackIDs
df = mydf[['Position X','Position Y', 'Position Z', 'Time','TrackID']]
df0 = df.loc[df['TrackID'] == TrackID[0]]
df1 = df.loc[df['TrackID'] == TrackID[1]]
df2 = df.loc[df['TrackID'] == TrackID[2]]
counter = 0
for i in TrackID:
mysize = len( df.loc[df['TrackID'] == i] )
counter +=mysize
print('Number of samples in TrackID = %d is %d'%(i,mysize))
print('Total number of samples %d'%counter)
df0.head() # show first values of first particle
# collect a list of 3d coordinates
P0 = zip(df0['Position X'], df0['Position Y'], df0['Position Z'])
P1 = zip(df1['Position X'], df1['Position Y'], df1['Position Z'])
P2 = zip(df2['Position X'], df2['Position Y'], df2['Position Z'])
P0[0] # test the values are correct
Explanation: <H2>Show basic file information</H2>
End of explanation
def distance(myarray):
    """Calculate the distance between 2 3D coordinates along the
    axis of the numpy array."""
# slice() method is useful for large arrays
# see diff in ./local/lib/python2.7/site-packages/numpy/lib/function_base.py
a = np.asanyarray(myarray)
slice1 = [slice(None)] # create a slice type object
slice2 = [slice(None)]
slice1[-1] = slice(1, None) # like array[1:]
slice2[-1] = slice(None, -1) # like array[:-1]
slice1 = tuple(slice1)
slice2 = tuple(slice2)
# calculate sqrt( dx^2 + dy^2 + dz^2)
sum_squared = np.sum( np.power(a[slice2]-a[slice1],2), axis=1)
return np.sqrt( sum_squared)
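# A quick sanity check of the helper on made-up coordinates (illustrative only):
print(distance(np.array([[0., 0., 0.], [3., 4., 0.]])))  # expected: [ 5.]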
Explanation: <H2>Compute Euclidean distances</H2>
End of explanation
# retrieve time vector
#dt = 10 # sampling interval in minutes
dt = 0.1666 # sampling interval in hours
t0 = df0['Time'].values*dt
print(len(t0))
D0 = distance(P0) # in um
S0 = D0/10. # speed in um/min
t0 = t0[:-1] # when ploting speeds we do not need the last sampling point
plt.plot(t0, S0, color = '#006400')
plt.ylabel('Speed (um/min)'),
plt.xlabel('Time (hours)')
plt.title('Particle %d'%TrackID[0]);
Explanation: <H2>Velocities</H2>
<P>This is simply the distance if sampling time is constant </P>
End of explanation
print('Track duration %2.4f min'%(len(t0)*10.))
print('total traveled distances = %2.4f um'%np.sum(D0))
print('total average speed = %2.4f um/min'%S0.mean())
# retrieve time vector and calculate speed
dt = 0.1666 # sampling interval in hours
t1 = df1['Time'].values*dt
D1 = distance(P1) # in um
S1 = D1/10. #um/min
t1 = t1[:-1]
plt.plot(t1, S1, color = '#4169E1')
plt.ylabel('Speed (um/min)'),
plt.xlabel('Time (hours)')
plt.title('Particle %d'%TrackID[1]);
print('Track duration %2.4f min'%(len(t1)*10.))
print('total traveled distances = %2.4f um'%np.sum(D1))
print('total average speed = %2.4f um/min'%S1.mean())
# retrieve time vector and calculate speed
dt = 0.1666 # sampling interval in hours
t2 = df2['Time'].values*dt
D2 = distance(P2) # in um
S2 = D2/10. #um/min
t2 = t2[:-1]
plt.plot(t2, S2, color = '#800080')
plt.xlabel('Time (hours)')
plt.ylabel('Speed (um/min)'), plt.title('Particle %d'%TrackID[2]);
print('Track duration %2.4f min'%(len(t2)*10.))
print('total traveled distances = %2.4f um'%np.sum(D2))
print('total average speed = %2.4f um/min'%S2.mean())
#Overlap
plt.plot(t0, S0, color = '#006400');
plt.plot(t1, S1, color = '#4169E1');
plt.plot(t2, S2, color = '#800080');
plt.xlabel('Time (hours)');
plt.ylabel('Speed (um/min)'), plt.title('All Particles');
Explanation: <H2>Particle information</H2>
End of explanation
S0_norm = S0/np.max(S0)
S1_norm = S1/np.max(S1)
S2_norm = S2/np.max(S2)
#Overlap
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
ax1.plot(t0, S0_norm, color = 'darkgreen', alpha=0.5)
ax2.plot(t1, S1_norm, color = 'royalblue')
ax3.plot(t2, S2_norm, color = 'purple')
#ax3.plot(np.arange(1500), mysin, color= 'cyan')
ax3.set_xlabel('Time (hours)');
for ax in fig.axes:
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
ax.get_yaxis().set_visible(False)
ax.get_xaxis().set_visible(False)
#ax.axis('Off')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax3.get_xaxis().set_visible(True)
ax.get_xaxis().set_ticks(np.arange(0,25,5))
ax3.spines['bottom'].set_visible(True)
ax3.spines['left'].set_visible(True)
Explanation: <H2>Show normalized speeds</H2>
End of explanation
n = len(S0) # length of the signal
k = np.arange(n)
T = n*dt
frq = k/T # two sides frequency range
frq = frq[range(n/2)] # one side frequency range
Y0 = np.fft.fft(S0)/n # fft computing and normalization
Y0 = Y0[range(n/2)]
plt.plot(frq, abs(Y0),color = 'darkgreen') # plotting the spectrum
plt.xlabel('Freq (1/hours)')
plt.ylabel('|Y(freq)|')
#plt.ylim(ymax=0.02)
n = len(S1) # length of the signal
k = np.arange(n)
T = n*dt
frq = k/T # two sides frequency range
frq = frq[range(n/2)] # one side frequency range
Y1 = np.fft.fft(S1)/n # fft computing and normalization
Y1 = Y1[range(n/2)]
plt.plot(frq, abs(Y0),color = 'darkgreen') # plotting the spectrum
plt.plot(frq, abs(Y1),color = 'royalblue') # plotting the spectrum
plt.xlabel('Freq (1/hours)')
plt.ylabel('|Y(freq)|')
plt.ylim(ymax = 0.1)
Explanation: <H2>Fourier transform</H2>
End of explanation |
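# Optional refactor (added sketch): the one-sided spectrum computation above is
# repeated per particle, so it can be wrapped in a small helper.
def single_sided_spectrum(signal, dt):
    n = len(signal)
    half = n // 2                       # keep only the positive frequencies
    frq = np.arange(n) / (n * dt)       # frequency axis, in 1/hours here
    Y = np.fft.fft(signal) / n          # FFT with 1/n normalization
    return frq[:half], np.abs(Y[:half])
# e.g. frq2, A2 = single_sided_spectrum(S2, dt) for the third particle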
11,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prerequisites
Install Theano and Lasagne using the following commands
Step1: Data loading
Step2: Network definition
Step3: Define the update rule, how to train
Step4: Compile
Step5: Training (a bit simplified)
Step6: Test phase
Now that the model is train it is enough to take the fwd function and apply it to new data. | Python Code:
import sys
import os
import numpy as np
import scipy.io
import time
import theano
import theano.tensor as T
import theano.sparse as Tsp
import lasagne as L
import lasagne.layers as LL
import lasagne.objectives as LO
from lasagne.layers.normalization import batch_norm
sys.path.append('..')
from icnn import aniso_utils_lasagne, dataset, snapshotter
Explanation: Prerequisites
Install Theano and Lasagne using the following commands:
bash
pip install -r https://raw.githubusercontent.com/Lasagne/Lasagne/master/requirements.txt
pip install https://github.com/Lasagne/Lasagne/archive/master.zip
Working in a virtual environment is recommended.
Data preparation
Current code allows to generate geodesic patches from a collection of shapes represented as triangular meshes.
To get started with the pre-processing:
git clone https://github.com/jonathanmasci/ShapeNet_data_preparation_toolbox.git
The usual processing pipeline is show in run_forrest_run.m.
We will soon update this preparation stage, so perhaps better to start with our pre-computed dataset, and stay tuned! :-)
Prepared data
All it is required to train on the FAUST_registration dataset for this demo is available for download at
https://www.dropbox.com/s/aamd98nynkvbcop/EG16_tutorial.tar.bz2?dl=0
ICNN Toolbox
bash
git clone https://github.com/jonathanmasci/EG16_tutorial.git
End of explanation
base_path = '/home/shubham/Desktop/IndependentStudy/EG16_tutorial/dataset/FAUST_registrations/data/diam=200/'
# train_txt, test_txt, descs_path, patches_path, geods_path, labels_path, ...
# desc_field='desc', patch_field='M', geod_field='geods', label_field='labels', epoch_size=100
ds = dataset.ClassificationDatasetPatchesMinimal(
'FAUST_registrations_train.txt', 'FAUST_registrations_test.txt',
os.path.join(base_path, 'descs', 'shot'),
os.path.join(base_path, 'patch_aniso', 'alpha=100_nangles=016_ntvals=005_tmin=6.000_tmax=24.000_thresh=99.900_norm=L1'),
None,
os.path.join(base_path, 'labels'),
epoch_size=50)
# inp = LL.InputLayer(shape=(None, 544))
# print(inp.input_var)
# patch_op = LL.InputLayer(input_var=Tsp.csc_fmatrix('patch_op'), shape=(None, None))
# print(patch_op.shape)
# print(patch_op.input_var)
# icnn = LL.DenseLayer(inp, 16)
# print(icnn.output_shape)
# print(icnn.output_shape)
# desc_net = theano.dot(patch_op, icnn)
Explanation: Data loading
End of explanation
nin = 544
nclasses = 6890
l2_weight = 1e-5
def get_model(inp, patch_op):
icnn = LL.DenseLayer(inp, 16)
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 16, nscale=5, nangl=16))
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 32, nscale=5, nangl=16))
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 64, nscale=5, nangl=16))
ffn = batch_norm(LL.DenseLayer(icnn, 512))
ffn = LL.DenseLayer(icnn, nclasses, nonlinearity=aniso_utils_lasagne.log_softmax)
return ffn
inp = LL.InputLayer(shape=(None, nin))
patch_op = LL.InputLayer(input_var=Tsp.csc_fmatrix('patch_op'), shape=(None, None))
ffn = get_model(inp, patch_op)
# L.layers.get_output -> theano variable representing network
output = LL.get_output(ffn)
pred = LL.get_output(ffn, deterministic=True) # in case we use dropout
# target theano variable indicatind the index a vertex should be mapped to wrt the latent space
target = T.ivector('idxs')
# to work with logit predictions, better behaved numerically
cla = aniso_utils_lasagne.categorical_crossentropy_logdomain(output, target, nclasses).mean()
acc = LO.categorical_accuracy(pred, target).mean()
# a bit of regularization is commonly used
regL2 = L.regularization.regularize_network_params(ffn, L.regularization.l2)
cost = cla + l2_weight * regL2
Explanation: Network definition
End of explanation
params = LL.get_all_params(ffn, trainable=True)
grads = T.grad(cost, params)
# computes the L2 norm of the gradient to better inspect training
grads_norm = T.nlinalg.norm(T.concatenate([g.flatten() for g in grads]), 2)
# Adam turned out to be a very good choice for correspondence
updates = L.updates.adam(grads, params, learning_rate=0.001)
Explanation: Define the update rule, how to train
End of explanation
funcs = dict()
funcs['train'] = theano.function([inp.input_var, patch_op.input_var, target],
[cost, cla, l2_weight * regL2, grads_norm, acc], updates=updates,
on_unused_input='warn')
funcs['acc_loss'] = theano.function([inp.input_var, patch_op.input_var, target],
[acc, cost], on_unused_input='warn')
funcs['predict'] = theano.function([inp.input_var, patch_op.input_var],
[pred], on_unused_input='warn')
Explanation: Compile
End of explanation
n_epochs = 50
eval_freq = 1
start_time = time.time()
best_trn = 1e5
best_tst = 1e5
kvs = snapshotter.Snapshotter('demo_training.snap')
for it_count in xrange(n_epochs):
tic = time.time()
b_l, b_c, b_s, b_r, b_g, b_a = [], [], [], [], [], []
for x_ in ds.train_iter():
tmp = funcs['train'](*x_)
# do some book keeping (store stuff for training curves etc)
b_l.append(tmp[0])
b_c.append(tmp[1])
b_r.append(tmp[2])
b_g.append(tmp[3])
b_a.append(tmp[4])
epoch_cost = np.asarray([np.mean(b_l), np.mean(b_c), np.mean(b_r), np.mean(b_g), np.mean(b_a)])
print(('[Epoch %03i][trn] cost %9.6f (cla %6.4f, reg %6.4f), |grad| = %.06f, acc = %7.5f %% (%.2fsec)') %
(it_count, epoch_cost[0], epoch_cost[1], epoch_cost[2], epoch_cost[3], epoch_cost[4] * 100,
time.time() - tic))
if np.isnan(epoch_cost[0]):
print("NaN in the loss function...let's stop here")
break
if (it_count % eval_freq) == 0:
v_c, v_a = [], []
for x_ in ds.test_iter():
tmp = funcs['acc_loss'](*x_)
v_a.append(tmp[0])
v_c.append(tmp[1])
test_cost = [np.mean(v_c), np.mean(v_a)]
print((' [tst] cost %9.6f, acc = %7.5f %%') % (test_cost[0], test_cost[1] * 100))
if epoch_cost[0] < best_trn:
kvs.store('best_train_params', [it_count, LL.get_all_param_values(ffn)])
best_trn = epoch_cost[0]
if test_cost[0] < best_tst:
kvs.store('best_test_params', [it_count, LL.get_all_param_values(ffn)])
best_tst = test_cost[0]
print("...done training %f" % (time.time() - start_time))
Explanation: Training (a bit simplified)
End of explanation
rewrite = True
out_path = '/tmp/EG16_tutorial/dumps/'
print "Saving output to: %s" % out_path
if not os.path.isdir(out_path) or rewrite==True:
try:
os.makedirs(out_path)
except:
pass
a = []
for i,d in enumerate(ds.test_iter()):
fname = os.path.join(out_path, "%s" % ds.test_fnames[i])
print fname,
tmp = funcs['predict'](d[0], d[1])[0]
a.append(np.mean(np.argmax(tmp, axis=1).flatten() == d[2].flatten()))
scipy.io.savemat(fname, {'desc': tmp})
print ", Acc: %7.5f %%" % (a[-1] * 100.0)
print "\nAverage accuracy across all shapes: %7.5f %%" % (np.mean(a) * 100.0)
else:
print "Model predictions already produced."
Explanation: Test phase
Now that the model is trained, it is enough to take the fwd function and apply it to new data.
End of explanation |
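# Added sketch: Lasagne can snapshot and restore the network weights around the
# test phase, e.g. to evaluate the best model seen during training rather than
# the last one (retrieving the stored values back out of the Snapshotter is left
# out here, since its API is specific to this tutorial).
best_params = LL.get_all_param_values(ffn)    # capture the current weights
# ... later, to roll the network back to exactly these weights:
LL.set_all_param_values(ffn, best_params)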
11,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Zipline beginner tutorial
Basics
Zipline is an open-source algorithmic trading simulator written in Python.
The source can be found at
Step1: As you can see, we first have to import some functions we would like to use. All functions commonly used in your algorithm can be found in zipline.api. Here we are using order() which takes two arguments -- a security object, and a number specifying how many stocks you would like to order (if negative, order() will sell/short stocks). In this case we want to order 10 shares of Apple at each iteration. For more documentation on order(), see the Quantopian docs.
You don't have to use the symbol() function and could just pass in AAPL directly but it is good practice as this way your code will be Quantopian compatible.
Finally, the record() function allows you to save the value of a variable at each iteration. You provide it with a name for the variable together with the variable itself
Step2: Note that you have to omit the preceding '!' when you call run_algo.py, this is only required by the IPython Notebook in which this tutorial was written.
As you can see there are a couple of flags that specify where to find your algorithm (-f) as well as parameters specifying which stock data to load from Yahoo! finance (--symbols) and the time-range (--start and --end). Finally, you'll want to save the performance metrics of your algorithm so that you can analyze how it performed. This is done via the --output flag and will cause it to write the performance DataFrame in the pickle Python file format. Note that you can also define a configuration file with these parameters that you can then conveniently pass to the -c option so that you don't have to supply the command line args all the time (see the .conf files in the examples directory).
Thus, to execute our algorithm from above and save the results to buyapple_out.pickle we would call run_algo.py as follows
Step3: run_algo.py first outputs the algorithm contents. It then fetches historical price and volume data of Apple from Yahoo! finance in the desired time range, calls the initialize() function, and then streams the historical stock price day-by-day through handle_data(). After each call to handle_data() we instruct zipline to order 10 stocks of AAPL. After the call of the order() function, zipline enters the ordered stock and amount in the order book. After the handle_data() function has finished, zipline looks for any open orders and tries to fill them. If the trading volume is high enough for this stock, the order is executed after adding the commission and applying the slippage model which models the influence of your order on the stock price, so your algorithm will be charged more than just the stock price * 10. (Note, that you can also change the commission and slippage model that zipline uses, see the Quantopian docs for more information).
Note that there is also an analyze() function printed. run_algo.py will try and look for a file with the ending with _analyze.py and the same name of the algorithm (so buyapple_analyze.py) or an analyze() function directly in the script. If an analyze() function is found it will be called after the simulation has finished and passed in the performance DataFrame. (The reason for allowing specification of an analyze() function in a separate file is that this way buyapple.py remains a valid Quantopian algorithm that you can copy&paste to the platform).
Let's take a quick look at the performance DataFrame. For this, we use pandas from inside the IPython Notebook and print the first ten rows. Note that zipline makes heavy use of pandas, especially for data input and output, so it's worth spending some time learning it.
Step4: As you can see, there is a row for each trading day, starting on the first business day of 2000. In the columns you can find various information about the state of your algorithm. The very first column AAPL was placed there by the record() function mentioned earlier and allows us to plot the price of apple. For example, we could easily examine now how our portfolio value changed over time compared to the AAPL stock price.
Step5: As you can see, our algorithm performance as assessed by the portfolio_value closely matches that of the AAPL stock price. This is not surprising as our algorithm only bought AAPL every chance it got.
IPython Notebook
The IPython Notebook is a very powerful browser-based interface to a Python interpreter (this tutorial was written in it). As it is already the de-facto interface for most quantitative researchers zipline provides an easy way to run your algorithm inside the Notebook without requiring you to use the CLI.
To use it you have to write your algorithm in a cell and let zipline know that it is supposed to run this algorithm. This is done via the %%zipline IPython magic command that is available after you import zipline from within the IPython Notebook. This magic takes the same arguments as the command line interface described above. Thus to run the algorithm from above with the same parameters we just have to execute the following cell after importing zipline to register the magic.
Step6: Note that we did not have to specify an input file as above since the magic will use the contents of the cell and look for your algorithm functions there. Also, instead of defining an output file we are specifying a variable name with -o that will be created in the name space and contain the performance DataFrame we looked at above.
Step7: Manual (advanced)
If you are happy with either way above you can safely skip this passage. To provide a closer look at how zipline actually works it is instructive to see how we run an algorithm without any of the interfaces demonstrated above which hide the actual zipline API.
Step8: As you can see, we again define the functions as above but we manually pass them to the TradingAlgorithm class which is the main zipline class for running algorithms. We also manually load the data using load_bars_from_yahoo() and pass it to the TradingAlgorithm.run() method which kicks off the backtest simulation.
Access to previous prices using history
Working example | Python Code:
!tail ../zipline/examples/buyapple.py
Explanation: Zipline beginner tutorial
Basics
Zipline is an open-source algorithmic trading simulator written in Python.
The source can be found at: https://github.com/quantopian/zipline
Some benefits include:
Realistic: slippage, transaction costs, order delays.
Stream-based: Process each event individually, avoids look-ahead bias.
Batteries included: Common transforms (moving average) as well as common risk calculations (Sharpe).
Developed and continuously updated by Quantopian which provides an easy-to-use web-interface to Zipline, 10 years of minute-resolution historical US stock data, and live-trading capabilities. This tutorial is directed at users wishing to use Zipline without using Quantopian. If you instead want to get started on Quantopian, see here.
This tutorial assumes that you have zipline correctly installed, see the installation instructions if you haven't set up zipline yet.
Every zipline algorithm consists of two functions you have to define:
* initialize(context)
* handle_data(context, data)
Before the start of the algorithm, zipline calls the initialize() function and passes in a context variable. context is a persistent namespace for you to store variables you need to access from one algorithm iteration to the next.
After the algorithm has been initialized, zipline calls the handle_data() function once for each event. At every call, it passes the same context variable and an event-frame called data containing the current trading bar with open, high, low, and close (OHLC) prices as well as volume for each stock in your universe. For more information on these functions, see the relevant part of the Quantopian docs.
My first algorithm
Let's take a look at a very simple algorithm from the examples directory, buyapple.py:
End of explanation
!run_algo.py --help
Explanation: As you can see, we first have to import some functions we would like to use. All functions commonly used in your algorithm can be found in zipline.api. Here we are using order() which takes two arguments -- a security object, and a number specifying how many stocks you would like to order (if negative, order() will sell/short stocks). In this case we want to order 10 shares of Apple at each iteration. For more documentation on order(), see the Quantopian docs.
You don't have to use the symbol() function and could just pass in AAPL directly but it is good practice as this way your code will be Quantopian compatible.
Finally, the record() function allows you to save the value of a variable at each iteration. You provide it with a name for the variable together with the variable itself: varname=var. After the algorithm finished running you will have access to each variable value you tracked with record() under the name you provided (we will see this further below). You also see how we can access the current price data of the AAPL stock in the data event frame (for more information see here.
Running the algorithm
To now test this algorithm on financial data, zipline provides two interfaces. A command-line interface and an IPython Notebook interface.
Command line interface
After you installed zipline you should be able to execute the following from your command line (e.g. cmd.exe on Windows, or the Terminal app on OSX):
End of explanation
!run_algo.py -f ../zipline/examples/buyapple.py --start 2000-1-1 --end 2014-1-1 --symbols AAPL -o buyapple_out.pickle
Explanation: Note that you have to omit the preceding '!' when you call run_algo.py, this is only required by the IPython Notebook in which this tutorial was written.
As you can see there are a couple of flags that specify where to find your algorithm (-f) as well as parameters specifying which stock data to load from Yahoo! finance (--symbols) and the time-range (--start and --end). Finally, you'll want to save the performance metrics of your algorithm so that you can analyze how it performed. This is done via the --output flag and will cause it to write the performance DataFrame in the pickle Python file format. Note that you can also define a configuration file with these parameters that you can then conveniently pass to the -c option so that you don't have to supply the command line args all the time (see the .conf files in the examples directory).
Thus, to execute our algorithm from above and save the results to buyapple_out.pickle we would call run_algo.py as follows:
End of explanation
import pandas as pd
perf = pd.read_pickle('buyapple_out.pickle') # read in perf DataFrame
perf.head()
Explanation: run_algo.py first outputs the algorithm contents. It then fetches historical price and volume data of Apple from Yahoo! finance in the desired time range, calls the initialize() function, and then streams the historical stock price day-by-day through handle_data(). After each call to handle_data() we instruct zipline to order 10 stocks of AAPL. After the call of the order() function, zipline enters the ordered stock and amount in the order book. After the handle_data() function has finished, zipline looks for any open orders and tries to fill them. If the trading volume is high enough for this stock, the order is executed after adding the commission and applying the slippage model which models the influence of your order on the stock price, so your algorithm will be charged more than just the stock price * 10. (Note, that you can also change the commission and slippage model that zipline uses, see the Quantopian docs for more information).
Note that there is also an analyze() function printed. run_algo.py will try and look for a file with the ending with _analyze.py and the same name of the algorithm (so buyapple_analyze.py) or an analyze() function directly in the script. If an analyze() function is found it will be called after the simulation has finished and passed in the performance DataFrame. (The reason for allowing specification of an analyze() function in a separate file is that this way buyapple.py remains a valid Quantopian algorithm that you can copy&paste to the platform).
Let's take a quick look at the performance DataFrame. For this, we use pandas from inside the IPython Notebook and print the first ten rows. Note that zipline makes heavy use of pandas, especially for data input and output, so it's worth spending some time learning it.
End of explanation
%pylab inline
figsize(12, 12)
import matplotlib.pyplot as plt
ax1 = plt.subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('portfolio value')
ax2 = plt.subplot(212, sharex=ax1)
perf.AAPL.plot(ax=ax2)
ax2.set_ylabel('AAPL stock price')
Explanation: As you can see, there is a row for each trading day, starting on the first business day of 2000. In the columns you can find various information about the state of your algorithm. The very first column AAPL was placed there by the record() function mentioned earlier and allows us to plot the price of apple. For example, we could easily examine now how our portfolio value changed over time compared to the AAPL stock price.
End of explanation
import zipline
%%zipline --start 2000-1-1 --end 2014-1-1 --symbols AAPL -o perf_ipython
from zipline.api import symbol, order, record
def initialize(context):
pass
def handle_data(context, data):
order(symbol('AAPL'), 10)
record(AAPL=data[symbol('AAPL')].price)
Explanation: As you can see, our algorithm performance as assessed by the portfolio_value closely matches that of the AAPL stock price. This is not surprising as our algorithm only bought AAPL every chance it got.
IPython Notebook
The IPython Notebook is a very powerful browser-based interface to a Python interpreter (this tutorial was written in it). As it is already the de-facto interface for most quantitative researchers zipline provides an easy way to run your algorithm inside the Notebook without requiring you to use the CLI.
To use it you have to write your algorithm in a cell and let zipline know that it is supposed to run this algorithm. This is done via the %%zipline IPython magic command that is available after you import zipline from within the IPython Notebook. This magic takes the same arguments as the command line interface described above. Thus to run the algorithm from above with the same parameters we just have to execute the following cell after importing zipline to register the magic.
End of explanation
perf_ipython.head()
Explanation: Note that we did not have to specify an input file as above since the magic will use the contents of the cell and look for your algorithm functions there. Also, instead of defining an output file we are specifying a variable name with -o that will be created in the name space and contain the performance DataFrame we looked at above.
End of explanation
import pytz
from datetime import datetime
from zipline.algorithm import TradingAlgorithm
from zipline.utils.factory import load_bars_from_yahoo
# Load data manually from Yahoo! finance
start = datetime(2000, 1, 1, 0, 0, 0, 0, pytz.utc)
end = datetime(2012, 1, 1, 0, 0, 0, 0, pytz.utc)
data = load_bars_from_yahoo(stocks=['AAPL'], start=start,
end=end)
# Define algorithm
def initialize(context):
pass
def handle_data(context, data):
order(symbol('AAPL'), 10)
record(AAPL=data[symbol('AAPL')].price)
# Create algorithm object passing in initialize and
# handle_data functions
algo_obj = TradingAlgorithm(initialize=initialize,
handle_data=handle_data)
# Run algorithm
perf_manual = algo_obj.run(data)
Explanation: Manual (advanced)
If you are happy with either way above you can safely skip this passage. To provide a closer look at how zipline actually works it is instructive to see how we run an algorithm without any of the interfaces demonstrated above which hide the actual zipline API.
End of explanation
%%zipline --start 2000-1-1 --end 2014-1-1 --symbols AAPL -o perf_dma
from zipline.api import order_target, record, symbol, history, add_history
import numpy as np
def initialize(context):
# Register 2 histories that track daily prices,
# one with a 100 window and one with a 300 day window
add_history(100, '1d', 'price')
add_history(300, '1d', 'price')
context.i = 0
def handle_data(context, data):
# Skip first 300 days to get full windows
context.i += 1
if context.i < 300:
return
# Compute averages
# history() has to be called with the same params
# from above and returns a pandas dataframe.
short_mavg = history(100, '1d', 'price').mean()
long_mavg = history(300, '1d', 'price').mean()
# Trading logic
if short_mavg[0] > long_mavg[0]:
# order_target orders as many shares as needed to
# achieve the desired number of shares.
order_target(symbol('AAPL'), 100)
elif short_mavg[0] < long_mavg[0]:
order_target(symbol('AAPL'), 0)
# Save values for later inspection
record(AAPL=data[symbol('AAPL')].price,
short_mavg=short_mavg[0],
long_mavg=long_mavg[0])
def analyze(context, perf):
fig = plt.figure()
ax1 = fig.add_subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('portfolio value in $')
ax2 = fig.add_subplot(212)
perf['AAPL'].plot(ax=ax2)
perf[['short_mavg', 'long_mavg']].plot(ax=ax2)
perf_trans = perf.ix[[t != [] for t in perf.transactions]]
buys = perf_trans.ix[[t[0]['amount'] > 0 for t in perf_trans.transactions]]
sells = perf_trans.ix[
[t[0]['amount'] < 0 for t in perf_trans.transactions]]
ax2.plot(buys.index, perf.short_mavg.ix[buys.index],
'^', markersize=10, color='m')
ax2.plot(sells.index, perf.short_mavg.ix[sells.index],
'v', markersize=10, color='k')
ax2.set_ylabel('price in $')
plt.legend(loc=0)
plt.show()
Explanation: As you can see, we again define the functions as above but we manually pass them to the TradingAlgorithm class which is the main zipline class for running algorithms. We also manually load the data using load_bars_from_yahoo() and pass it to the TradingAlgorithm.run() method which kicks off the backtest simulation.
Access to previous prices using history
Working example: Dual Moving Average Cross-Over
The Dual Moving Average (DMA) is a classic momentum strategy. It's probably not used by any serious trader anymore but is still very instructive. The basic idea is that we compute two rolling or moving averages (mavg) -- one with a longer window that is supposed to capture long-term trends and one shorter window that is supposed to capture short-term trends. Once the short-mavg crosses the long-mavg from below we assume that the stock price has upwards momentum and long the stock. If the short-mavg crosses from above we exit the positions as we assume the stock to go down further.
As we need to have access to previous prices to implement this strategy we need a new concept: History
history() is a convenience function that keeps a rolling window of data for you. The first argument is the number of bars you want to collect, the second argument is the unit (either '1d' or '1m', but note that you need minute-level data for using '1m'). For a more detailed description of history()'s features, see the Quantopian docs. While you can directly use the history() function on Quantopian, in zipline you have to register each history container you want to use with add_history() and pass it the same arguments as the history function below. Let's look at the strategy, which should make this clear:
End of explanation |
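# Added sketch (not from the original tutorial): the recorded moving averages in
# perf_dma can be sanity-checked against a plain pandas rolling mean of the
# recorded AAPL prices (assumes a pandas version with the .rolling() API;
# older pandas used pd.rolling_mean instead).
short_check = perf_dma['AAPL'].rolling(100).mean()
long_check = perf_dma['AAPL'].rolling(300).mean()
# short_check / long_check should track perf_dma['short_mavg'] and
# perf_dma['long_mavg'] closely over the backtest period.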
11,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Excel - Paste Import
sometimes we just want to import Excel data by pasting it (it pastes as \n separated rows, where the fields are separated by \t
Ranges
Step2: Export
the table below can be pasted into Excel; then apply Text to Columns... from the Data menu to expand it into a full range (must choose delimited and comma for this to work)
Step5: Rows or Colums
Step6: Export
the first emits rows that paste directly into Excel; for the second, like about Text to Columns... has to be used | Python Code:
data_string =
1 2 3 4
11 12 13 14
21 22 23 24
31 32 33 34
41 42 43 44
51 52 53 54
data_string = data_string.strip()
data = [line.split('\t') for line in data_string.split('\n')]
data
data_f = [list(map(float,line.split('\t'))) for line in data_string.split('\n')]
data_f
data_i = [list(map(int,line.split('\t'))) for line in data_string.split('\n')]
data_i
data_i = [list(map(int, row)) for row in data]
data_i
Explanation: Excel - Paste Import
Sometimes we just want to import Excel data by pasting it (it pastes as \n separated rows, where the fields are separated by \t).
Ranges
End of explanation
str0 = "\n".join(",".join(map(str, data_row)) for data_row in data_i)
print (str0)
Explanation: Export
the table below can be pasted into Excel; then apply Text to Columns... from the Data menu to expand it into a full range (must choose delimited and comma for this to work)
End of explanation
data_string =
1
11
21
31
41
51
data_string = data_string.strip()
#data_string =
#1 2 3 4
#
#data_string = data_string.strip()
import re
data = re.split("\t|\n", data_string)
data
data_f = list(map(float,re.split("\t|\n", data_string)))
data_f
data_i = list(map(int,re.split("\t|\n", data_string)))
data_i
Explanation: Rows or Columns
End of explanation
str0 = "\n".join(map(str, data_i))
print (str0)
str0 = ",".join(map(str, data_i))
print (str0)
Explanation: Export
The first emits rows that paste directly into Excel; for the second, as above, Text to Columns... has to be used.
End of explanation |
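# Added sketch (not part of the original notebook): if pandas is available, the
# clipboard round-trip can replace the manual splitting above (requires a
# clipboard backend, e.g. xclip/xsel on Linux).
import pandas as pd
pd.DataFrame(data_i).to_clipboard(index=False, header=False)  # ready to paste into Excel
# and in the other direction, after copying a range in Excel:
# pasted = pd.read_clipboard(sep='\t', header=None)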
11,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Epochs
Step2: Evoked
Step3: Challenge
Step4: Machine learning approach
Step5: Performing linear regression
Step6: Inspecting the weights
Step7: What's going on here?
https
Step8: The data covariance
Step9: Shrinking the covariance
Step10: Post-hoc modification of the model
Step11: The pattern matrix
Step12: Modifying the pattern matrix
<img src="kernel.png" width="400">
Step13: Post-hoc modifying the pattern in the model
Step14: To find out more, read the paper!
https
Step15: Automatic optimization
Step16: Feature selection vs. Pattern modification | Python Code:
import mne
epochs = mne.read_epochs('subject04-epo.fif')
epochs.metadata
Explanation: <a href="https://mybinder.org/v2/gh/wmvanvliet/neuroscience_tutorials/master?filepath=posthoc%2Flinear_regression.ipynb" target="_new" style="float: right"><img src="qr.png" alt="https://mybinder.org/v2/gh/wmvanvliet/neuroscience_tutorials/master?filepath=posthoc%2Flinear_regression.ipynb"></a>
Marijn van Vliet
A deep dive into linear models
tiny.cc/deepdive
Loading the data
End of explanation
epochs.plot(n_channels=32, n_epochs=10);
Explanation: Epochs: snippets of EEG data
End of explanation
unrelated = epochs['FAS < 0.1'].average()
related = epochs['FAS > 0.1'].average()
mne.viz.plot_evoked_topo([related, unrelated]);
Explanation: Evoked: averaging across epochs
End of explanation
ROI = epochs.copy()
ROI.pick_channels(['P3', 'Pz', 'P4'])
ROI.crop(0.3, 0.47)
FAS_pred = ROI.get_data().mean(axis=(1, 2))
from scipy.stats import pearsonr
print('Performance: %.2f' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
Explanation: Challenge:
Deduce the memory priming effect for a word-pair, given the EEG epoch
Naive approach: average signal in ROI
End of explanation
print(epochs.get_data().shape)
X = epochs.get_data().reshape(200, 32 * 60)
y = epochs.metadata['FAS'].values
from sklearn.preprocessing import normalize
X = normalize(X)
print('X:', X.shape)
print('y:', y.shape)
Explanation: Machine learning approach: linear regression
End of explanation
from sklearn.linear_model import LinearRegression
model = LinearRegression().fit(X, y)
FAS_pred = model.predict(X)
print('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
from sklearn.model_selection import cross_val_predict
FAS_pred = cross_val_predict(model, X, y, cv=10)
print('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
Explanation: Performing linear regression
End of explanation
model.fit(X, y)
weights = model.coef_.reshape(32, 60)
ev = mne.EvokedArray(weights, epochs.info, tmin=epochs.times[0], comment='weights')
ev.plot_topo();
Explanation: Inspecting the weights
End of explanation
from posthoc import Workbench
model = Workbench(LinearRegression())
model.fit(X, y)
cov_X = X.T @ X / len(X)
pattern = model.pattern_
normalizer = model.normalizer_
Explanation: What's going on here?
https://users.aalto.fi/~vanvlm1/posthoc/regression.html
The post-hoc framework
Data covariance matrix
Haufe pattern matrix
Normalizer
End of explanation
from matplotlib import pyplot as plt
plt.matshow(cov_X, cmap='magma')
# Show channel names
plt.xticks(range(0, 32 * 60, 60), epochs.ch_names, rotation=90)
plt.yticks(range(0, 32 * 60, 60), epochs.ch_names);
Explanation: The data covariance
End of explanation
import numpy as np
# Amount of shrinkage
alpha = 0.75
# Shrinkage formula
shrinkage_target = np.identity(32 * 60) * np.trace(cov_X) / len(cov_X)
cov_X_mod = alpha * shrinkage_target + (1 - alpha) * cov_X
# Plot shrunk covariance
plt.matshow(cov_X_mod, cmap='magma')
plt.xticks(range(0, 32 * 60, 60), epochs.ch_names, rotation=90)
plt.yticks(range(0, 32 * 60, 60), epochs.ch_names);
Explanation: Shrinking the covariance
End of explanation
from posthoc.cov_estimators import ShrinkageKernel
model = Workbench(LinearRegression(), cov=ShrinkageKernel(alpha=0.97))
FAS_pred = cross_val_predict(model, X, y, cv=10)
print('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
Explanation: Post-hoc modification of the model
End of explanation
pattern_ev = mne.EvokedArray(pattern.reshape(32, 60), epochs.info, epochs.times[0], comment='pattern')
pattern_ev.plot_topo();
Explanation: The pattern matrix
End of explanation
import numpy as np
def pattern_modifier(pattern, X_train=None, y_train=None, mu=0.36, sigma=0.06):
pattern = pattern.reshape(32, 60)
# Define mu and sigma in samples
mu = np.searchsorted(epochs.times, mu)
sigma = sigma * epochs.info['sfreq']
# Formula for Gaussian curve
kernel = np.exp(-0.5 * ((np.arange(60) - mu) / sigma) ** 2)
return (pattern * kernel).ravel()
pattern_mod = pattern_modifier(pattern)
pattern_mod = mne.EvokedArray(pattern_mod.reshape(32, 60), epochs.info, epochs.times[0], comment='pattern')
pattern_mod.plot_topo();
Explanation: Modifying the pattern matrix
<img src="kernel.png" width="400">
End of explanation
model = Workbench(LinearRegression(), cov=ShrinkageKernel(0.97), pattern_modifier=pattern_modifier)
FAS_pred = cross_val_predict(model, X, y, cv=10)
print('Performance: %.2f (to beat: 0.30, 0.35)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
Explanation: Post-hoc modifying the pattern in the model
End of explanation
print(normalizer)
Explanation: To find out more, read the paper!
https://www.biorxiv.org/content/10.1101/518662v2
Marijn van Vliet & Riitta Salmelin
Post-hoc modification of linear models: combining machine learning with domain information to make solid inferences from noisy data
NeuroImage (2020)
For more interactive neuroscience tutorials:
https://github.com/wmvanvliet/neuroscience_tutorials
The normalizer
End of explanation
def scorer(model, X, y):
return pearsonr(model.predict(X), y)[0]
from posthoc import WorkbenchOptimizer
model = WorkbenchOptimizer(LinearRegression(), cov=ShrinkageKernel(0.95),
pattern_modifier=pattern_modifier, pattern_param_x0=[0.4, 0.05], pattern_param_bounds=[(0, 0.8), (0.01, 0.5)],
scoring=scorer)
model.fit(X, y)
print('Optimal parameters: alpha=%.3f, mu=%.3f, sigma=%.3f'
% tuple(model.cov_params_ + model.pattern_modifier_params_))
Explanation: Automatic optimization
End of explanation
import numpy as np
def modify_X(X, X_train=None, y_train=None, mu=0.36, sigma=0.06):
X = X.reshape(200, 32, 60)
# Define mu and sigma in samples
mu = np.searchsorted(epochs.times, mu)
sigma = sigma * epochs.info['sfreq']
# Formula for Gaussian curve
kernel = np.exp(-0.5 * ((np.arange(60) - mu) / sigma) ** 2)
return (X * kernel).reshape(200, -1)
X_mod = modify_X(X)
model = LinearRegression()
FAS_pred = cross_val_predict(model, X_mod, y, cv=10)
print('LR performance: %.2f (to beat: 0.30, 0.38)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
model = Workbench(LinearRegression(), cov=ShrinkageKernel(alpha=0.97))
FAS_pred = cross_val_predict(model, X_mod, y, cv=10)
print('Shrinkage LR performance: %.2f (to beat: 0.30, 0.38)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
Explanation: Feature selection vs. Pattern modification
End of explanation |
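# Added sketch (illustration only, not the exact computation done by posthoc):
# for a linear model the Haufe et al. (2014) activation pattern is proportional
# to the data covariance times the weight vector, which is what makes
# model.pattern_ interpretable where the raw weights are not.
w = LinearRegression().fit(X, y).coef_
Xc = X - X.mean(axis=0)
pattern_haufe = (Xc.T @ Xc / len(X)) @ w
# pattern_haufe.reshape(32, 60) can be visualized just like `pattern` above.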
11,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Python-implementation-of-Finding-a-"Kneedle"-in-a-Haystack
Step1: Example 1
$$ X \sim N(50, 10) $$
Knee point(expected)
Step2: Example 2
$$ y = -1/x + 5$$
Knee point(expected) | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import seaborn as sns
from scipy.interpolate import UnivariateSpline
import matplotlib.pyplot as plt
sns.set_style('white')
np.random.seed(42)
def draw_plot(X, Y, knee_point=None):
plt.plot(X, Y)
if knee_point:
plt.axvline(x=knee_point, color='k', linestyle='--')
mu = 50
sigma = 10
S = 1
n = 1000
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Python-implementation-of-Finding-a-"Kneedle"-in-a-Haystack:-Detecting-Knee-Points-in-System-Behavior" data-toc-modified-id="Python-implementation-of-Finding-a-"Kneedle"-in-a-Haystack:-Detecting-Knee-Points-in-System-Behavior-1"><span class="toc-item-num">1 </span>Python implementation of <a href="https://www1.icsi.berkeley.edu/~barath/papers/kneedle-simplex11.pdf" target="_blank">Finding a "Kneedle" in a Haystack: Detecting Knee Points in System Behavior</a></a></div><div class="lev2 toc-item"><a href="#Example-1" data-toc-modified-id="Example-1-11"><span class="toc-item-num">1.1 </span>Example 1</a></div><div class="lev2 toc-item"><a href="#Example-2" data-toc-modified-id="Example-2-12"><span class="toc-item-num">1.2 </span>Example 2</a></div>
# Python implementation of [Finding a "Kneedle" in a Haystack: Detecting Knee Points in System Behavior](https://www1.icsi.berkeley.edu/~barath/papers/kneedle-simplex11.pdf)
End of explanation
X = np.random.normal(mu, sigma, n)
sorted_X = np.sort(X)
Y = np.arange(len(X))/float(len(sorted_X))
def _locate(Y_d, T_lm, maxima_ids):
n = len(Y_d)
for j in range(0, n):
for index, i in enumerate(maxima_ids):
if j <= i:
continue
if Y_d[j] <= T_lm[index]:
return index
def find_knee_point(X, Y, S):
n = len(X)
spl = UnivariateSpline(X, Y)
X_s = np.linspace(np.min(X), np.max(X), n)
Y_s = spl(X_s)
X_sn = (X_s - np.min(X_s)) / (np.max(X_s) - np.min(X_s))
Y_sn = (Y_s - np.min(Y_s)) / (np.max(Y_s) - np.min(Y_s))
X_d = X_sn
Y_d = Y_sn - X_sn
X_lm = []
Y_lm = []
maxima_ids = []
for i in range(1, n - 1):
if (Y_d[i] > Y_d[i - 1] and Y_d[i] > Y_d[i + 1]):
X_lm.append(X_d[i])
Y_lm.append(Y_d[i])
maxima_ids.append(i)
T_lm = Y_lm - S * np.sum(np.diff(X_sn)) / (n - 1)
knee_point_index = _locate(Y_d, T_lm, maxima_ids)
knee_point = X_lm[knee_point_index] * (np.max(X_s) - np.min(X_s)
) + np.min(X_s)
return knee_point, Y_d
knee_point, yd = find_knee_point(sorted_X, Y, S)
draw_plot(sorted_X, Y, knee_point)
knee_point
Explanation: Example 1
$$ X \sim N(50, 10) $$
Knee point(expected) : $\mu+\sigma=60$
Knee point(simulation) : 66
End of explanation
x = np.linspace(0.1,1,10)
y = -1/x+5
knee_point, _ = find_knee_point(x, y, S)
draw_plot(x, y, knee_point)
knee_point
Explanation: Example 2
$$ y = -1/x + 5$$
Knee point(expected): $0.22$
Knee point(simulation): $0.4$
End of explanation |
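# Added sketch: the detection criterion becomes visible if we plot the
# normalized difference curve that find_knee_point returns for Example 1
# (the knee is where the curve falls back below its local-maximum threshold).
kp, y_diff = find_knee_point(sorted_X, Y, S)
x_axis = np.linspace(np.min(sorted_X), np.max(sorted_X), len(y_diff))
plt.plot(x_axis, y_diff, label='difference curve')
plt.axvline(x=kp, color='k', linestyle='--', label='detected knee')
plt.legend();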
11,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Newtonian Tidal Disruption of Compact Binaries
We expect certain types of LIGO signals to have electromagnetic (EM) counterparts — bright, transient explosions visible to optical, radio, or high-energy telescopes at or sometime near the time of the gravitational wave signal. For example, up to 5-10% of neutron star binary mergers (either NS-NS or NS-BH) will probably produce detectable short-duration $\gamma$-ray bursts, and up to 20-50% of them will have optical or radio "afterglows" that last for a few days after the merger (Metzger & Berger, 2012). This happens because, during certain NS-BH mergers, the neutron star can be destroyed by tidal forces as it spirals toward the black hole, forming an accretion disk that powers the EM counterpart. But under what circumstances can this happen?
The following diagram illustrates the situation
Step1: Next, we'll need to define some physical constants (particularly $G$ and $c$) and set the neutron star mass to a typical value of $m_{\rm NS} = 1.4\,M_{\odot}$. We'll also want to choose a range of neutron star radii, which will act as a proxy for its equation of state.
Step2: Now, it makes sense to define a couple of functions to do our heavy lifting. The first of these should locate the radius of the innermost stable circular orbit (or ISCO) given a black hole mass ($m_{\rm BH}$) and spin angular momentum (represented by the dimensionless number $\chi_{\rm BH} = G\vert\mathbf{S}{\rm BH}\vert/Gm{\rm BH}^2$). According to general relativity, this happens when objects in the binary are separated by a distance $a_{\rm ISCO}$ given by
\begin{equation}
f(\chi_{\rm BH}) \equiv \frac{c^2a_{\rm ISCO}}{Gm_{\rm BH}} = 3 + Z_2 - {\rm sgn}(\chi_{\rm BH}) \sqrt{(3-Z_1)(3+Z_1+2Z_2)}
\end{equation}
where $Z_1$ and $Z_2$ are defined as
\begin{align}
Z_1 &= 1 + (1 - \chi_{\rm BH}^2)^{1/3} \left[ (1+\chi_{\rm BH}^2)^{1/3} + (1-\chi_{\rm BH}^2)^{1/3} \right] \
Z_2 &= \sqrt{3\chi_{\rm BH}^2 + Z_1^2}.
\end{align}
Step3: Note
Step4: Now we're ready for the main body of the code, which simply makes repeated calls to our two functions aISCO and disrupt inside of a loop
Step5: To visualize the relationship between tidal disruption and mass ratio, we can make a couple of plots. First, we want to know how the ratio between $a_{\rm td}$ and $a_{\rm ISCO}$ scales with the mass ratio $q = m_{\rm BH}/m_{\rm NS}$. Put simply, if tidal disruption occurs outside of ISCO then an accretion disk will form; otherwise, the neutron star plunges in without forming a disk (and thus, without producing an EM counterpart). So, the interesting question is
Step6: From this plot, we can convince ourselves that destroying a neutron star becomes easier as the neutron star gets less dense, which might not be surprising — but it also becomes easier as the black hole spins faster! Would it blow your mind if I told you that this happens because, when a black hole spins, it drags space and time along with it, causing even more extreme gravity? When you pick a fight with a raging monster the size of a city spinning faster than a kitchen blender, you're going to lose that fight.
Our second plot visualizes where the boundary between forming a disk and not forming a disk lies as a function of neutron star radius and black hole mass and spin. Remember
Step7: Note that this (very simplified!) model only uses Newtonian gravity, but it turns out to be a pretty good match for the results of Foucart, 2012. This is because two competing relativistic effects roughly cancel each other out | Python Code:
# Imports.
import numpy as np
from numpy import pi
import matplotlib.pyplot as plt
from matplotlib import ticker
%matplotlib inline
Explanation: Newtonian Tidal Disruption of Compact Binaries
We expect certain types of LIGO signals to have electromagnetic (EM) counterparts — bright, transient explosions visible to optical, radio, or high-energy telescopes at or sometime near the time of the gravitational wave signal. For example, up to 5-10% of neutron star binary mergers (either NS-NS or NS-BH) will probably produce detectable short-duration $\gamma$-ray bursts, and up to 20-50% of them will have optical or radio "afterglows" that last for a few days after the merger (Metzger & Berger, 2012). This happens because, during certain NS-BH mergers, the neutron star can be destroyed by tidal forces as it spirals toward the black hole, forming an accretion disk that powers the EM counterpart. But under what circumstances can this happen?
The following diagram illustrates the situation:
In this Jupyter notebook, we'll use Newtonian gravity to investigate the basic physics of tidal disruption when a neutron star of radius $R_{\rm NS}$ and mass $m_{\rm NS}$ spirals into a black hole of mass $m_{\rm BH}$, whose spin angular momentum $\mathbf{S}_{\rm BH}$ is aligned with the total orbital angular momentum. In particular, we'll try to understand when and how tidal disruption results in an accretion disk, and how this relates to the black hole's rate of spin. For a more detailed look at accretion-driven EM counterparts, we'll then compare this against e.g. Foucart, 2012.
First, we'll need the Python modules numpy and matplotlib:
End of explanation
# Physical constants.
G = 6.67408e-11 # Newton's constant in m^3 / kg / s
MSun = 1.989e30 # Solar mass in kg
c = 299792458. # Speed of light in m/s
m_NS = 1.4*MSun # NS mass in kg
R_NS = np.array([11e3, 12e3, 13e3]) # Neutron star radii to try, in meters
Explanation: Next, we'll need to define some physical constants (particularly $G$ and $c$) and set the neutron star mass to a typical value of $m_{\rm NS} = 1.4\,M_{\odot}$. We'll also want to choose a range of neutron star radii, which will act as a proxy for its equation of state.
End of explanation
# Define a function that locates ISCO given the BH mass and spin.
def aISCO(m, chi):
Z1 = 1 + (1 - chi**2)**(1./3) * ((1 + chi)**(1./3) + (1 - chi)**(1./3))
Z2 = np.sqrt(3*chi**2 + Z1**2)
f = 3 + Z2 - np.sign(chi) * np.sqrt((3 - Z1) * (3 + Z1 + 2*Z2))
return f * G * m / c**2
Explanation: Now, it makes sense to define a couple of functions to do our heavy lifting. The first of these should locate the radius of the innermost stable circular orbit (or ISCO) given a black hole mass ($m_{\rm BH}$) and spin angular momentum (represented by the dimensionless number $\chi_{\rm BH} = G\vert\mathbf{S}{\rm BH}\vert/Gm{\rm BH}^2$). According to general relativity, this happens when objects in the binary are separated by a distance $a_{\rm ISCO}$ given by
\begin{equation}
f(\chi_{\rm BH}) \equiv \frac{c^2a_{\rm ISCO}}{Gm_{\rm BH}} = 3 + Z_2 - {\rm sgn}(\chi_{\rm BH}) \sqrt{(3-Z_1)(3+Z_1+2Z_2)}
\end{equation}
where $Z_1$ and $Z_2$ are defined as
\begin{align}
Z_1 &= 1 + (1 - \chi_{\rm BH}^2)^{1/3} \left[ (1+\chi_{\rm BH}^2)^{1/3} + (1-\chi_{\rm BH}^2)^{1/3} \right] \
Z_2 &= \sqrt{3\chi_{\rm BH}^2 + Z_1^2}.
\end{align}
End of explanation
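# Quick sanity check (added sketch): in units of G*m/c^2 the ISCO radius should
# come out close to the classic values 6 (chi = 0), 1 (chi -> +1, prograde) and
# 9 (chi -> -1, retrograde).
for chi in (0.0, 0.999, -0.999):
    print('chi = %+.3f -> a_ISCO = %.2f G*m/c^2' % (chi, aISCO(MSun, chi) * c**2 / (G * MSun)))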
# Define a function for locating the tidal disruption point.
def disrupt(m1, R, m2=1.4*MSun, tol=1e-4):
M = m1 + m2 # total mass in kg
M0 = G * M / c**2 # total mass in m
mu = m1 * m2 / M # reduced mass in kg
a_test = np.linspace(M0, 24*M0, int(1/tol))
fgrav = G * mu / R**2 # NS self-gravity
ftide = -G * M * (1/(a_test - R)**2 - 1/(a_test + R)**2) # tidal force due to BH
ftot = fgrav + ftide # total force
return a_test[ np.abs(ftot).argmin() ]
Explanation: Note: when you use this function, remember it is vitally important that $-1 < \chi_{\rm BH} < 1$, where a negative number means the black hole spins opposite the direction of the orbit. If you step outside of these limits, you will rip a hole through space and time. This would be very bad. Do not rip a hole through space and time.
The second function will locate the orbital separation at which tidal diruption occurs, $a_{\rm td}$. This is roughly identified as the point in the inspiral where tidal stresses acting on the neutron star due to the black hole overcome its own self-gravity, ripping this giant ultra-compact space atom to smithereens. To identify this point, remember that tidal stresses arise because of a nonzero force gradient across the neutron star. Along the axis separating both objects, when they are a distance $a$ apart, this is given by
\begin{equation}
f_{\rm tide} = -GM\left[ \frac{1}{(a - R_{\rm NS})^2} - \frac{1}{(a + R_{\rm NS})^2} \right]
\end{equation}
where $M = m_{\rm BH} + m_{\rm NS}$ is the total mass of the system. The neutron star self-gravity is of course
\begin{equation}
f_{\rm grav} = \frac{G\mu}{R_{\rm NS}^2}
\end{equation}
where $\mu = m_{\rm BH}m_{\rm NS}/M$ is the reduced mass. Thus, we can compute an array of force gradients at several test values of $a$, compare them to the self-gravity, then find a zero-crossing:
End of explanation
# Set array of primary masses to try.
masses = np.linspace(1.4*MSun, 18*m_NS, 1000)
atd = [ np.array([]) for i in xrange(len(R_NS)) ]
# Find the disruption point for each primary mass and several different spins.
for i in xrange(len(R_NS)):
for m_BH in masses:
atd[i] = np.append( atd[i], disrupt(m_BH, R_NS[i]) )
Explanation: Now we're ready for the main body of the code, which simply makes repeated calls to our two functions aISCO and disrupt inside of a loop:
End of explanation
# Plot the ratio of a_td/a_ISCO as a function of direct mass ratio.
fig = plt.figure( figsize=(6, 3) )
ax = fig.add_subplot(1, 1, 1)
ls = ['solid', 'dashed', 'dashdot']
for i in xrange(len(R_NS)):
ax.plot(masses/m_NS, atd[i]/aISCO(masses, 0), 'k', linestyle=ls[i], linewidth=2.,
label=r'$R_{\rm NS} =$ %.1f km' % (R_NS[i]/1e3))
ax.plot(masses/m_NS, atd[i]/aISCO(masses, 0.7), 'DeepSkyBlue', linestyle=ls[i])
ax.plot(masses/m_NS, atd[i]/aISCO(masses, 0.9), 'Tomato', linestyle=ls[i])
ax.plot([0, 25], [1, 1], 'k--', linewidth=0.5)
ax.plot(masses/m_NS, atd[0]/aISCO(masses, 0.7), 'DeepSkyBlue', label=r'$\chi_{\rm BH}$ = 0.7')
ax.plot(masses/m_NS, atd[0]/aISCO(masses, 0.9), 'Tomato', label=r'$\chi_{\rm BH} =$ 0.9')
ax.set_xlim([1, 15])
ax.set_xlabel(r'$m_{\rm BH}/m_{\rm NS}$')
ax.set_ylim([0.75, 1.5])
ax.set_ylabel(r'$a_{\rm td}/a_{\rm ISCO}$')
ax.xaxis.set_major_formatter(ticker.FormatStrFormatter("%d"))
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("%.1f"))
ax.legend(loc=1, fontsize=11, fancybox=True)
fig.tight_layout()
plt.savefig('disruption_point.pdf')
Explanation: To visualize the relationship between tidal disruption and mass ratio, we can make a couple of plots. First, we want to know how the ratio between $a_{\rm td}$ and $a_{\rm ISCO}$ scales with the mass ratio $q = m_{\rm BH}/m_{\rm NS}$. Put simply, if tidal disruption occurs outside of ISCO then an accretion disk will form; otherwise, the neutron star plunges in without forming a disk (and thus, without producing an EM counterpart). So, the interesting question is: at what mass ratio is $a_{\rm td}/a_{\rm ISCO} = 1$?
End of explanation
# For each NS radius, plot the boundary where a_td = a_ISCO as a function of BH mass and spin.
chi_BH = np.linspace(0, 0.999, 100) # range of BH spins
q = [ np.array([]) for i in xrange(len(R_NS)) ]
for x in chi_BH:
ISCO = aISCO(masses, x)
for i in xrange(len(R_NS)):
q[i] = np.append( q[i], masses[np.abs(atd[i]/ISCO - 1).argmin()] / m_NS )
fig = plt.figure( figsize=(6, 4) )
ax = fig.add_subplot(1, 1, 1)
for i in xrange(len(R_NS)):
ax.plot(q[i], chi_BH, 'k', linestyle=ls[i], linewidth=2.,
label=r'$R_{\rm NS} =$ %.1f km' % (R_NS[i]/1e3))
ax.set_xlim([1, 15])
ax.set_xlabel(r'$m_{\rm BH}/m_{\rm NS}$')
ax.set_ylim([0, 1])
ax.set_ylabel(r'$\chi_{\rm BH}$')
ax.annotate('EM-dark', xy=(10, 0.6), xycoords='data', size=12, ha="center", va="center")
ax.annotate('EM-bright', xy=(3, 0.8), xycoords='data', size=12, ha="center", va="center")
ax.xaxis.set_major_formatter(ticker.FormatStrFormatter("%d"))
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("%.1f"))
ax.legend(loc=4, fontsize=11, fancybox=True)
fig.tight_layout()
plt.savefig('qchi_diagram.pdf')
Explanation: From this plot, we can convince ourselves that destroying a neutron star becomes easier as the neutron star gets less dense, which might not be surprising — but it also becomes easier as the black hole spins faster! Would it blow your mind if I told you that this happens because, when a black hole spins, it drags space and time along with it, causing even more extreme gravity? When you pick a fight with a raging monster the size of a city spinning faster than a kitchen blender, you're going to lose that fight.
Our second plot visualizes where the boundary between forming a disk and not forming a disk lies as a function of neutron star radius and black hole mass and spin. Remember: if there's a disk, there will be a bright explosion; if there isn't a disk, the system stays dark.
End of explanation
def q_Foucart(chi1, m1, R2, m2=1.4*MSun):
const = 0.288 / 0.148 # Foucart's alpha/beta
C = G * m2 / (c**2 * R2) # compactness of the neutron star
f = aISCO(c**2/G, chi1)
return 3**0.5 * (const * (1/f) * (1 - 2*C) / C)**(3./2)
fig = plt.figure( figsize=(6, 4) )
ax = fig.add_subplot(1, 1, 1)
m_BH = np.linspace(1.4*MSun, 18*m_NS, len(chi_BH)) # we don't need as many points as before
for i in xrange(len(R_NS)):
ax.plot(q[i], chi_BH, 'Silver', linestyle=ls[i], linewidth=1.5)
ax.plot(q_Foucart(chi_BH, m_BH, R_NS[i]), chi_BH, 'k', linestyle=ls[i], linewidth=2.,
label=r'$R_{\rm NS} =$ %.1f km' % (R_NS[i]/1e3))
ax.set_xlim([1, 15])
ax.set_xlabel(r'$m_{\rm BH}/m_{\rm NS}$')
ax.set_ylim([0, 1])
ax.set_ylabel(r'$\chi_{\rm BH}$')
ax.annotate('EM-dark', xy=(10, 0.6), xycoords='data', size=12, ha="center", va="center")
ax.annotate('EM-bright', xy=(3, 0.8), xycoords='data', size=12, ha="center", va="center")
ax.xaxis.set_major_formatter(ticker.FormatStrFormatter("%d"))
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("%.1f"))
ax.legend(loc=4, fontsize=11, fancybox=True)
fig.tight_layout()
plt.savefig('qchi_diagram_foucart.pdf')
Explanation: Note that this (very simplified!) model only uses Newtonian gravity, but it turns out to be a pretty good match for the results of Foucart, 2012. This is because two competing relativistic effects roughly cancel each other out: a compact object like a neutron star is harder to break in general relativity, but rotating black holes also have stronger tides. To see this, we can compare our model to Foucart's prediction for the mass of the accretion disk, Eq. (6):
\begin{equation}
\frac{m_{\rm disk}}{m_{\rm NS}} = \alpha (3q)^{1/3} \left(1 - \frac{2Gm_{\rm NS}}{c^2R_{\rm NS}}\right) - \beta \frac{a_{\rm ISCO}}{R_{\rm NS}}
\end{equation}
where $\alpha = 0.288$ and $\beta = 0.148$ are fitting parameters to a set of numerical simulations. The boundary between EM-bright and EM-dark occurs where $m_{\rm disk} = 0$, which relates $q$ to $a_{\rm ISCO}$ at fixed $R_{\rm NS}$:
\begin{equation}
q = \frac{1}{3} \left(\frac{\beta}{\alpha}\frac{a_{\rm ISCO}}{R_{\rm NS}}\right)^3 \left(1 - \frac{2Gm_{\rm NS}}{c^2R_{\rm NS}}\right)^{-3}.
\end{equation}
But watch out! There's a hidden dependence on $q$ on the right-hand side. Remember that $f(\chi_{\rm BH}) \equiv c^2a_{\rm ISCO}/Gm_{\rm BH}$ is a function of the black hole spin, so we can pull out a factor of $m_{\rm BH} = qm_{\rm NS}$ to get
\begin{equation}
q = 3^{1/2} \left[ \frac{\alpha}{\beta} \frac{1}{f(\chi_{\rm BH})} \frac{1-2C_{\rm NS}}{C_{\rm NS}} \right]^{3/2}.
\end{equation}
In this equation, $C_{\rm NS}=Gm_{\rm NS}/c^2R_{\rm NS}$ quantifies how compact the neutron star is.
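As a quick numeric sanity check of that last expression (illustrative numbers only, reusing the aISCO, G and c definitions from earlier cells), a fairly large neutron star with compactness $C_{\rm NS} = 0.15$ (roughly a 13.8 km radius for a 1.4 $M_\odot$ star) and a black hole spin of $\chi_{\rm BH} = 0.9$ should give a boundary mass ratio of roughly 13:
```python
# Illustrative check of the boundary formula; aISCO, G and c are the ones defined earlier.
C_NS = 0.15                            # hypothetical compactness, ~13.8 km for a 1.4 MSun star
chi = 0.9                              # rapidly spinning black hole
f = aISCO(c**2/G, chi)                 # dimensionless ISCO radius, c^2 a_ISCO / (G m_BH)
q_boundary = 3**0.5 * ((0.288/0.148) * (1/f) * (1 - 2*C_NS) / C_NS)**1.5
print(q_boundary)                      # ~13: a disk (and an EM counterpart) survives up to q ~ 13
```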
Still with me? It's now possible to make an apples-to-apples comparison between Foucart's model and ours:
End of explanation |
11,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classes & Object Oriented Programming
Object-oriented programming, or programming using classes, is an abstraction that applies the same rules we use for categorizing nature to programming techniques.
The advantage is that a program meant to simulate some detail of nature can be organized along principles similar to those nature itself seems to follow.
Hence, the thought process for building a simulated world in your program becomes very similar to the thought process of understanding nature through abstraction.
Let me show you by example what this abstract description means.
When classifying fauna, for example, one important class is the mammals.
But how do we decide what counts as a mammal?
We perform this classification by identifying features shared between different animals; once we are confident we have found a defining feature that applies only to this one class, we can save time by checking just that feature for identification.
Today
Step1: The variable "np" is an object with many attributes such as
Step3: Anytime you use "dot-access" (e.g., np.sin) the thing after the dot is an attribute. It could be a number, a function, an array, etc.
An attribute could also be a class instance.
Let's learn about classes with a simple example
Step5: A Class for Parabolas
$$y = ax^2 + bx + c$$
Note that a straight line is a special case of this where $a = 0$
Step6: Test a single value
Step7: Q. Before we do this, let's review
Step8: Q. What are the 0, 5, 6 values?
Which is equivalent to
Step9: A Class for Parabolas Using Inheritance
We can specify that class Parabola inherits all the code (attributes
Step10: Q. How would we access b and c? (That is, b and c are attributes of Parabola, and therefore test. How can we see what those attributes are?)
Step11: To compare with later in the notebook, let's have a look at all attributes. It shows also the inherited attributes.
Step12: To here -- Lecture 1.
Q. Which is the parent class and which is the child class?
Q. Can you think of any disadvantages of using inheritance?
Review -- Jargon summary
Step13: A couple of attributes of classes
Step14: Careful Distinction
Step15: This also can be phrased as
Step16: Now, the relationship has to be phrased as "The Parabola object HAS a Line object". | Python Code:
import numpy as np
Explanation: Classes & Object Oriented Programming
Object-oriented programming, or programming using classes, is an abstraction that applies the same rules we use for categorizing nature to programming techniques.
The advantage is that a program meant to simulate some detail of nature can be organized along principles similar to those nature itself seems to follow.
Hence, the thought process for building a simulated world in your program becomes very similar to the thought process of understanding nature through abstraction.
Let me show you by example what this abstract description means.
When classifying fauna, for example, one important class is the mammals.
But how do we decide what counts as a mammal?
We perform this classification by identifying features shared between different animals; once we are confident we have found a defining feature that applies only to this one class, we can save time by checking just that feature for identification.
Today: Recap basic structure, nomenclature, __call__()
Next few lectures/tutorials: applications.
A class is really just a container for related functions and values.
Classes are put together in families. The point of this is to make it easier to modify and extend programs. A family of classes is known as a class hierarchy
A class hierarchy has parent classes and child classes. Child classes can inherit data and methods from parent classes.
You may also hear parent classes referred to as super-classes or base-classes and child classes referred to as sub-classes or derived-classes
When people talk about object-oriented programming they are probably referring to programs that are class-based. Believe it or not, we have been doing object-oriented programming for a long time now because everything in Python is an object. This is why Python programmers typically reserve "object-oriented" to mean "programming with classes."
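Before we get to the Line and Parabola classes below, here is the mammal analogy from above written as a tiny, purely illustrative class hierarchy (the class and method names are made up for this sketch):
```python
class Animal:                      # parent / super / base class
    def breathe(self):
        return 'inhale, exhale'

class Mammal(Animal):              # child / sub / derived class -- inherits breathe()
    def nurse_young(self):
        return 'feeding milk'

dog = Mammal()
print(dog.breathe())               # inherited from Animal
print(dog.nurse_young())           # defined on Mammal itself
```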
End of explanation
np.sin
x = np.arange(10)
x.size # size is an attribute: just a number
x.mean() # mean is an attribute: a method
Explanation: The variable "np" is an object with many attributes such as:
End of explanation
class Line:
# __init__ is a special method used to create the class.
# It is referred to as "the constructor."
def __init__(self, m, b):
# self must be the first argument in every class method
# m and b are attributes of the Line class
self.m = m
self.b = b
# The special method __call__ will allow us to call Line
# with the syntax of a function.
# It is referred to as the call operator.
def __call__(self, x):
return self.m * x + self.b
# A class method for tabulating results
def table(self, L, R, n):
Return a table with n points at L <= x <= R.
s = '' # This is a string that will contain table lines
import numpy as np
for x in np.linspace(L, R, n):
# The self call yields self.m*x + self.b
y = self(x)
s += '%12g %12g\n' % (x, y)
return s
# Note that there is more than one return statement!
test = Line(1, 5) # This sets the slope and intercept
test # and creates an instance of Line
test.m, test.b
test(2) # Now we calculate a y for a given x.
print(test.table(0, 4, 5))
print(Line(1, 5).table(0, 4, 5) )
# table is an attribute (a method) of class Line,
# where 1 is the slope, 5 is the y-intercept,
# 0 to 4 is the range, and there are 5 points.
Explanation: Anytime you use "dot-access" (e.g., np.sin) the thing after the dot is an attribute. It could be a number, a function, an array, etc.
An attribute could also be a class instance.
Let's learn about classes with a simple example: a class for straight lines.
$y = mx + b$
A class for parabolas will build on this.
End of explanation
import numpy as np
class Parabola:
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
def __call__(self, x):
return self.a * x**2 + self.b * x + self.c
def table(self, L, R, n):
Return a table with n points at L <= x <= R.
s = ''
for x in np.linspace(L, R, n):
y = self(x)
s += '%12g %12g\n' % (x, y)
return s
Explanation: A Class for Parabolas
$$y = ax^2 + bx + c$$
Note that a straight line is a special case of this where $a = 0$
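Since a straight line is just the $a = 0$ special case, a quick consistency check (with illustrative values only) is to compare Parabola(0, m, b) against Line(m, b):
```python
flat = Parabola(0, 2, 1)   # a = 0, so this should behave exactly like Line(2, 1)
line = Line(2, 1)
print(flat(3), line(3))    # both print 7
```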
End of explanation
test = Parabola(1, 1, 1) # We've created test, an instance of Parabola.
test1 = test(x=3) # Here we evaluate test at x = 3
# Q. What should test1 be?
test1
Explanation: Test a single value:
End of explanation
# Make a table of values
print(test.table(0, 5, 6))
Explanation: Q. Before we do this, let's review: what are the 1, 1, 1, and x=3 values?
End of explanation
# Q. What does the next line do?
print(Parabola(1, 1, 1).table(0, 5, 6))
Explanation: Q. What are the 0, 5, 6 values?
Which is equivalent to
End of explanation
class Parabola(Line):
def __init__(self, a, b, c):
super().__init__(b, c) # Line stores b and c
self.a = a # a is a
def __call__(self, x):
return Line.__call__(self, x) + self.a*x**2
# When Parabola is called it returns a call to Line (+ a*x**2).
class Parabola(Line):
def __init__(self, a, b, c):
super().__init__(b, c) # Line stores b and c
self.a = a # a is a
def __call__(self, x):
return super().__call__(x) + self.a*x**2
# When Parabola is called it returns a call to Line (+ a*x**2).
# Test a single value:
test = Parabola(1, 2, 3) # Q. What does this do?
# And below self is test.
test1 = test(x=2) # Q. What does this do?
# (Note that the x= is not needed.)
print(test1)
print(test.a)
print(test.m)
print(test.b)
Explanation: A Class for Parabolas Using Inheritance
We can specify that class Parabola inherits all the code (attributes: data, functions...) from class Line by making the class statement
class Parabola(Line):
Thus, class Parabola is derived from class Line. Line is a superclass or parent class, Parabola is a subclass or child class.
(Note that if we implement the constructor and call operators in Parabola, they will override the inherited versions from Line.)
Aside: Any method in the superclass Line can be called with
super().methodname(arg1, arg2, ...)
Note that calling a method on an instance does not require passing self explicitly; self only appears when you define the method.
The exception is calling a method through the class itself (e.g., Line.__call__) rather than through an instance: then no object is attached to the call, so you must pass the instance in as self yourself. See the difference shown below.
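To make that concrete, here is a tiny check using the Line class defined earlier; both calls compute the same value, but only the second hands the instance over explicitly:
```python
ln = Line(2, 1)
print(ln(3))                 # instance call: Python passes ln as self automatically -> 7
print(Line.__call__(ln, 3))  # call through the class: we supply the instance ourselves -> 7
```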
End of explanation
# test calculates a*x**2 + b*x + c.
# Parabola is a subclass of Line, and inside Line
# our b is stored as m and our c is stored as b,
# so the attributes come out in the order a, m, b.
# (Trace it!)
test.a, test.m, test.b
Explanation: Q. How would we access b and c? (That is, b and c are attributes of Parabola, and therefore test. How can we see what those attributes are?)
End of explanation
dir(test)
# Hierarchy!
# Make a table of values:
# table is a method inherited from Line,
# Line is a parent class or superclass of Parabola,
# and test is an instance of Parabola.
print(test.table(0, 5, 6))
Explanation: To compare with later in the notebook, let's have a look at all the attributes. This also shows the inherited attributes.
End of explanation
l = Line(0, 1)
isinstance(l, Line)
# Q. What should the output be?
# MAKE SURE TO RUN ALL ABOVE CELLS FIRST!!
p = Parabola(1, 2, 3)
isinstance(p, Line)
# Q. And what about this?
# Q. And this?
isinstance(l, Parabola)
# Is Parabola a subclass of Line?
issubclass(Parabola, Line)
# Q. Should this be true or false?
issubclass(Line, Parabola)
Explanation: To here -- Lecture 1.
Q. Which is the parent class and which is the child class?
Q. Can you think of any disadvantages of using inheritance?
Review -- Jargon summary:
Object-oriented programming
Python programming is always object oriented, so
this phrase really refers to class-based programming.
Class
It's the definition for an object. A collection of related data and/or methods. But not alive unless instantiated!
Object
The instance of a class. Instances of the same class are independent of each other.
Attributes
Data or methods that belong to objects.
Constructor
A special method that initializes a class instance.
Inheritance
Passing functionality from a parent class to a child class.
This is similar to importing everything from a module,
i.e.,
from ___ import *
so be careful! You might get more than you expect.
Being Careful: Checking occurance for instances, checking for subclasses, and checking the class type
End of explanation
# This should tell us that instance p is a Parabola type of class:
p.__class__
p.__class__ == Parabola
p.__class__.__name__
# You can see it's a string from the tick
# marks, but also:
type(p.__class__.__name__)
Explanation: A couple of attributes of classes:
End of explanation
# What we have done so far is make class Parabola inherit class Line
# (by making Line an argument of Parabola in the class statement):
class Parabola(Line):
def __init__(self, a, b, c):
super().__init__(b, c) # Line stores b and c
self.a = a # a is a
def __call__(self, x):
# Recall equation: a*x**2 + b*x + c
return super().__call__(x) + self.a*x**2
test = Parabola(1, 2, 3)
test1 = test(x=2)
test1
# Q. Verifying: Is test an instance of Parabola?
isinstance(test, Parabola)
Explanation: Careful Distinction: Attribute vs. Inheritance
a.k.a. the infamous "has a ..." vs "is a ..." question.
End of explanation
# Q. Verifying: Is Parabola a subclass of Line?
issubclass(Parabola, Line)
class Parabola: # Before "class Parabola(Line):", which
# made Parabola inherit the attributes of Line.
def __init__(self, a, b, c): # Same as before
self.line = Line(b, c) # Now Line will be an attribute of Parabola
# Before "Line.__init__(self, b, c)" constructed an instance of Line
self.a = a # Same as before
self.c = c
self.b = b
def __call__(self, x): # Same as before
return self.line(x) + self.a*x**2
# Before "return Line.__call__(self, x) + self.a*x**2",
# which returned an instance of Line evaluated at x.
# To summarize:
# 1. We have not made Parabola a subclass of line
# 2. Line is an attribute of Parabola
test = Parabola(1, 2, 3)
test1 = test(x=2)
test1
# And the result should be the same:
# Is this still true?
isinstance(test, Parabola)
# Is this still true?
issubclass(Parabola, Line)
Explanation: This also can be phrased as: "A Parabola object IS a Line object".
A more specialized one, but nonetheless!
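One practical payoff of the "is a" relationship: code written against Line works unchanged on Parabola. A small illustrative sketch (it relies only on the call operator, which both classes define):
```python
def evaluate_at(curve, xs):
    # Works for a Line, a Parabola, or anything else callable like them.
    return [curve(x) for x in xs]

print(evaluate_at(Line(1, 5), [0, 1, 2]))
print(evaluate_at(Parabola(1, 2, 3), [0, 1, 2]))
```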
End of explanation
# So, will this work? Why or why not?
test.table(0, 5, 6)
# To see this, list the attributes of Parabola with dir(Parabola):
dir(Parabola)
# BUT, test is an instance of Parabola and has an attribute line:
dir(test)
# AND line has an attribute table:
dir(test.line)
# Hence:
print( test.line.table(0, 5, 6))
# And the attributes of test.line have attributes!
dir(test.line.m)
# The slope of the line is m:
test.line.m
# Focusing on two of the attributes,
# m is a real number:
test.line.m.real
# So, its imaginary component is zero:
test.line.m.imag
type(test.line.m)
a = 5
a.real
a.imag
Explanation: Now, the relationship has to be phrased as "The Parabola object HAS a Line object".
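If you still want a table of the parabola's own values in the composition version, you have to provide that behaviour yourself, since it is no longer inherited (forwarding to self.line would tabulate only the line part). A minimal sketch that mirrors Line.table but evaluates the Parabola:
```python
import numpy as np

def parabola_table(parabola, L, R, n):
    # Same tabulation idea as Line.table, but y = a*x**2 + b*x + c.
    s = ''
    for x in np.linspace(L, R, n):
        s += '%12g %12g\n' % (x, parabola(x))
    return s

print(parabola_table(test, 0, 5, 6))
```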
End of explanation |
11,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/logo.jpg" style="display
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <span style="text-align
Step5: <p style="text-align
Step6: <p style="text-align
Step8: <p style="text-align
Step9: <p style="text-align
Step10: <p style="text-align
Step12: <p style="text-align
Step14: <p style="text-align
Step26: <p style="text-align
Step27: <span style="text-align
Step28: <p style="text-align
Step29: <p style="text-align
Step31: <p style="text-align
Step32: <p style="text-align
Step34: <p style="text-align
Step37: <p style="text-align
Step40: <p style="text-align
Step43: <span style="text-align | Python Code:
import math
def factorize_prime(number):
while number % 2 == 0:
yield 2
number = number // 2
# `number` must be odd at this point (we've just factored 2 out).
# Skip even numbers. Square root is good upper limit, check
# https://math.stackexchange.com/a/1039525 for more info.
divisor = 3
max_divisor = math.ceil(number ** 0.5)
while number != 1 and divisor <= max_divisor:
if number % divisor == 0:
yield divisor
number = number // divisor
else:
divisor += 2
# If `number` is a prime, just print `number`.
# 1 is not a prime, 2 already taken care of.
if number > 2:
yield number
print(list(factorize_prime(5)))
print(list(factorize_prime(100)))
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="ืืืื ืฉื ืืืื ืืืืื ืืคืืืชืื. ื ืืฉ ืืฆืืืจ ืืฆืืขื ืฆืืื ืืืืื, ืื ืข ืืื ืืืืชืืืช ืฉื ืฉื ืืงืืจืก: ืืืืืื ืคืืืชืื. ืืกืืืื ืืืืคืืข ืืขื ืืฉื ืืงืืจืก ืืื ืืืื ืืื ืื ืืืืืื ืชืื ืืช ืืขืืจืืช.">
<span style="text-align: right; direction: rtl; float: right;">ืชืืขืื</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืงืืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืจื ืฉืืืคืฉืชื ืืื ืจื ืืืืืืื ืืืคืืื ืืฉืชืืฉืชื ืืืื ืื ืฉื ืืื ืืื, ืืื ืชื ืืืื ืืช ืืืฉืืืืช ืืจืื ืฉื ืชืืขืื ืืื.<br>
ืืื ืื ืชืืชืื ืงืื ืืืืง ืืคืจืืืงื ืฆืืืชื ืื ืฉืชืฉืืจืจื ืืืืื ืืงืื ืคืชืื, ืืืืืช ืืชืืขืื ืชืงืืข ืื ืืืื ืืื ืืงืืืืช ืืจืืฆืื ืื ืื ืชืฆืืจืื ืืขืืื ืืื ืืืื ืืืขื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืคืชืืื ืืืืฉืชืืฉืื, ืืชืืขืื ืขืืืจ ืื ื ืืืืื ืืื ืชืืืืชื ืฉื ืืืืืืื, ืืืคืืกืื, ืืืืงืืช, ืคืขืืืืช ืืคืื ืงืฆืืืช.<br>
ืืื ืขืืืจ ืื ื ืืืชืืฆื ืืงืื ืืืืืื ืืืืืจืืช ืื ืื ืฉืื ืื ื ืจืืืื ืืชืืื ืื ื ืืื ืชืืื ืฆืืจืช ืืฉืืืืฉ ืืืืืืืช ืื,<br>
ืืืืื, ืืฉืคื ืืื ืคืืืชืื, ืฉืืงืืฉืช ืืช ืขืจื ืงึฐืจึดืืืึผืช ืืงืื, ืืฉ ืขืจื ืืืื ืืืื ืืชืืขืื ืจืืื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืืจืช ืืงืจืืื ื ืืื ืขื ืชืืขืื ืจืืื ืืคืืืชืื, ืขื ืืืกืืืืช ืชืืขืื ืืขื ืืืฉืืช ืฉืื ืืช ืืชืืขืื.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืขืจืืช ืืชืืขืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืคื ื ืฉื ืืื ืืขืืืง ืืชืืขืื, ื ืืืจ ืืขื ืขื ืืืืื ืฉืืื ืืขืจืืช ืืงืื ืืืื ืชืืขืื ืืงืื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขืจืืช ืื ืฉืืจืืช ืืื ืืืืืขืืืช ืืืคืชืืื.<br>
ืื ืืืืืืช ืืืืข ืืื ืคืืจืืืืื ืขื ืืืืืืช ืฉืงืืืืชื ืื ืืืข ืืงืื. ืืืืข ืื ื ืืขื ืืขืืืจ ืืขืืืชืืื ืืชืืืง ืืช ืืงืื.<br>
ืื ืืืคืืขื ืงืจืื ืืื ืืืคืฉืจ ืฉืืคืฉืจ ืืงืื ืฉืืืื ืื ืืชืืืืกืืช, ืืขืืชืื ืืคืืื ืืืฉ ืืฉืืจื ืฉื ืืงืื ืขืฆืื.<br>
<mark>ืืขืจืืช ืืืืจืืช ืืืกืืืจ <em>ืืื</em> ืืืจ ืืกืืื ืืชืื ืืื ืฉืืื ืืชืื, ืืืขืืื ืื <em>ืื</em> ืืชืื ืืงืื.</mark>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขืืืชื, ืชืืขืื ืืื ืืื ืืืืืขื ืืื ืฉืื ืฉืืฉืชืืฉืื ืืงืื ืฉืืื.<br>
ืืื ืืืฆืื ืืืืืืืื, ืืืืืงืืช, ืืคืขืืืืช ืืืคืื ืงืฆืืืช, ืืืกืคืจ ืืงืฆืจื ืืืขื ืืื ืืืช ืื ืขืืฉื ืืคืื ืงืฆืื ืืืื ืืืฉืชืืฉ ืื.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืื, ืืืืืื, ืคืืชืืชื ืืืืื ืืฉืืืจืจืชื ืืืชื ืืืื ืืจื ื,<br>
ืืืฉืชืืฉืื ืื ืืฆืคื ืืชืืขืื ืฉืืจืืจ ืืื ื ืืื ืืฉืชืืฉืื ืืงืื, ืืื ืงืืจื ืืื ืืื ืืืงืจื ืืงืฆื.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืขืจืืช ืืงืื</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Block Comments</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืคื ืฉืืชื ืืืื ืืืืจืื ืืืืืืจืืช ืฉืงืจืืชื ืขื ืื, ืืขืจื ืชืชืืื ืืชื <code>#</code> ืืจืืื ืฉืืืื ืืืจืื.<br>
ืืืจื ืืกืืืืืช ืืืจืืื ืืืื ืืืื ืฉืืกืืืจ ืืช ืืืืืืืช ืฉืืชืงืืื ืืฆืืจื ืืชืืืช ืืงืื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ื ืจืื ืืืืืื ืงืื ืงืฆืจ ืืคืืจืืง ืืกืคืจืื ืจืืฉืื ืืื ืขื ืืขืจืืช ืฉืืืืืขื ืื:
</p>
End of explanation
print("Hello World") # This is a comment
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืงืื ืฉืืืขืื ืืืคืืขืื ืฉื ื ืืงืจืื ืฉื "<dfn>Block Comment</dfn>".<br>
ืืืืืจ ืืฉืืจื ืืืช ืื ืืืชืจ ืฉื ืืขืจื ืฉืืื ืืคื ื ืคืืกืช ืงืื, ืืืืจืชื ืืืืจ ืืืจืื ืืงืื.<br>
ืึพBlock ืืืืชื ืืืืชื ืจืืช ืืืื ืฉื ืืงืื ืฉืืืื ืืื ืืชืืืืก, ืืื ืฉืืจื ืื ืชืชืืื ืืชื <code>#</code> ืฉืืืืจืื ืืืื ืจืืื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ื ืฉืื ืื ืื ืงืืืืช ืกืื ืื ืืฉืืืืช, ืฉื ืืืจืืช ืืืื ืืืืืื ืืืืจืื ื:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืืขืจื ืชืืื ืชืกืืืจ <em>ืืื</em> ืขืฉืื ื ืืฉืื ืืกืืื โ ืืืฉืื ืืืคื ืื <em>ืื</em> ืขืฉืื ื.<br>
ืืืชืื ืช ืืืคืืจืกื ื'ืฃ ืืืืื <a href="https://blog.codinghorror.com/code-tells-you-how-comments-tell-you-why/">ืืืืจ</a>: "ืืงืื ืืืื ืื ืื, ืืืขืจืืช ืืืืื ืื ืืื".</li>
<li>ืืขืจืืช ืืืจืืืืช ืืืฉืคืืื ืฉืืืื ืฉืืชืืืืื ืืืืช ืืืืื ืืืกืชืืืืื ืื ืงืืื.</li>
<li>ืืขืืื ืื ื ืฉื ื ืฉืืืช ืฉื ืืืืื (ืืฉืชื ืื, ืคืื ืงืฆืืืช ืืื') โ ืื ืื ืื ืืืคืืขืื ืืชืืืืช ืืฉืคื.</li>
<li>ืืขืจืืช ืืืืืืช ืืืืืชื ืืื ืืืืช.</li>
</ul>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Inline Comments</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขืจื ืืืืื ืืืืืช ืืืืงืืช ืื ืืกืืฃ ืฉืืจืช ืืงืื:
</p>
End of explanation
snake_y = snake_y % 10 # Take the remainder from 10
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืงืจื ืฉืื ืืืขืจื ื ืืฆืืช ืืืืชื ืฉืืจื ืืื ืขื ืืงืื (ืืื ืืชื ืืืืจืื), ื ืืื ืืฉืื ืืคืืืช ืฉื ื ืจืืืืื ืืคื ื ืืกืืื <code>#</code>.<br>
ืืืื ืฆืืจื ืืงืืืืช ืคืืืช ืืืชืืืช ืืขืจืืช, ืืืืื ืฉืืื ืืืจืืื ืืช ืฉืืจืช ืืงืื, ืฉืืืจืช ืืช ืจืฆืฃ ืืงืจืืื ืืืจืื ืืืืชืจืช.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืชืื ืชืื ืืชืืืืื ื ืืืื ืืืกืืืจ ืื ืืงืื ืขืืฉื, ืืืฉื ืื ืื ืืฉืชืืฉืื ืืขืืชืื ืงืจืืืืช ืึพInline Comments.<br>
ืืืื ืขื ืืืืกืืืจ ืื ืืงืื ืฉืืื ืขืืฉื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืื ืืืขืจื ืื ืืืื:
</p>
End of explanation
snake_y = snake_y % 10 # Wrap from the bottom if the snake hits the top
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืขืืืชื, ืืขืจื ืืืชืงืืืช ืขื ืืืขืช:
</p>
End of explanation
quote = "So many books, so little time."
help(quote.upper)
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">ืืืืืืื ืขื ืืขืจืืช</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขืืื ืืชืืื ื ืืืงืฆืืขื ื ืืืฉ ืืืืื ืจื ืฉื ืื ืขื ืืชื ื ืืื ืืืืกืืฃ ืืขืจืืช ืืงืื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขื ืคืืคืืืจืืช ืืืช ืืืืืช ืืจืืืื ืืขืจืืช ืืงืื.<br>
ืืื ืืืฆืืืื ืืืขื ืื ืืืกืืคืื ืืงืื ืฉืืื ืืขืจืืช ืืฆืจืืื ืืืืื ืื:<br>
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืืกืืจ ืืืืืื ืขื ืคืขืืืช ืืงืื.</li>
<li>ืืกืืจ ืืืืข ืงืื ืืกืืื ืขืืฉื ืืฉืื.</li>
<li>ืืกืืจ ืืืืข ืงืื ื ืจืื ืคืืื ืืฆืืจื ืืกืืืืช (ืื ืขืืงื ืืืจื ืืืกืืืืช, ืื ืืขืื) ืืืื ืืฉ ืืืฉืืืจ ืืืชื ืื.</li>
<li>ืืกืืจ ืขื ืืืืืืช ืฉืืชืงืืื ืื ืืืข ืืฆืืจืช ืืงืื ืฉื ืืชื ืืืืจืืืืงืืืจื ืฉืื.</li>
<li>ืฉืืืจื ืฉื ืงืื ืืฉืืืืฉ ืขืชืืื (ื ื ืื, ืืืงืจืื ืฉื ืงืื ืฉืขืืืจ ืื ืคืืช ืฉืืืืืช).</li>
<li>ืฆืืจืืฃ ื ืชืื ืื ื ืืกืคืื ืขื ืืืืืช ืืงืื โ ืืืืื ืืื ืืงืื, ืชื ืื ืืจืืฉืืื ืฉืื ืืืืืื.</li>
<li>
ืชืืื, ืฉืืืจืชื ืืืงื ืืืืคืืฉ ืขืชืืื ืฉื ืืขืืืช ื ืคืืฆืืช ืืงืื. ืืืืืื:
<ul>
<li><code dir="ltr"># FIXME</code> ืืฆืืื ืงืืข ืงืื ืฉืฆืจืื ืืชืงื.</li>
<li><code dir="ltr"># TODO</code> ืืืืจืื ืืื ืฉืืกืืืจ ืืฉืื ืฉืขืืืื ืฆืจืื ืืืฆืข ืืขืื ืื ื ืคืชืจ.</li>
<li><code dir="ltr"># HACK</code> ืืฆืืื ืืขืงืฃ ืฉื ืืขื ืืคืชืืจ ืืขืื, ืคืขืืื ืจืืืช ืืืจื ืืขืืืชืืช.</li>
</ul>
</li>
<li>ืฉืืจืืจ ืงืืืืจ. ืืงืื ืืืงืืจ ืฉื ืคืจืืืงื ืืงืื ืืคืชืื <a href="https://www.vidarholen.net/contents/wordcount/">Linux</a>, ืืืืืื, ืืืคืืขื ืืืืื "crap" ืืขื 150 ืคืขืืื, ืืกืคืจ ืฉื ืืฆื ืืืืืช ืขืืืื ืืืืจื ืืฉื ืื.</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขื ืคืืคืืืจืืช ืืืจืช ืืืืืช ืืฆืืฆืื ืืืขืจืืช ืืงืื ืืืื ืืืื ืืืืจืื.<br>
ืืื ืืืฆืืืื ืืืขื ืื ืืฉืชืืืื ืืืืขืื ืืื ืืืคืฉืจ ืืืืกืคืช ืืขืจืืช ืืงืื.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขื ืืชืืื ืฉื ืื ืื ืื ืขื ืืกืืืืช ืฆืืฆืื ืืืขืจืืช ืืืืื ืืช ืืืกืืช:<br>
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืงืื ืฆืจืื ืืืกืืืจ ืืช ืขืฆืื. ืื ืืืกืคืช ืืขืจืืช ืืงืื, ืกืืื ืฉืืงืื ืื ืืืื ืืื, ืืื ืืฆื ืฉืืคืืข ืืืคืชืื ืืงืื ืืขืชืื.</li>
<li>ืืขืจืืช ืฉืืกืืืจืืช ืงืื ืืืฆืจืืช ืฉืืคืื โ ืืฉืืคืื ืืงืื <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself">ืื ืจืข</a>.</li>
<li>
ืืขืจืืช ืืฆืจืืืืช ืชืืืืงื ืืคื ื ืขืฆืื โ ืฉืื ืื ืฉื ืงืื ืืฆืจืื ืืจืื ืฉืื ืื ืฉื ืืืขืจื ืฉืืชืืืืกืช ืืืื.<br>
ืจืื ืืืคืชืืื ืฉืืืืื ืืชืืืง ืืช ืืืขืจืืช, ืืืขืืชืื ื ืฉืืจืืช ืืขืจืืช ืฉืื ืชืืืืืช ืืช ืืงืื ืขืฆืื.<br>
ืืขืจื ืฉืืืืจืช ืืืจ ืื ื ืืื ืขื ืืงืื ืืจืืขื ืืืจืื ืืืืกืจ ืืขืจื.
</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืช, ืืจืืื, ื ืืฆืืช ืืืคืฉืื ืืืืฆืข, ืืื ืืชืคืงืืื ืฉื ืืืืจืช ืื ืืืืจืืข ืืืืืืื ืืื.<br>
ืื ื ืืืืจ ืืื ืืืืื ืืกืืกืืื ืฉืคืืืช ืื ืืืชืจ ืืงืืืืื ืขื ืืืื:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืืฉืชืืื ืืืืื ืข ืืืขืจืืช ืฉืืืืจืืจ ืืื ืืื ืฆืืจื.</li>
<li>ืื ืชืืกืืคื ืืขืจืืช ืขื ืงืื ืื ืืื ืืื โ ืืืงืื ืืืช, ืฉืคืจื ืืืชื.</li>
<li>ืืืื ืชืืื ืฉืืืขืจืืช ืฉืืื ืชืืืืืช ืืงืื ืฉืืชืืชื.</li>
</ul>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืืจืืืืช ืชืืขืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ื ืืืืจ ืืคืื ืงืฆืื <var>help</var>, ืฉืืืจืชื ืืืฆืื ืื ื ืชืืขืื ืื ืืืข ืืขืจื ืืกืืื ืืชืืื ืืช:
</p>
End of explanation
quote = "So many books, so little time."
help(str)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืื ืชืขืืื ืืืื, ืืฉืืืชื ื, ืื ืขืืืจ ืกืื ืืืฉืชื ื <var>str</var> ืขืฆืื, ืื ืื ืืฆืืจื ืืืคื ืคืืืช ืืื ื ืืช (ืคื ื ืขืจื ืืื ืืงืจืื ืืช ืื):
</p>
End of explanation
def add(a, b):
return a + b
help(add)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืื ื ื ืกื ืืืืืืจ ืคืื ืงืฆืื ืืฉืื ื, ืืขืืจืชื ืืืจืืืื ื ืืคืื ืงืฆืื <var>help</var> ืชืืืืจ ืื ื ืืืืข ืื ืืืขืื ืืขืืื:
</p>
End of explanation
def add(a, b):
Return the result of a + b.
return a + b
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืื ืื ืขืืื ื ืืขืฉืืช ืืื ืืืืกืืฃ ืชืืขืื?<br>
ืืชืืจืจ ืฉืื ืื ืืื ืืกืืื. ืืกื ืืืื ืฆืจืื ืืืืกืืฃ ืืฉืื ืฉื ืงืจื "ืืืจืืืช ืชืืขืื".
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืืจืืืืช ืชืืขืื ืฉื ืฉืืจื ืืืช</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ื ืืกืืฃ ืืคืื ืงืฆืื ืฉืื ื ืืืจืืืช ืชืืขืื ืฉื ืฉืืจื ืืืช (<dfn>One-line Docstring</dfn>) ืืฆืืจื ืืืื:
</p>
End of explanation
help(add)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืืื ืืืืจืื ื ืืืกืคื ื ืืฉืืจื ืืจืืฉืื ื ืฉื ืืืฃ ืืคืื ืงืฆืื <dfn>ืืืจืืืช ืชืืขืื</dfn> (<dfn>Docstring</dfn>), ืฉืืชืืืื ืืืกืชืืืืช ืึพ3 ืืืจืืืืช ืืคืืืืช.<br>
ืืคืฉืจ ืืืฉืชืืฉ ืืกืื ืืืจืืืืช ืืืจ, ืื 3 ืืืจืืืืช <a href="https://www.python.org/dev/peps/pep-0257/#specification">ืื ืืืืกืืื</a> ืืื ืื ื ื ืืืืง ืื.<br>
ืืื ืืืืจืืืืช ืชืืืจื ื ืืงืฆืจื ืื ืืคืื ืงืฆืื ืขืืฉื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขืช ืืคืื ืงืฆืื <var>help</var> ืชืฉืชืฃ ืืืชื ื ืคืขืืื, ืื ืืื ืืงืื ืืช ืืชืืขืื ืขื ืืคืื ืงืฆืื ืฉืืชืื ื:
</p>
End of explanation
def get_parts(path):
current_part = ""
    for char in path:
if char in r"\/":
yield current_part
current_part = ""
else:
current_part = current_part + char
if current_part != "":
yield current_part
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ื ืงืืืืช ืืฉืืืืช ืืืงืฉืจ ืื:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืชืืขืื ืฉื ืฉืืจื ืืืช ืืืืขื ืขืืืจ ืืงืจืื ืืจืืจืื ืืืืืื, ืืื ืืคืื ืงืฆืื <var>add</var> ืฉืืชืื ื.</li>
<li>ืืชืืขืื ืืืืชื ืืฉืืจื ืืืช, ืฆืืื ืืืืจืืืืช, ืืื ืฉืืจืืช ืจืืงืืช ืืคื ืื ืื ืืืจืื.</li>
<li>ืืชืืขืื ืื ืืกื ืืฆืืจืช ืคืงืืื ืืื ืืกืืคืืจ ("ืืืืจ ืืช ืืชืืฆืื" ืืื "ืืคืื ืงืฆืื ืืืืืจื...").<br>
ืืื ืืฆืืข ืืื ืืื ืืฉืืืจ ืขื ืืฆืืจื "ืขืฉื X, ืืืืจ Y" <span div="ltr">(ืืื ืืืืช: Do X, Return Y)</span>.</li>
<li>ืชืืขืื ืฉื ืฉืืจื ืืืช ืื ืืืืื ืืช ืกืื ืืคืจืืืจืื (a ืื b, ืืืงืจื ืฉืื ื). ืืื ืืืื ืืืืื ืืช ืืกืื ืฉื ืขืจื ืืืืืจื.</li>
</ul>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืืจืืืืช ืชืืขืื ืืจืืืืช ืฉืืจืืช</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ื ืืงื ืืืืืื ืคืื ืงืฆืื ืฉืืงืืืช ื ืชืื ืืืืืืจื ืืช ืืืงืื:
</p>
End of explanation
def get_parts(path):
Split the path, return each part separately.
current_part = ""
    for char in path:
if char in r"\/":
yield current_part
current_part = ""
else:
current_part = current_part + char
if current_part != "":
yield current_part
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืชืืจ ืืชืืื, ื ืืกืืฃ ืืคืื ืงืฆืื ืืืจืืืช ืชืืขืื ืฉื ืฉืืจื ืืืช ืฉืืชืืจืช ืืงืฆืจื ืื ืชืืืืชื.
</p>
End of explanation
def get_parts(path):
Split the path, return each part separately.
Each "part" of the path can be defined as a drive, folder, or
file, separated by a forward slash (/, typically used in Linux/Mac)
or by a backslash (usually used in Windows).
path -- String that consists of a drive (if applicable), folders
and files, separated by a forward slash or by a backslash.
current_part = ""
    for char in path:
if char in r"\/":
yield current_part
current_part = ""
else:
current_part = current_part + char
if current_part != "":
yield current_part
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืื ืืืคืื ืืช ืืชืืขืื ืืืจืืื ืฉืืจืืช, ื ืืกืืฃ ืฉืืจื ืจืืงื ืืืจื ืืฉืืจื ืฉืืชืืจืช ืืงืฆืจื ืื ืขืืฉื ืืคืื ืงืฆืื.<br>
ืืืจื ืืฉืืจื ืืจืืงื ื ืืกืืฃ ืชืืืืจ ืืืื ืืืชืจ ืฉืืกืืืจ ืื ืืคืฉืจ ืืฆืคืืช ืฉืืคืื ืงืฆืื ืชืืืืจ, ืืื ืืคืจืืืจืื ืฉืืื ืืฆืคื ืืงืื.<br>
ืืืืจืืืืช ืืกืืืจืืช ืืืื ืืฉืืจื ืืฉืืื:
</p>
End of explanation
A demonstration of writing well documented Python code.
This snippet demonstrates how a well documented code should look.
Each method and class is documented, and there is also
documentation for the script itself.
import os
class Path:
Represent a filesystem path.
It is used to simplify the work with paths across different
operating systems. The initialization method takes a string and
populates the full path property along with "parts," which is a
version of the path after we split it using path separator
characters.
Basic Usage:
    >>> Path(r'C:\Yossi').get_drive_letter()
    'C'
    >>> str(Path(r'C:\Messed/Up/Path\To\file.png'))
    'C:\\Messed\\Up\\Path\\To\\file.png'
def __init__(self, path):
self.fullpath = path
self.parts = list(self.get_parts())
def get_parts(self):
Split the path, return each part separately.
Each "part" of the path can be defined as a drive, folder, or
file, separated by a forward slash (/, typically used in
Linux/Mac) or by a backslash (usually used in Windows).
        The path stored in self.fullpath is used as the input: a string
        that consists of a drive (if applicable), folders, and files,
        separated by a forward slash or by a backslash.
current_part = ""
for char in self.fullpath:
if char in r"\/":
yield current_part
current_part = ""
else:
current_part = current_part + char
if current_part != "":
yield current_part
def get_drive_letter(self):
Return the drive letter of the path, when applicable.
return self.parts[0].rstrip(":")
def get_dirname(self):
Return the full path without the last part.
path = "/".join(self.parts[:-1])
return Path(path)
def get_basename(self):
Return the last part of the path.
return self.parts[-1]
def get_extension(self):
Return the extension of the filename.
If there is no extension, return an empty string.
This does not include the leading period.
For example: 'txt'
name = self.get_basename()
i = name.rfind('.')
if 0 < i < len(name) - 1:
return name[i + 1:]
return ''
def is_exists(self):
Check if the path exists, return boolean value.
return os.path.exists(str(self))
def normalize_path(self):
Create a normalized string of the path for printing.
normalized = "\\".join(self.parts)
return normalized.rstrip("\\")
def info_message(self):
Return a long string with essential details about the file.
The string contains:
- Normalized path
- Drive letter
- Dirname
- Basename
- File extension (displayed even if not applicable)
- If the file exists
Should be used to easily print the details about the path.
return f
Some info about "{self}":
Drive letter: {self.get_drive_letter()}
Dirname: {self.get_dirname()}
Last part of path: {self.get_basename()}
File extension: {self.get_extension()}
Is exists?: {self.is_exists()}
.strip()
def __str__(self):
return self.normalize_path()
EXAMPLES = (
r"C:\Users\Yam\python.jpg",
r"C:/Users/Yam/python.jpg",
r"C:",
r"C:\\",
r"C:/",
r"C:\Users/",
r"D:/Users/",
r"C:/Users",
)
for example in EXAMPLES:
path = Path(example)
print(path.info_message())
print()
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืกืื ืืื ืฉื ืชืืขืื ื ืงืจื "<dfn>ืืืจืืืช ืชืืขืื ืืจืืืช ืฉืืจืืช</dfn>" (<dfn>Multi-line Docstring</dfn>).<br>
ื ืืชืื ืืืชื ืืื ืืขืืืจ ืืื ืฉืืฉืชืืฉ ืืคืื ืงืฆืื ืืืืื ืื ืืืจืชื, ืืืื ืคืจืืืจืื ืืื ืืฆืคื ืืงืื ืืืืื ืขืจืืื ืืืืืจื ืืื ื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืื ื ืฉืชืืฉ ืืชืืขืื ืืจืืื ืฉืืจืืช?
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืืจืืฉ ืืงืื โ ื ืกืื ืื ืืืจืช ืืงืื, ืื ืกืืืจ ืืงืฆืจื ืื ืืื ืืืื.</li>
<li>ืืจืืฉ ืืืืงื โ ื ืกืื ืืช ืืชื ืืืืช ืืืืืงื ืื ืชืขื ืืช ืืคืขืืื <code>__init__</code>.</li>
<li>ืืคืื ืงืฆืื ืื ืืคืขืืื โ ื ืกืื:
<ul>
<li>ืื ืืืจืชื.</li>
<li>ืืืื ืืจืืืื ืืื ืืื ืืฆืคื ืืงืื.</li>
<li>ืืื ืขืจืื ืืืืืจื ืฉืื.</li>
<li>ืืืื ืฉืืืืืช ืืื ืขืืืื ืืืืืืจ.</li>
<li>ืขื ืื ืืื ืืฉืคืืขื ืืืฅ ืืืฉืจ ืขื ืืืฉืชื ืื ืืคืื ืงืฆืื ืขืฆืื (ื ื ืื โ ืืชืืื ืืงืืืฅ).</li>
</ul>
</li>
</ul>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืืืื ืืชืืขืื ืืกืืกื</span>
End of explanation
help(quote.upper)
print(quote.upper.__doc__)
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">ืืืืืจื ืืงืืขืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืฉืจ ืื ืื ื ืืืกืืคืื ืืืฉืืช ืืกืืืืช ืืืจืืืช ืชืืขืื, ื ืืกืคืช ืื ืชืืื ืช ืงืกื ืืฉื <code>__doc__</code> ืฉืืืืื ืืช ืืชืืขืื ืฉืื.<br>
ืืชืืื ืฉื ืืชืืื ื ืืื ืืื ืื ืฉืืืืคืก ืืืฉืจ ืื ืื ื ืืคืขืืืื ืืช ืืคืื ืงืฆืื <var>help</var>.<br>
ื ืืื ืืืืืื ืืช ืชืืื ืืชืืื ื <code>__doc__</code> ืฉื <var>quote.upper</var> ืฉืกืงืจื ื ืืชืืืืช ืืืืืจืช.<br>
ืืคืฉืจ ืืจืืืช ืฉืืื ืืื ืืืืืืื ืืืืจืืืช ืฉืงืืืื ื ืืฉืืคืขืื ื ืขืืื <var>help</var>:
</p>
End of explanation
def add(a, b):
return a + b
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืื ืงืืจื ืืฉื ืืฆืืจ ืคืื ืงืฆืื ืืฉืื ื?<br>
ื ื ืกื ืืืฆืืจ ืืืืืื ืืช ืืืืืชื ื ืืืืชืืงื, ืืคืื ืงืฆืื <var>add</var>:
End of explanation
print(add.__doc__)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืื ืฉืื ืืืกืคื ื ืืคืื ืงืฆืื ืชืืขืื, ืืชืืื ื <code>__doc__</code> ืชืืืืจ ืึพ<code>None</code>:
</p>
End of explanation
def add(a, b):
Return the result of a + b.
return a + b
print(add.__doc__)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ื ืืกืืฃ ืชืืขืื ืื ืจืื ืืช ืืฉืื ืื:
</p>
End of explanation
class PostOffice:
def __init__(self, usernames):
self.message_id = 0
self.boxes = {user: [] for user in usernames}
def send_message(self, sender, recipient, message_body, urgent=False):
user_box = self.boxes[recipient]
self.message_id = self.message_id + 1
message_details = {
'id': self.message_id,
'body': message_body,
'sender': sender,
}
if urgent:
user_box.insert(0, message_details)
else:
user_box.append(message_details)
return self.message_id
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืืจ ืืืืข ืืืื ืืื ืื, ืืื ืืื ืฉืื ืืชืืจืืชืื ืืขืืื ืื ื ืืฆืข ืืฉืื ืืฉืืจื ืึพ<code dir="ltr">__doc__</code> ืืื ื ืืืฉ ืืืื ืืฉืืจืืช.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืชืคืชืืืช ืืชืืขืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืขื ืืืื ืจืืชื ืงืืืืช ืืืคืชืืื ืืคืืืชืื ืื ืืื, ืืืฉืื ืขื ืืจื ื ืขืืื ืื ืืื ืืืชืจ ืืงืจืื ืชืืขืื ืืืืชืื ืืืชื.<br>
ืืืืื ืฉืชืืขืื ืืื ืืืง ืืฉืื ืืงืื, ืืชืคืชืื ืืืจืืฆืช ืืฉื ืื ืืื ืืืืืืืช ืืชืงื ืื ืฉืืืจืชื ืืกืืืข ืืืคืชืื ืคืืืชืื ืืชืขื ืืื ืืืชืจ ืืช ืืงืื ืฉืืื.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืฉืคืช ืกืืืื ืืชืืขืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืชืืืื, ืงืืืืช ืคืืืชืื ืืืคืฉื ืืจื ืืืชืื ืืฉืื ืฉืืื ืืืชืจ ื"ืกืชื ืืื".<br>
ืืขืืชืื ืงืจืืืืช ื ืจืฆื ืืืืืืฉ ืืืจืื ืืชืืขืื, ืืชืช ืงืืฉืืจ ืืืงืืจ ืืืฆืื ื, ืืืืกืืฃ ืืืชืจืช ืื ืืฆืืื ืคืจืืืื ืืจืฉืืื.<br>
ืฉืคืช ืืกืืืื ืืืืืจืช HTML ืฉืืฉืืืฉืช ืชืืืจ ืืืฆืืจืช ืืคื ืืื ืืจื ื ืืืืชื ืืืจ ืงืืืืช, ืืื ืืคืืืชืื ืืืคืฉื ืฉืคื ื ืงืืื ืืืชืจ ืฉื ืื ืืขืื ืืกืจืืง.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืฆืืจื ืื ืคืืชืื ืืฉื ืช 2002 ืฉืคืช ืืกืืืื <dfn>reStructuredText</dfn> (ืื ืืงืืฆืืจ โ <dfn>reST</dfn>), ืฉืืืคืฉืจืช ืืืคืื ืืื ืฉืื ืื ื ืืืชืืื ืืืงืกื ืืกืืื ื.<br>
ืืืจืชื ืืขืืงืจืืช ืฉื reST ืืืืชื ืืืคืฉืจ ืืื ืกืช ืืงืกื ืืกืืื ื ืืชืืขืื ืืื ื ืืคืืืชืื, ืืคืืืชืื ืืื ืืืืฆื ืืืชื ืจืฉืืืช ืืฆืืจื ืื.<br>
ืืื ืืืคืฉืจืช ืืืชืื ืื ืจืง ืชืืขืื ืืงืื, ืืื ืื ืชืืขืื ืืืืืื ืืืื ืขื ืืืจืืชืื ืฉื ืืคืจืืืงื ืืขื ืืจื ืืฉืืืืฉ ืื (ืืื ืกืคืจ ืชืืขืื ืงืื).
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืงืฆืจื ืืืจืืขื ืืืืืื ืคื ืืืจืื ืขื ืฉืืืืฉ ืึพreStructuredText, ืืื ืื ืื ื ืืืืืฆืื ืืืื ืืงืจืื ืืช ืืืืจืื ืืืงืืฆืจ ืฉื ืืฆื <a href="https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html">ืืื</a>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืื ืฉื ืืชื ืึพreStructuredText ืืืจืื ืื ืขืืืจ ืื ืฉืืชื ืืืชื:
</p>
<pre style="text-align: right; direction: rtl; float: right; clear: both;">
ืืคืฉืจ ืืจืืืช ืืืงืกื *ืืื* ืืขืืื ืงืื ื ืืืืืืืืช ืฉื **reStructuredText**.
ืืืง ืืืืคืฉืจืืืืช ืืคืืืช ืืชืืืืืืช ืฉืื ืืืืืืช:
* ืืืืฉื, ืืื ืืืื ืืงื ืชืืชืื.
* ืจืฉืืืืช.
* ืกืืืื ืฉื ืงืื, ืืื `print("Hello World")`.
</pre>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืจืื ืื ืืชืืฆืื ืืกืืคืืช:
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืคืฉืจ ืืจืืืช ืืืงืกื <em>ืืื</em> ืืขืืื ืงืื ื ืืืืืืืืช ืฉื <strong>reStructuredText</strong>.<br>
ืืืง ืืืืคืฉืจืืืืช ืืคืืืช ืืชืืืืืืช ืฉืื ืืืืืืช:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืืืืฉื, ืืื ืืืื ืืงื ืชืืชืื.</li>
<li>ืจืฉืืืืช.</li>
<li>ืกืืืื ืฉื ืงืื, ืืื <code dir="ltr">print("Hello World")</code>.</li>
</ul>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืื ืืข ืืืฆืืจืช ืงืืืฆื ืชืืขืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืฉื ืช 2008 ืคืืชื ืืงืืืืช ืืคืืืชืื ืืื ืืฉื <dfn>Sphinx</dfn>.<br>
ืืืจืชื ืืกืจืืง ืืช ืืชืืขืื ืฉื ืคืจืืืงื ืืงืื ืฉืืื, ืืืืฆืืจ ืืื ื ืืกืื ืชืืขืื ืฉื ืขืื ืืงืจืื ืึพPDF ืื ืึพHTML.<br>
Sphinx, ืืืืื, ืชืืื ืืืกืืืื ืฉื ืืชืื ืึพreStructuredText, ืืืื ืืคื ืืืืจื ืืคืืคืืืจื ืืืื.<br>
ืืชืจ ืืชืืขืื ืื ืืืื ืฉื ืคืืืชืื ื ืืฆืจ ืืืืฆืขืืชื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืฉื ืืงืืจืก ื ืืชืื ืคืจืืืงืืื, ืืืื ื ืืื ืืืฉืชืืฉ ืึพSphinx ืืืฆืืจืช ืืกืืืื ืฉืืขืืจื ืืืฉืชืืฉืื ืืคืจืืืงื ืืืชืืฆื ืื.<br>
ืืฉืชืืฉืื ืืืืืื ืึพSphinx ืืืืืื, <a href="https://www.sphinx-doc.org/en/master/examples.html">ืืื ืืฉืืจ</a>, ืืช:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li><a href="https://docs.python.org/3/">ืืชืจ ืืชืืขืื ืืจืฉืื ืฉื ืคืืืชืื</a>.</li>
<li>ืืืืืื ืืคืืคืืืจื ืื ืืืื ืืื ืืชืื ืืืืข <a href="https://pandas.pydata.org/pandas-docs/stable/reference/">pandas</a>.</li>
<li>ืืืืืื ืืคืืคืืืจื ืืขืืืื ืืคื ืืื ืืจื ื <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/">BeautifulSoup</a>.</li>
</ul>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืืชืจ ืืืืกืื ืงืืืฆื ืชืืขืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืฉื ืช 2010 ืคืืชื ืืชืจ ืืฉื Read the Docs, ืฉืืืจืชื ืืจืื ืชืืขืื ืืคืจืืืงืืื ืฉื ืืชืื ืืคืืืชืื.<br>
ืืืชืจ ืืืคืฉืจ ืืืขืืืช ืืจืฉืช ืืงืืืช ืชืืขืืืื ืฉื ืืฆืจื ืืขืืจืช Sphinx ืืืื ืืืฉ ืืืชื ืืงืื ืืจืื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืฉืชืืฉืื ืืืืืื ืึพRead the Docs ืืืืืื, ืืื ืืฉืืจ, ืืช:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืืืืืื ืืคืืคืืืจื ืืขืืืื ืืืฉืืืื ืฉื ืืงืฉืืช ืืื ืืจื ื <a href="https://requests.readthedocs.io/en/master/">requests</a>.</li>
<li>ืื ืื ืืืืืืืช ืฉื ืคืืืชืื <a href="https://pip.pypa.io/en/stable/">pip</a>.</li>
<li>ืคืืืคืืจืืช ืืืืืจืืช <a href="https://jupyter-notebook.readthedocs.io/en/stable/">Jupyter Notebooks</a>.</li>
<li>ืืืืืื ืื ืคืืฅ ืืืืชืจ ืืขืืืื ืขื ืืืื ืืชืืืืื <a href="https://numpydoc.readthedocs.io/en/latest/">NumPy</a>.</li>
</ul>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ืกืื ืื ืืช ืชืืขืื</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืฉื ืฉื ืื ืืืคืฉื ืืคืชืื ืคืืืชืื ืืจื ืืืืื ืืืชืจ ืืืชืื ืืืจืืืืช ืชืืขืื.<br>
ืืืื, ืืชืื ืชืื ืืืืืื ืฉืืืืจืื ืืฉ ืฆืืจื ืืืกืืืช ืืืืืืจืช ืืจืืฉ.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขืงืืืช ืืฆืืจื ืืื, ืืชืืืื ืืืชืคืชื ืกืื ืื ืืช ืฉืืืืืจืื ืืฆืืจื ืืืืงื ืืืชืจ ืืืฆื ืืืืจ ืืืืจืืืช ืืชืืื ืฉื ืืืจืืืช ืชืืขืื.<br>
ืืื ืื ืืื, ืืชื ืฉืืืืื? ืื ืืฉืืืืจืื ืืฉ ืชืงื ืืืื ืืคืฉืจ ืืขืฉืืช ืืจืื ืืืจืื ืืื ืืืื:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>ืืคืฉืจ ืืื ืืช ืื ืืข ืืืคืืฉ ืขืืืจ ืชืืขืืื ืืืืืืื.</li>
<li>ืืคืฉืจ ืืืคืขืื ืืืื ืืื Sphinx, ืฉืืกืจืงื ืืช ืืืืืื ืืืฆืจื ืืืชืืขืื ืฉืืฉ ืื ืืชืจ ืชืืขืื โ ืืืืคื ืืืืืืื!</li>
<li>ืืคืฉืจ ืืืคืกืืง ืืืืื ืืื ืืืืชืจ ืืืืืืืืื ืืื ืืจื ืืืื ืขื ืืื ืืคื ืืืชืจ ืืืชืื ๐</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืื ืฉืืืคืชืืื ืื ืืืืช ืืฆืืื ื ืืืืชืจ ืขื ืืืืืืืืื ืืืื ืืจื ืืืื, ืืืื ืื ืงืืื ืืืืจืื ื ืื ืชืงืคื.<br>
ืขื ืืืื ืืชืืืฉื ืฉืืืฉื ืกืื ืื ืืช ืคืืคืืืจืืื ื"ืืื ืืืืจื ืืืจืืืช ืืืืืช ืชืืขืื".
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ื ืืื ืืช ืืืืืืื ืืื ืืกืื ืื ืืช, ืื ืฉืืืจ ืืื ืืืืืจ ืืืืื ืกืื ืื ืชืขืืืคื ืืืฉืชืืฉ.<br>
ืืคื ืืื ืงืื ื ืืื ืืงืฆืจืฆืจ ืฉืื ืื ื ืืืืืื ืืชืขื ืืฉืืจืืช ืืืืืจืช, ืืื ืืื ืืืกืื ืื ืืช ืืืื.<br>
ืืงืื ื ืืืืจ ืืืืงื ืฉื ืกื ืืฃ ืืืืจ, ืฉืชืืคืฉืจ ืืืฉืชืืฉืื ืื ืืฉืืื ืืืืขืืช ืื ืืื.<br>
ืืื ืชืืขืื ืืื, ืืืืืงื ืชืืจืื ืื:
</p>
End of explanation
def show_example():
Show example of using the PostOffice class.
users = ('Newman', 'Mr. Peanutbutter')
post_office = PostOffice(users)
message_id = post_office.send_message(
sender='Mr. Peanutbutter',
recipient='Newman',
message_body='Hello, Newman.',
)
print(f"Successfuly sent message number {message_id}.")
print(post_office.boxes['Newman'])
show_example()
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืื ืชืืจืื ืืืืื ืืคืื ืงืฆืื ืืชืืขืืช ืฉืืืืืื ืืืฆื ืืืืืงื ืขืืืืช:
</p>
End of explanation
class PostOffice:
A Post Office class. Allows users to message each other.
Args:
usernames (list): Users for which we should create PO Boxes.
Attributes:
message_id (int): Incremental id of the last message sent.
boxes (dict): Users' inboxes.
def __init__(self, usernames):
self.message_id = 0
self.boxes = {user: [] for user in usernames}
def send_message(self, sender, recipient, message_body, urgent=False):
Send a message to a recipient.
Args:
sender (str): The message sender's username.
recipient (str): The message recipient's username.
message_body (str): The body of the message.
urgent (bool, optional): The urgency of the message.
Urgent messages appear first.
Returns:
int: The message ID, auto incremented number.
Raises:
KeyError: If the recipient does not exist.
Examples:
After creating a PO box and sending a letter,
the recipient should have 1 message in the
inbox.
>>> po_box = PostOffice(['a', 'b'])
>>> message_id = po_box.send_message('a', 'b', 'Hello!')
>>> len(po_box.boxes['b'])
1
>>> message_id
1
user_box = self.boxes[recipient]
self.message_id = self.message_id + 1
message_details = {
'id': self.message_id,
'body': message_body,
'sender': sender,
}
if urgent:
user_box.insert(0, message_details)
else:
user_box.append(message_details)
return self.message_id
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืืืืืื? ืืืื ืืื ืฉืื ืื ื ืืืืืื ืืชืขื ืืช ืืืืืงื ืืื.<br>
ืงืืืื, ืืขืืืื.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Google Docstrings</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืืื ืืชืืืงืื ืื ืฉื ืื ืจืืืช <a href="https://google.github.io/styleguide/pyguide.html">ืืกืื ืืืกืืืืช</a> ืืจืื ืืฉืืื ืฉืืคืจื ืืช ืกืื ืื ืืืชืืื ืืคื ืืื ืืจืฆืื ืืคืืืชืื ืืืืจื.<br>
ืืืกืื, ืืื ืืืชืจ, <a href="https://google.github.io/styleguide/pyguide.html#383-functions-and-methods">ืืชืืจืื ืืืื</a> ืืืฆื ืื ืืืืื ืื ืฉืฆืจืืืืช ืืืืจืืืช ืืืจืืืืช ืชืืขืื.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืชืืืื ืืจืืืช ืืืืื ืืืืคื ืฉืื ืืืืจืืช ืืืืจืืืช ืืืจืืืืช ืืชืืขืื ืฉื Google <a href="https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html">ืืื</a>.<br>
ื ืจืื ืืืืื ืืชืืขืื ืืืืืงื <var>PostOffice</var> ืืคืขืืืืชืื ืืฉืืื ืฉื ืืืื, ืืืืื ืืืจ ืื ื ื ืชื ืื ืจืืื ื.
</p>
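A nice side effect of the Examples section above is that its interactive >>> form can be executed automatically as a doctest. A minimal sketch (it assumes the documented PostOffice class from the cell above is defined in the current module or notebook):
```python
import doctest

# Runs every >>> example found in the docstrings of the current module
# and reports mismatches between the documented and the actual output.
doctest.testmod(verbose=True)
```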
End of explanation
class PostOffice:
A Post Office class. Allows users to message each other.
Parameters
----------
usernames : list
Users for which we should create PO Boxes.
Attributes
----------
message_id : int
Incremental id of the last message sent.
boxes : dict
Users' inboxes.
def __init__(self, usernames):
self.message_id = 0
self.boxes = {user: [] for user in usernames}
def send_message(self, sender, recipient, message_body, urgent=False):
Send a message to a recipient.
Parameters
----------
sender : str
The message sender's username.
recipient : str
The message recipient's username.
message_body : str
The body of the message.
urgent : bool, optional
The urgency of the message.
Urgent messages appear first.
Returns
-------
int
The message ID, auto incremented number.
Raises
------
KeyError
If the recipient does not exist.
Examples
--------
After creating a PO box and sending a letter,
        the recipient should have 1 message in the
inbox.
>>> po_box = PostOffice(['a', 'b'])
>>> message_id = po_box.send_message('a', 'b', 'Hello!')
>>> len(po_box.boxes['b'])
1
>>> message_id
1
user_box = self.boxes[recipient]
self.message_id = self.message_id + 1
message_details = {
'id': self.message_id,
'body': message_body,
'sender': sender,
}
if urgent:
user_box.insert(0, message_details)
else:
user_box.append(message_details)
return self.message_id
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืคื ืืกืื ืืกืื ืื ืฉื ืืืื, ืืคืขืืื ืฆืจืืืื ืืืืืช 3 ืืืงื ืชืืขืื:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li><em dir="ltr" style="border-style: 1px solid; padding: 0 0.5em;">Args:</em> โ ืจืฉืืืช ืืืจืืืื ืืื ืฉืืื ืืืืืช ืืงืื, ืกืืื ืืืกืืจ ืงืฆืจ ืขื ืื ืืื ืืื.</li>
<li><em dir="ltr" style="border-style: 1px solid; padding: 0 0.5em;">Returns:</em> โ ืืขืจื ืฉืืคืื ืงืฆืื ืืืืืจื ืืืกืื ืฉืื. ืืืงืจื ืฉื generator ืืืืง ืืงืจื <em dir="ltr" style="border-style: 1px solid; padding: 0 0.5em;">Yields :</em> ืืืงืื.</li>
<li><em dir="ltr" style="border-style: 1px solid; padding: 0 0.5em;">Raises:</em> โ ืืฉืืืืืช ืฉืืคืื ืงืฆืื ืขืืืื ืืืจืืง ืืืืืื ืืงืจืื ืื ืขืืื ืืงืจืืช.</li>
<li>ืืคืฉืจ ืืืืกืืฃ ืื ืืืงืื ืืฉืื ื, ืืื <em dir="ltr" style="border-style: 1px solid; padding: 0 0.5em;">Examples:</em> ืฉืืจืื ืืืฆื ืืคืื ืงืฆืื ืคืืขืืช. ืืืืืฅ ืื ืืืืืื ืขื ืื.</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืฉ ืืชืขื ืื ืืืืงืืช ืืืืื:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li><em dir="ltr" style="border-style: 1px solid; padding: 0 0.5em;">Attributes:</em> โ ืืชืืื ืืช ืฉื ืืืืคืขืื ืฉืืืืืฆืจื ืขื ืืื ืืืืืงื.</li>
<li>ืื ืืืงื ืืชืืขืื ืฉืฉืืืืื ืืคืขืืื ืจืืืื, ืืืชืืืืก ืืคืขืืืช ืึพ<code>__init__</code> ืฉื ืืืืืงื.</li>
</ul>
<span style="text-align: right; direction: rtl; float: right; clear: both;">NumPy Docstrings</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
NumPy ืืื ืืืืืื ืืืืืื ืืคืืืชืื ืืื ืืงืฉืืจ ืืืืื ืืชืืืืื.<br>
ืฉืืืช ืืชืืขืื ืฉืื ืื ืืืื ืืื ืฉื ืืืื, ืืืชืืขืืช <a href="https://numpydoc.readthedocs.io/en/latest/format.html">ืืื</a>.<br>
ืืื ืืขื ืงืจืืื ืืืชืจ ืืขืื ืื ืืฉืืช, ืื ืืฉืชืืฉืช ืืืืชืจ ืฉืื ืืืืจื ืืืฃ:
</p>
End of explanation
class PostOffice:
A Post Office class. Allows users to message each other.
:ivar int message_id: Incremental id of the last message sent.
:ivar dict boxes: Users' inboxes.
:param list usernames: Users for which we should create PO Boxes.
def __init__(self, usernames):
self.message_id = 0
self.boxes = {user: [] for user in usernames}
def send_message(self, sender, recipient, message_body, urgent=False):
Send a message to a recipient.
:param str sender: The message sender's username.
:param str recipient: The message recipient's username.
:param str message_body: The body of the message.
:param urgent: The urgency of the message.
:type urgent: bool, optional
:return: The message ID, auto incremented number.
:rtype: int
:raises KeyError: if the recipient does not exist.
user_box = self.boxes[recipient]
self.message_id = self.message_id + 1
message_details = {
'id': self.message_id,
'body': message_body,
'sender': sender,
}
if urgent:
user_box.insert(0, message_details)
else:
user_box.append(message_details)
return self.message_id
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">Sphinx</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ืืขืืจ ืืืืืชื ืืื ืืืฆืืจืช ืืกืืื ืชืืขืื, ืึพSphinx ืงืืืืช ืื ืืืืจื ืืฆืืจื ืฉืื ืืืขืชื ืืืจืืืืช ืชืืขืื ืืืืจืืช ืืืืจืืืช.<br>
ืืื ืืืฅ โ Sphinx ืืืข ืืืืืจ ืืช ืืชืืขืื ืฉืืื ืืืกืื ืื ืื ืชืฉืชืืฉื ืึพGoogle Docstrings ืื ืึพNumPy Docstrings.<br>
ืกืื ืื ืื ืชืืคืก ืืช ืืฉืื ืืืืขืจื ืืืืชืจ ืืืืจื ืืืฃ, ืื ืืื ืืขื ืงืฉื ืืืชืจ ืืงืจืืื.<br>
ืืืจืืืืช ืืชืืขืื ืฉึพSphinx ืืืืืจืื ื ืจืืืช ืื:
</p>
End of explanation |
11,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to py2cytoscape
Step1: Long Description
From version 0.4.0, py2cytoscape has wrapper modules for cyREST RESTful API. This means you can access Cytoscape features in more Pythonic way instead of calling raw REST API via HTTP.
Features
Pandas for basic data exchange
Since pandas is a standard library for data munging/analysis in Python, this new version uses its DataFrame as its basic data object.
Embedded Cytoscape.js Widget
You can use Cytoscape.js widget to embed your final result as a part of your notebook.
Simpler Code to access Cytoscape
cyREST provides language-agnostic RESTful API, but you need to use a lot of template code to access raw API. Here is an example. Both of the following do the same task, which is creating an empty network in Cytoscape. You will notice it is significantly simpler if you use py2cytoscape wrapper API.
Raw cyREST
Step2: With py2cytoscape
Step3: Status
As of 6/4/2015, this is still in alpha status and feature requests are always welcome. If you have questions or feature requests, please send them to our Google Groups
Step4: Creating empty networks
Step5: Load networks from files, URLs or web services
Step6: Create networks from various types of data
Currently, py2cytoscape accepts the following data as input
Step7: Get Network from Cytoscape
You can get network data in the following forms
Step8: Working with CyNetwork API
CyNetwork class is a simple wrapper for network-related cyREST raw REST API. It does not hold the actual network data. It's a reference to a network in current Cytoscape session. With CyNetwork API, you can access Cytoscape data objects in more Pythonista-friendly way.
Step9: Get references from existing networks
And of course, you can grab references to existing Cytoscape networks
Step10: Tables as DataFrame
Cytoscape has two main data types
Step11: Edit Network Topology
Adding and deleteing nodes/edges
Step12: Update Table
Let's do something a bit more realistic. You can update any Tables by using DataFrame objects.
1. ID conversion with external service
Let's use ID Conversion web service by Uniprot to add more information to existing yeast network in current session.
Step13: Create / Delete Table Data
Currently, you cannot delete the table or rows due to the Cytoscape data model design. However, it is easy to create / delete columns
Step14: Visual Styles
You can also use wrapper API to access Visual Styles.
Current limitations are
Step15: Set default values
To set default values for Visual Properties, simply pass key-value pairs as dictionary.
Step16: Visual Mappings
Step17: Layouts
Currently, this supports automatic layouts with default parameters.
Step18: Embed Interactive Widget | Python Code:
from py2cytoscape.data.cynetwork import CyNetwork
from py2cytoscape.data.cyrest_client import CyRestClient
from py2cytoscape.data.style import StyleUtil
import py2cytoscape.util.cytoscapejs as cyjs
import py2cytoscape.cytoscapejs as renderer
import networkx as nx
import pandas as pd
import json
# !!!!!!!!!!!!!!!!! Step 0: Start Cytoscape 3 with cyREST App !!!!!!!!!!!!!!!!!!!!!!!!!!
# Step 1: Create py2cytoscape client
cy = CyRestClient()
# Reset
cy.session.delete()
# Step 2: Load network from somewhere
yeast_net = cy.network.create_from('../tests/data/galFiltered.json')
# Step 3: Load table as pandas' DataFrame
table_data = pd.read_csv('sample_data_table.csv', index_col=0)
table_data.head()
# Step 4: Merge them in Cytoscape
yeast_net.update_node_table(df=table_data, network_key_col='name')
# Step 5: Apply layout
cy.layout.apply(name='force-directed', network=yeast_net)
# Step 6: Create Visual Style as code (or by hand if you prefer)
my_yeast_style = cy.style.create('GAL Style')
basic_settings = {
# You can set default values as key-value pairs.
'NODE_FILL_COLOR': '#6AACB8',
'NODE_SIZE': 55,
'NODE_BORDER_WIDTH': 0,
'NODE_LABEL_COLOR': '#555555',
'EDGE_WIDTH': 2,
'EDGE_TRANSPARENCY': 100,
'EDGE_STROKE_UNSELECTED_PAINT': '#333333',
'NETWORK_BACKGROUND_PAINT': '#FFFFEA'
}
my_yeast_style.update_defaults(basic_settings)
# Create some mappings
my_yeast_style.create_passthrough_mapping(column='COMMON', vp='NODE_LABEL', col_type='String')
degrees = yeast_net.get_node_column('Degree')
color_gradient = StyleUtil.create_2_color_gradient(min=degrees.min(), max=degrees.max(), colors=('white', '#6AACB8'))
degree_to_size = StyleUtil.create_slope(min=degrees.min(), max=degrees.max(), values=(10, 100))
my_yeast_style.create_continuous_mapping(column='Degree', vp='NODE_FILL_COLOR', col_type='Integer', points=color_gradient)
my_yeast_style.create_continuous_mapping(column='Degree', vp='NODE_SIZE', col_type='Integer', points=degree_to_size)
my_yeast_style.create_continuous_mapping(column='Degree', vp='NODE_LABEL_FONT_SIZE', col_type='Integer', points=degree_to_size)
cy.style.apply(my_yeast_style, yeast_net)
# Step 7: (Optional) Embed as interactive Cytoscape.js widget
yeast_net_view = yeast_net.get_first_view()
style_for_widget = cy.style.get(my_yeast_style.get_name(), data_format='cytoscapejs')
renderer.render(yeast_net_view, style=style_for_widget['style'], background='radial-gradient(#FFFFFF 15%, #DDDDDD 105%)')
Explanation: Introduction to py2cytoscape: Pythonista-friendly wrapper for cyREST
<h1 align="center">For</h1>
by Keiichiro Ono - University of California, San Diego Trey Ideker Lab
Requirements
Java 8
Cytoscape 3.2.1+
cyREST 1.1.0+
py2cytoscape 0.4.2+
Q. What is py2cytoscape?
A. A Python package to drive Cytoscape in pythonic way
In a Nutshell...
End of explanation
# HTTP Client for Python
import requests
# Standard JSON library
import json
# Basic Setup
PORT_NUMBER = 1234
BASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/'
# Header for posting data to the server as JSON
HEADERS = {'Content-Type': 'application/json'}
# Define dictionary of empty network
empty_network = {
'data': {
'name': 'I\'m empty!'
},
'elements': {
'nodes':[],
'edges':[]
}
}
res = requests.post(BASE + 'networks?collection=My%20Collection', data=json.dumps(empty_network), headers=HEADERS)
new_network_id = res.json()['networkSUID']
print('New network created with raw REST API. Its SUID is ' + str(new_network_id))
Explanation: Long Description
From version 0.4.0, py2cytoscape has wrapper modules for cyREST RESTful API. This means you can access Cytoscape features in more Pythonic way instead of calling raw REST API via HTTP.
Features
Pandas for basic data exchange
Since pandas is the de facto standard library for data wrangling and analysis in Python, this new version uses its DataFrame as its basic data object.
Embedded Cytoscape.js Widget
You can use the Cytoscape.js widget to embed your final result as part of your notebook.
Simpler Code to access Cytoscape
cyREST provides a language-agnostic RESTful API, but you need a lot of boilerplate code to access the raw API. Here is an example: both of the following do the same task, creating an empty network in Cytoscape. You will notice it is significantly simpler if you use the py2cytoscape wrapper API.
Raw cyREST
End of explanation
network = cy.network.create(name='My Network', collection='My network collection')
print('New network created with py2cytoscape. Its SUID is ' + str(network.get_id()))
Explanation: With py2cytoscape
End of explanation
# Create an instance of cyREST client. Default IP is 'localhost', and port number is 1234.
# cy = CyRestClient() - This default constructor creates connection to http://localhost:1234/v1
cy = CyRestClient(ip='127.0.0.1', port=1234)
# Cleanup: Delete all existing networks and tables in current Cytoscape session
cy.session.delete()
Explanation: Status
As of 6/4/2015, this is still in alpha status and feature requests are always welcome. If you have questions or feature requests, please send them to our Google Group:
https://groups.google.com/forum/#!forum/cytoscape-discuss
Quick Tour of py2cytoscape Features
Create a client object to connect to Cytoscape
End of explanation
# Empty network
empty1 = cy.network.create()
# With name
empty2 = cy.network.create(name='Created in Jupyter Notebook')
# With name and collection name
empty3 = cy.network.create(name='Also created in Jupyter', collection='New network collection')
Explanation: Creating empty networks
End of explanation
# Load a single local file
net_from_local1 = cy.network.create_from('../tests/data/galFiltered.json')
net_from_local2 = cy.network.create_from('sample_yeast_network.xgmml', collection='My Collection')
net_from_local3 = cy.network.create_from('../tests/data/galFiltered.gml', collection='My Collection')
# Load from multiple locations
network_locations = [
'sample_yeast_network.xgmml', # Local file
'http://chianti.ucsd.edu/cytoscape-data/galFiltered.sif', # Static file on a web server
'http://www.ebi.ac.uk/Tools/webservices/psicquic/intact/webservices/current/search/query/brca1?format=xml25' # or a web service
]
# This returns a pandas Series
networks = cy.network.create_from(network_locations)
pd.DataFrame(networks, columns=['CyNetwork'])
Explanation: Load networks from files, URLs or web services
End of explanation
# Cytoscape.js JSON
n1 = cy.network.create(data=cyjs.get_empty_network(), name='Created from Cytoscape.js JSON')
# Pandas DataFrame
# Example 1: From a simple text table
df_from_sif = pd.read_csv('../tests/data/galFiltered.sif', names=['source', 'interaction', 'target'], sep=' ')
df_from_sif.head()
# By default, it uses 'source' for source node column, 'target' for target node column, and 'interaction' for interaction
yeast1 = cy.network.create_from_dataframe(df_from_sif, name='Yeast network created from pandas DataFrame')
# Example 2: from more complicated table
df_from_mitab = pd.read_csv('intact_pubid_22094256.txt', sep='\t')
df_from_mitab.head()
source = df_from_mitab.columns[0]
target = df_from_mitab.columns[1]
interaction = 'Interaction identifier(s)'
title='A Systematic Screen for CDK4/6 Substrates Links FOXM1 Phosphorylation to Senescence Suppression in Cancer Cells.'
human1 = cy.network.create_from_dataframe(df_from_mitab, source_col=source, target_col=target, interaction_col=interaction, name=title)
# Import edge attributes and node attributes at the same time (TBD)
# NetworkX
nx_graph = nx.scale_free_graph(100)
nx.set_node_attributes(nx_graph, 'Degree', nx.degree(nx_graph))
nx.set_node_attributes(nx_graph, 'Betweenness_Centrality', nx.betweenness_centrality(nx_graph))
scale_free100 = cy.network.create_from_networkx(nx_graph, collection='Generated by NetworkX')
# TODO: igraph
# TODO: Numpy adj. martix
# TODO: GraphX
Explanation: Create networks from various types of data
Currently, py2cytoscape accepts the following data as input:
Cytoscape.js
NetworkX
Pandas DataFrame
igraph (TBD)
Numpy adjacency matrix (binary or weighted) (TBD)
GraphX (TBD)
End of explanation
# As Cytoscape.js (dict)
yeast1_json = yeast1.to_json()
# print(json.dumps(yeast1_json, indent=4))
# As NetworkX graph object
sf100 = scale_free100.to_networkx()
num_nodes = sf100.number_of_nodes()
num_edges = sf100.number_of_edges()
print('Number of Nodes: ' + str(num_nodes))
print('Number of Edges: ' + str(num_edges))
# As a simple, SIF-like DataFrame
yeast1_df = yeast1.to_dataframe()
yeast1_df.head()
Explanation: Get Network from Cytoscape
You can get network data in the following forms:
Cytoscape.js
NetworkX
DataFrame
End of explanation
network_suid = yeast1.get_id()
print('This object references to Cytoscape network with SUID ' + str(network_suid) + '\n')
print('And its name is: ' + str(yeast1.get_network_value(column='name')) + '\n')
nodes = yeast1.get_nodes()
edges = yeast1.get_edges()
print('* This network has ' + str(len(nodes)) + ' nodes and ' + str(len(edges)) + ' edges\n')
# Get a row in the node table as pandas Series object
node0 = nodes[0]
row = yeast1.get_node_value(id=node0)
print(row)
# Or, pick one cell in the table
cell = yeast1.get_node_value(id=node0, column='name')
print('\nThis node has name: ' + str(cell))
Explanation: Working with CyNetwork API
The CyNetwork class is a simple wrapper for the network-related cyREST raw REST API. It does not hold the actual network data; it is a reference to a network in the current Cytoscape session. With the CyNetwork API, you can access Cytoscape data objects in a more Pythonista-friendly way.
End of explanation
# Create a new CyNetwork object from existing network
network_ref1 = cy.network.create(suid=yeast1.get_id())
# And they are considered as same objects.
print(network_ref1 == yeast1)
print(network_ref1.get_network_value(column='name'))
Explanation: Get references from existing networks
And of course, you can grab references to existing Cytoscape networks:
End of explanation
# Get table from Cytoscape
node_table = scale_free100.get_node_table()
edge_table = scale_free100.get_edge_table()
network_table = scale_free100.get_network_table()
node_table.head()
network_table.transpose().head()
names = scale_free100.get_node_column('Degree')
print(names.head())
# Node Column information. "name" is the unique Index
scale_free100.get_node_columns()
Explanation: Tables as DataFrame
Cytoscape has two main data types: Network and Table. Network is the graph topology, and Tables are properties for those graphs. For simplicity, this library has access to three basic table objects:
Node Table
Edge Table
Network Table
For 99% of your use cases, you can use these three to store properties. Since pandas is extremely useful for handling table data, the default data type for tables is the DataFrame. However, you can also use other data types, including:
Cytoscape.js style JSON
CSV
TSV
CX (TBD)
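For example, once the node table is back as a DataFrame, ordinary pandas operations apply. A small sketch (not from the original notebook; it assumes the Degree column imported from NetworkX above is present):
# Hypothetical follow-up: filter and sort the node table with plain pandas
high_degree_nodes = node_table[node_table['Degree'] > 10].sort_values('Degree', ascending=False)
print(high_degree_nodes[['name', 'Degree']].head())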
End of explanation
# Add new nodes: Simply send the list of node names. NAMES SHOULD BE UNIQUE!
new_node_names = ['a', 'b', 'c']
# Return value contains dictionary from name to SUID.
new_nodes = scale_free100.add_nodes(new_node_names)
# Add new edges
# Send a list of tuples: (source node SUID, target node SUID, interaction type)
new_edges = []
new_edges.append((new_nodes['a'], new_nodes['b'], 'type1'))
new_edges.append((new_nodes['a'], new_nodes['c'], 'type2'))
new_edges.append((new_nodes['b'], new_nodes['c'], 'type3'))
new_edge_ids = scale_free100.add_edges(new_edges)
new_edge_ids
# Delete node
scale_free100.delete_node(new_nodes['a'])
# Delete edge
scale_free100.delete_edge(new_edge_ids.index[0])
Explanation: Edit Network Topology
Adding and deleting nodes/edges
End of explanation
# Small utility function to convert ID sets
import requests
def uniprot_id_mapping_service(query=None, from_id=None, to_id=None):
# Uniprot ID Mapping service
url = 'http://www.uniprot.org/mapping/'
payload = {
'from': from_id,
'to': to_id,
'format':'tab',
'query': query
}
res = requests.get(url, params=payload)
df = pd.read_csv(res.url, sep='\t')
res.close()
return df
# Get node table from Cytoscape
yeast_node_table = yeast1.get_node_table()
# From KEGG ID to UniprotKB ID
query1 = ' '.join(yeast_node_table['name'].map(lambda gene_id: 'sce:' + gene_id).values)
id_map_kegg2uniprot = uniprot_id_mapping_service(query1, from_id='KEGG_ID', to_id='ID')
id_map_kegg2uniprot.columns = ['kegg', 'uniprot']
# From UniprotKB to SGD
query2 = ' '.join(id_map_kegg2uniprot['uniprot'].values)
id_map_uniprot2sgd = uniprot_id_mapping_service(query2, from_id='ID', to_id='SGD_ID')
id_map_uniprot2sgd.columns = ['uniprot', 'sgd']
# From UniprotKB to Entrez Gene ID
query3 = ' '.join(id_map_kegg2uniprot['uniprot'].values)
id_map_uniprot2ncbi = uniprot_id_mapping_service(query3, from_id='ID', to_id='P_ENTREZGENEID')
id_map_uniprot2ncbi.columns = ['uniprot', 'entrez']
# Merge them
merged = pd.merge(id_map_kegg2uniprot, id_map_uniprot2sgd, on='uniprot')
merged = pd.merge(merged, id_map_uniprot2ncbi, on='uniprot')
# Add key column by removing prefix
merged['name'] = merged['kegg'].map(lambda kegg_id : kegg_id[4:])
merged.head()
update_url = BASE + 'networks/' + str(yeast1.get_id()) + '/tables/defaultnode'
print(update_url)
ut = {
'key': 'name',
'dataKey': 'name',
'data': [
{
'name': 'YBR112C',
'foo': 'aaaaaaaa'
}
]
}
requests.put(update_url, json=ut, headers=HEADERS)
# Now update existing node table with the data frame above.
yeast1.update_node_table(merged, network_key_col='name', data_key_col='name')
# Check the table is actually updated
yeast1.get_node_table().head()
Explanation: Update Table
Let's do something a bit more realistic. You can update any Tables by using DataFrame objects.
1. ID conversion with external service
Let's use ID Conversion web service by Uniprot to add more information to existing yeast network in current session.
End of explanation
# Delete columns
yeast1.delete_node_table_column('kegg')
# Create columns
yeast1.create_node_column(name='New Empty Double Column', data_type='Double', is_immutable=False, is_list=False)
# Default is String, mutable column.
yeast1.create_node_column(name='Empty String Col')
yeast1.get_node_table().head()
Explanation: Create / Delete Table Data
Currently, you cannot delete the table or rows due to the Cytoscape data model design. However, it is easy to create / delete columns:
End of explanation
# Get all existing Visual Styles
import json
styles = cy.style.get_all()
print(json.dumps(styles, indent=4))
# Create a new style
style1 = cy.style.create('sample_style1')
# Get a reference to the existing style
default_style = cy.style.create('default')
print(style1.get_name())
print(default_style.get_name())
# Get all available Visual Properties
print(len(cy.style.vps.get_all()))
# Get Visual Properties for each data type
node_vps = cy.style.vps.get_node_visual_props()
edge_vps = cy.style.vps.get_edge_visual_props()
network_vps = cy.style.vps.get_network_visual_props()
print(pd.Series(edge_vps).head())
Explanation: Visual Styles
You can also use the wrapper API to access Visual Styles.
Current limitations are:
You need to use a unique name for each Style
You need to know how to write the serialized form of the style objects
End of explanation
# Prepare key-value pair for Style defaults
new_defaults = {
# Node defaults
'NODE_FILL_COLOR': '#eeeeff',
'NODE_SIZE': 20,
'NODE_BORDER_WIDTH': 0,
'NODE_TRANSPARENCY': 120,
'NODE_LABEL_COLOR': 'white',
# Edge defaults
'EDGE_WIDTH': 3,
'EDGE_STROKE_UNSELECTED_PAINT': '#aaaaaa',
'EDGE_LINE_TYPE': 'LONG_DASH',
'EDGE_TRANSPARENCY': 120,
# Network defaults
'NETWORK_BACKGROUND_PAINT': 'black'
}
# Update
style1.update_defaults(new_defaults)
# Apply the new style
cy.style.apply(style1, yeast1)
Explanation: Set default values
To set default values for Visual Properties, simply pass key-value pairs as a dictionary.
End of explanation
# Passthrough mapping
style1.create_passthrough_mapping(column='name', col_type='String', vp='NODE_LABEL')
# Discrete mapping: Simply prepare key-value pairs and send it
kv_pair = {
'pp': 'pink',
'pd': 'green'
}
style1.create_discrete_mapping(column='interaction',
col_type='String', vp='EDGE_STROKE_UNSELECTED_PAINT', mappings=kv_pair)
# Continuous mapping
points = [
{
'value': '1.0',
'lesser':'white',
'equal':'white',
'greater': 'white'
},
{
'value': '20.0',
'lesser':'green',
'equal':'green',
'greater': 'green'
}
]
minimal_style = cy.style.create('Minimal')
minimal_style.create_continuous_mapping(column='Degree', col_type='Double', vp='NODE_FILL_COLOR', points=points)
# Or, use utility for simple mapping
simple_slope = StyleUtil.create_slope(min=1, max=20, values=(10, 60))
minimal_style.create_continuous_mapping(column='Degree', col_type='Double', vp='NODE_SIZE', points=simple_slope)
# Apply the new style
cy.style.apply(minimal_style, scale_free100)
Explanation: Visual Mappings
End of explanation
# Get list of available layout algorithms
layouts = cy.layout.get_all()
print(json.dumps(layouts, indent=4))
# Apply layout
cy.layout.apply(name='circular', network=yeast1)
yeast1.get_views()
yeast_view1 = yeast1.get_first_view()
node_views = yeast_view1['elements']['nodes']
df3 = pd.DataFrame(node_views)
df3.head()
Explanation: Layouts
Currently, this supports automatic layouts with default parameters.
End of explanation
from py2cytoscape.cytoscapejs import viewer as cyjs
cy.layout.apply(network=scale_free100)
view1 = scale_free100.get_first_view()
view2 = yeast1.get_first_view()
# print(view1)
cyjs.render(view2, 'default2', background='#efefef')
# Use Cytoscape.js style JSON
cyjs_style = cy.style.get(minimal_style.get_name(), data_format='cytoscapejs')
cyjs.render(view1, style=cyjs_style['style'], background='white')
Explanation: Embed Interactive Widget
End of explanation |
11,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's start with the necessary imports and setup commands
Step1: Loading the data, and getting rid of NAs
Step2: The fitted linear regression model, using statsmodels R style formula API
Step3: Calculations required for some of the plots
Step4: And now, the actual plots
Step5: 2. QQ plot
This one shows how well the distribution of residuals fit the normal distribution. This plots the standardized (z-score) residuals against the theoretical normal quantiles. Anything quite off the diagonal lines may be a concern for further investigation.
For this, I'm using ProbPlot and its qqplot method from statsmodels graphics API. statsmodels actually has a qqplot method that we can use directly, but it's not very customizable, hence this two-step approach. Annotations were a bit tricky, as theoretical quantiles from ProbPlot are already sorted
Step6: 3. Scale-Location Plot
This is another residual plot, showing their spread, which you can use to assess heteroscedasticity.
It's essentially a scatter plot of absolute square-rooted normalized residuals and fitted values, with a lowess regression line. Scatterplot is a standard matplotlib function, lowess line comes from seaborn regplot. Top 3 absolute square-rooted normalized residuals are also annotated
Step7: 4. Leverage plot
This plot shows if any outliers have influence over the regression fit. Anything outside the group and outside "Cook's Distance" lines, may have an influential effect on model fit.
statsmodels has a built-in leverage plot for linear regression, but again, it's not very customizable. Digging around the source of the statsmodels.graphics package, it's pretty straightforward to implement it from scratch and customize with standard matplotlib functions. There are three parts to this plot | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
from statsmodels.graphics.gofplots import ProbPlot
plt.style.use('seaborn') # pretty matplotlib plots
plt.rc('font', size=14)
plt.rc('figure', titlesize=18)
plt.rc('axes', labelsize=15)
plt.rc('axes', titlesize=18)
Explanation: Let's start with the necessary imports and setup commands:
End of explanation
auto = pd.read_csv('../../../../data/ISLR/datasets/Auto.csv', na_values=['?'])
auto.dropna(inplace=True)
auto.reset_index(drop=True, inplace=True)
Explanation: Loading the data, and getting rid of NAs:
End of explanation
model_f = 'mpg ~ cylinders + \
displacement + \
horsepower + \
weight + \
acceleration + \
year + \
origin'
model = smf.ols(formula=model_f, data=auto)
model_fit = model.fit()
Explanation: The fitted linear regression model, using statsmodels R style formula API:
End of explanation
# fitted values (need a constant term for intercept)
model_fitted_y = model_fit.fittedvalues
# model residuals
model_residuals = model_fit.resid
# normalized residuals
model_norm_residuals = model_fit.get_influence().resid_studentized_internal
# absolute squared normalized residuals
model_norm_residuals_abs_sqrt = np.sqrt(np.abs(model_norm_residuals))
# absolute residuals
model_abs_resid = np.abs(model_residuals)
# leverage, from statsmodels internals
model_leverage = model_fit.get_influence().hat_matrix_diag
# cook's distance, from statsmodels internals
model_cooks = model_fit.get_influence().cooks_distance[0]
Explanation: Calculations required for some of the plots:
End of explanation
plot_lm_1 = plt.figure(1)
plot_lm_1.set_figheight(8)
plot_lm_1.set_figwidth(12)
plot_lm_1.axes[0] = sns.residplot(model_fitted_y, 'mpg', data=auto,
lowess=True,
scatter_kws={'alpha': 0.5},
line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8})
plot_lm_1.axes[0].set_title('Residuals vs Fitted')
plot_lm_1.axes[0].set_xlabel('Fitted values')
plot_lm_1.axes[0].set_ylabel('Residuals')
# annotations
abs_resid = model_abs_resid.sort_values(ascending=False)
abs_resid_top_3 = abs_resid[:3]
for i in abs_resid_top_3.index:
plot_lm_1.axes[0].annotate(i,
xy=(model_fitted_y[i],
model_residuals[i]));
Explanation: And now, the actual plots:
1. Residual plot
First plot that's generated by plot() in R is the residual plot, which draws a scatterplot of fitted values against residuals, with a "locally weighted scatterplot smoothing (lowess)" regression line showing any apparent trend.
This one can be easily plotted using seaborn residplot with the fitted values as the x parameter and the dependent variable as y. lowess=True makes sure the lowess regression line is drawn. Additional parameters are passed to the underlying matplotlib scatter and line functions using scatter_kws and line_kws, and titles and labels are set using matplotlib methods. The ; at the end suppresses the stray <matplotlib.text.Text at 0x...> output line above the plot. The top 3 absolute residuals are also annotated:
End of explanation
QQ = ProbPlot(model_norm_residuals)
plot_lm_2 = QQ.qqplot(line='45', alpha=0.5, color='#4C72B0', lw=1)
plot_lm_2.set_figheight(8)
plot_lm_2.set_figwidth(12)
plot_lm_2.axes[0].set_title('Normal Q-Q')
plot_lm_2.axes[0].set_xlabel('Theoretical Quantiles')
plot_lm_2.axes[0].set_ylabel('Standardized Residuals');
# annotations
abs_norm_resid = np.flip(np.argsort(np.abs(model_norm_residuals)), 0)
abs_norm_resid_top_3 = abs_norm_resid[:3]
for r, i in enumerate(abs_norm_resid_top_3):
plot_lm_2.axes[0].annotate(i,
xy=(np.flip(QQ.theoretical_quantiles, 0)[r],
model_norm_residuals[i]));
Explanation: 2. QQ plot
This one shows how well the distribution of residuals fit the normal distribution. This plots the standardized (z-score) residuals against the theoretical normal quantiles. Anything quite off the diagonal lines may be a concern for further investigation.
For this, I'm using ProbPlot and its qqplot method from statsmodels graphics API. statsmodels actually has a qqplot method that we can use directly, but it's not very customizable, hence this two-step approach. Annotations were a bit tricky, as theoretical quantiles from ProbPlot are already sorted:
End of explanation
plot_lm_3 = plt.figure(3)
plot_lm_3.set_figheight(8)
plot_lm_3.set_figwidth(12)
plt.scatter(model_fitted_y, model_norm_residuals_abs_sqrt, alpha=0.5)
sns.regplot(model_fitted_y, model_norm_residuals_abs_sqrt,
scatter=False,
ci=False,
lowess=True,
line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8})
plot_lm_3.axes[0].set_title('Scale-Location')
plot_lm_3.axes[0].set_xlabel('Fitted values')
plot_lm_3.axes[0].set_ylabel('$\sqrt{|Standardized Residuals|}$');
# annotations
abs_sq_norm_resid = np.flip(np.argsort(model_norm_residuals_abs_sqrt), 0)
abs_sq_norm_resid_top_3 = abs_sq_norm_resid[:3]
for i in abs_norm_resid_top_3:
plot_lm_3.axes[0].annotate(i,
xy=(model_fitted_y[i],
model_norm_residuals_abs_sqrt[i]));
Explanation: 3. Scale-Location Plot
This is another residual plot, showing their spread, which you can use to assess heteroscedasticity.
It's essentially a scatter plot of absolute square-rooted normalized residuals and fitted values, with a lowess regression line. Scatterplot is a standard matplotlib function, lowess line comes from seaborn regplot. Top 3 absolute square-rooted normalized residuals are also annotated:
End of explanation
plot_lm_4 = plt.figure(4)
plot_lm_4.set_figheight(8)
plot_lm_4.set_figwidth(12)
plt.scatter(model_leverage, model_norm_residuals, alpha=0.5)
sns.regplot(model_leverage, model_norm_residuals,
scatter=False,
ci=False,
lowess=True,
line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8})
plot_lm_4.axes[0].set_xlim(0, 0.20)
plot_lm_4.axes[0].set_ylim(-3, 5)
plot_lm_4.axes[0].set_title('Residuals vs Leverage')
plot_lm_4.axes[0].set_xlabel('Leverage')
plot_lm_4.axes[0].set_ylabel('Standardized Residuals')
# annotations
leverage_top_3 = np.flip(np.argsort(model_cooks), 0)[:3]
for i in leverage_top_3:
plot_lm_4.axes[0].annotate(i,
xy=(model_leverage[i],
model_norm_residuals[i]))
# shenanigans for cook's distance contours
def graph(formula, x_range, label=None):
x = x_range
y = formula(x)
plt.plot(x, y, label=label, lw=1, ls='--', color='red')
p = len(model_fit.params) # number of model parameters
graph(lambda x: np.sqrt((0.5 * p * (1 - x)) / x),
np.linspace(0.001, 0.200, 50),
'Cook\'s distance') # 0.5 line
graph(lambda x: np.sqrt((1 * p * (1 - x)) / x),
np.linspace(0.001, 0.200, 50)) # 1 line
plt.legend(loc='upper right');
Explanation: 4. Leverage plot
This plot shows if any outliers have influence over the regression fit. Anything outside the group and outside "Cook's Distance" lines, may have an influential effect on model fit.
statsmodels has a built-in leverage plot for linear regression, but again, it's not very customizable. Digging around the source of the statsmodels.graphics package, it's pretty straightforward to implement it from scratch and customize with standard matplotlib functions. There are three parts to this plot: First is the scatterplot of leverage values (got from statsmodels fitted model using get_influence().hat_matrix_diag) vs. standardized residuals. Second one is the lowess regression line for that. And the third and the most tricky part is the Cook's distance lines, which I currently couldn't figure out how to draw in Python. But statsmodels has Cook's distance already calculated, so we can use that to annotate top 3 influencers on the plot:
Update: I think I figured out how to draw Cook's distance ($D_i$) contours for $D_i=0.5$ and $D_i=1$.
The trick was rearranging the formula $p\,D_i = r_i^2\,h_i/(1-h_i)$ to plot the lines at 0.5 and 1.
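Spelling the rearrangement out (a short derivation added here for clarity, using the usual definition of Cook's distance in terms of the standardized residual $r_i$, the leverage $h_i$ and the number of model parameters $p$):
$$D_i = \frac{r_i^2}{p}\cdot\frac{h_i}{1-h_i} \quad\Longrightarrow\quad r_i = \pm\sqrt{\frac{D_i\,p\,(1-h_i)}{h_i}},$$
so for a fixed contour level $D_i = c$ you plot $\sqrt{c\,p\,(1-x)/x}$ against the leverage $x$, which is exactly what the two graph() calls above do for $c=0.5$ and $c=1$.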
End of explanation |
11,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NLTK ์์ฐ์ด ์ฒ๋ฆฌ ํจํค์ง ์๊ฐ
Introduction to the NLTK natural language processing package
The NLTK (Natural Language Toolkit) package is a Python package for natural language processing and document analysis that was originally developed for teaching. It offers a wide range of features and examples and is also widely used in practice and in research.
The main features provided by the NLTK package are:
sample corpora and dictionaries
tokenizing
morphological analysis (stemming/lemmatizing)
part-of-speech tagging
syntax parsing
Sample corpora
A corpus is a collection of sample documents used for analysis work. Some are simply collections of documents such as novels or newspapers, but most add supplementary annotations such as parts of speech and morphemes, organized in a structured form to make analysis easier.
The corpus subpackage of NLTK provides a variety of research corpora such as the following. This list is only a part of the full collection.
book_grammars
Step1: ํ ํฐ ์์ฑ(tokenizing)
๋ฌธ์๋ฅผ ๋ถ์ํ๊ธฐ ์ํด์๋ ์ฐ์ ๊ธด ๋ฌธ์์ด์ ๋ถ์์ ์ํ ์์ ๋จ์๋ก ๋๋์ด์ผ ํ๋ค. ์ด ๋ฌธ์์ด ๋จ์๋ฅผ ํ ํฐ(token)์ด๋ผ๊ณ ํ๋ค.
Step2: ํํ์ ๋ถ์
ํํ์ ๋ถ์์ด๋ ์ด๊ทผ, ์ ๋์ฌ/์ ๋ฏธ์ฌ, ํ์ฌ(POS, part-of-speech) ๋ฑ ๋ค์ํ ์ธ์ด์ ์์ฑ์ ๊ตฌ์กฐ๋ฅผ ํ์
ํ๋ ์์
์ด๋ค. ๊ตฌ์ฒด์ ์ผ๋ก๋ ๋ค์๊ณผ ๊ฐ์ ์์
์ผ๋ก ๋๋๋ค.
stemming (์ด๊ทผ ์ถ์ถ)
lemmatizing (์ํ ๋ณต์)
POS tagging (ํ์ฌ ํ๊น
)
### Stemming and lemmatizing
Step3: POS tagging
POS(part-of-speech)๋ ํ์ฌ๋ฅผ ๋งํ๋ค.
Part-of-Speech Tagset
https | Python Code:
nltk.download('averaged_perceptron_tagger')
nltk.download("gutenberg")
nltk.download('punkt')
nltk.download('reuters')
nltk.download("stopwords")
nltk.download("taggers")
nltk.download("webtext")
nltk.download("wordnet")
nltk.corpus.gutenberg.fileids()
emma_raw = nltk.corpus.gutenberg.raw("austen-emma.txt")
print(emma_raw[:1302])
Explanation: Introduction to the NLTK natural language processing package
The NLTK (Natural Language Toolkit) package is a Python package for natural language processing and document analysis that was originally developed for teaching. It offers a wide range of features and examples and is also widely used in practice and in research.
The main features provided by the NLTK package are:
sample corpora and dictionaries
tokenizing
morphological analysis (stemming/lemmatizing)
part-of-speech tagging
syntax parsing
Sample corpora
A corpus is a collection of sample documents used for analysis work. Some are simply collections of documents such as novels or newspapers, but most add supplementary annotations such as parts of speech and morphemes, organized in a structured form to make analysis easier.
The corpus subpackage of NLTK provides a variety of research corpora such as the following. This list is only a part of the full collection.
averaged_perceptron_tagger Averaged Perceptron Tagger
book_grammars: Grammars from NLTK Book
brown: Brown Corpus
chat80: Chat-80 Data Files
city_database: City Database
comparative_sentences Comparative Sentence Dataset
dependency_treebank. Dependency Parsed Treebank
gutenberg: Project Gutenberg Selections
hmm_treebank_pos_tagger Treebank Part of Speech Tagger (HMM)
inaugural: C-Span Inaugural Address Corpus
large_grammars: Large context-free and feature-based grammars for parser comparison
mac_morpho: MAC-MORPHO: Brazilian Portuguese news text with part-of-speech tags
masc_tagged: MASC Tagged Corpus
maxent_ne_chunker: ACE Named Entity Chunker (Maximum entropy)
maxent_treebank_pos_tagger Treebank Part of Speech Tagger (Maximum entropy)
movie_reviews: Sentiment Polarity Dataset Version 2.0
names: Names Corpus, Version 1.3 (1994-03-29)
nps_chat: NPS Chat
omw: Open Multilingual Wordnet
opinion_lexicon: Opinion Lexicon
pros_cons: Pros and Cons
ptb: Penn Treebank
punkt: Punkt Tokenizer Models
reuters: The Reuters-21578 benchmark corpus, ApteMod version
sample_grammars: Sample Grammars
sentence_polarity: Sentence Polarity Dataset v1.0
sentiwordnet: SentiWordNet
snowball_data: Snowball Data
stopwords: Stopwords Corpus
subjectivity: Subjectivity Dataset v1.0
tagsets: Help on Tagsets
treebank: Penn Treebank Sample
twitter_samples: Twitter Samples
unicode_samples: Unicode Samples
universal_tagset: Mappings to the Universal Part-of-Speech Tagset
universal_treebanks_v20 Universal Treebanks Version 2.0
verbnet: VerbNet Lexicon, Version 2.1
webtext: Web Text Corpus
word2vec_sample: Word2Vec Sample
wordnet: WordNet
words: Word Lists
These corpus resources are not shipped with the package itself; the user has to download them with the download command.
End of explanation
from nltk.tokenize import word_tokenize
word_tokenize(emma_raw[50:100])
from nltk.tokenize import RegexpTokenizer
t = RegexpTokenizer("[\w]+")
t.tokenize(emma_raw[50:100])
from nltk.tokenize import sent_tokenize
print(sent_tokenize(emma_raw[:1000])[3])
Explanation: Tokenizing
To analyze a document, the long source string must first be split into smaller units suitable for analysis. These string units are called tokens.
End of explanation
from nltk.stem import PorterStemmer
st = PorterStemmer()
st.stem("eating")
from nltk.stem import LancasterStemmer
st = LancasterStemmer()
st.stem("shopping")
from nltk.stem import RegexpStemmer
st = RegexpStemmer("ing")
st.stem("cooking")
from nltk.stem import WordNetLemmatizer
lm = WordNetLemmatizer()
print(lm.lemmatize("cooking"))
print(lm.lemmatize("cooking", pos="v"))
print(lm.lemmatize("cookbooks"))
print(WordNetLemmatizer().lemmatize("believes"))
print(LancasterStemmer().stem("believes"))
Explanation: Morphological analysis
Morphological analysis is the task of identifying the structure of various linguistic properties such as roots, prefixes/suffixes, and parts of speech (POS). Specifically, it is divided into the following tasks.
stemming (extracting the word stem)
lemmatizing (recovering the dictionary base form)
POS tagging (part-of-speech tagging)
### Stemming and lemmatizing
End of explanation
from nltk.tag import pos_tag
tagged_list = pos_tag(word_tokenize(emma_raw[:100]))
tagged_list
from nltk.tag import untag
untag(tagged_list)
Explanation: POS tagging
POS (part-of-speech) refers to a word's grammatical category, that is, its part of speech.
Part-of-Speech Tagset
https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.htm
http://www.ibm.com/support/knowledgecenter/ko/SS5RWK_3.5.0/com.ibm.discovery.es.ta.doc/iiysspostagset.htm
End of explanation |
11,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
Step1: Here we can see one of the images.
Step2: Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called fc_model. Importing this, we can easily create a fully-connected network with fc_model.Network, and train the network using fc_model.train. I'll use this model (once it's trained) to demonstrate how we can save and load models.
Step3: Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's state_dict. We can see the state dict contains the weight and bias matrices for each of our layers.
Step4: The simplest thing to do is simply save the state dict with torch.save. For example, we can save it to a file 'checkpoint.pth'.
Step5: Then we can load the state dict with torch.load.
Step6: And to load the state dict in to the network, you do model.load_state_dict(state_dict).
Step7: Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
Step8: This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to compeletely rebuild the model.
Step9: Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
Explanation: Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
End of explanation
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
Explanation: Here we can see one of the images.
End of explanation
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2)
Explanation: Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called fc_model. Importing this, we can easily create a fully-connected network with fc_model.Network, and train the network using fc_model.train. I'll use this model (once it's trained) to demonstrate how we can save and load models.
End of explanation
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
Explanation: Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's state_dict. We can see the state dict contains the weight and bias matrices for each of our layers.
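As a quick illustration (a sketch added here, not part of the original notebook), you can loop over the state dict to see the stored tensor shapes:
# Sketch: inspect the parameter tensors stored in the state dict
for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape))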
End of explanation
torch.save(model.state_dict(), 'checkpoint.pth')
Explanation: The simplest thing to do is simply save the state dict with torch.save. For example, we can save it to a file 'checkpoint.pth'.
End of explanation
state_dict = torch.load('checkpoint.pth')
print(state_dict.keys())
Explanation: Then we can load the state dict with torch.load.
End of explanation
model.load_state_dict(state_dict)
Explanation: And to load the state dict into the network, you do model.load_state_dict(state_dict).
End of explanation
# Try this
model = fc_model.Network(784, 10, [400, 200, 100])
# This will throw an error because the tensor sizes are wrong!
model.load_state_dict(state_dict)
Explanation: Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
End of explanation
checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, 'checkpoint.pth')
Explanation: This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to completely rebuild the model.
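If you also want to resume training later, the same pattern extends naturally; this is a suggestion rather than something the notebook does, but a sketch would be:
# Sketch: also keep the optimizer state so training can resume later
checkpoint['optimizer_state'] = optimizer.state_dict()
torch.save(checkpoint, 'checkpoint.pth')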
End of explanation
def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
model = fc_model.Network(checkpoint['input_size'],
checkpoint['output_size'],
checkpoint['hidden_layers'])
model.load_state_dict(checkpoint['state_dict'])
return model
model = load_checkpoint('checkpoint.pth')
print(model)
Explanation: Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
End of explanation |
11,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Module 12 - Programming Assignment
Directions
There are general instructions on Blackboard and in the Syllabus for Programming Assignments. This Notebook also has instructions specific to this assignment. Read all the instructions carefully and make sure you understand them. Please ask questions on the discussion boards or email me at [email protected] if you do not understand something.
<div style="background
Step1: attributes_domain
A helper function to return a dictionary of attributes, and the domains possible for that attribute.
This is used to start the Naive Bayes algorithm with the appropriate possible attributes and their domains.
A '?' attribute is added to every domain in case a record is missing a value for a given domain. In the Record the value for that domain is expected to have a '?' indicating that for that record the attribute value is unknown.
input
Step2: get_positive_label
A helper function to return the positive label for this implimentation of a Naive Bayes Classifier. Used incase the positive label were to change. "positive" in this context is simply derived from the data set, and that it is a POSITIVE thing to be able to eat a mushroom, thus the label for the dataset e is "Positive". This is the ONLY reason it's called positive.
The label is used in calculating the information gain, as well as determining the majority label of an attribute.
input
Step3: get_negative_label
A helper function to return the negative label for this implimentation of a Naive Bayes Classifier. Used incase the negative label were to change. "Negative" in this context is simply derived from the data set, and that it is a NEGATIVE thing to eat a Poisonous mushroom, thus the label for the dataset p is "Negative". This is the ONLY reason it's called negative.
The label is used in calculating the information gain, as well as determining the majority label of an attribute.
input
Step4: create_record
A helper function to create a record to be used in the Naive Bayes Classifier, given a record from the csv file.
Creates a dictionary that maps the attribute_name to the value of that attribute for a given record.
This is used to transform all of the data read in from the csv file into an easily usable dictionary for Naive Bayes Classifier.
input
Step5: create_distribution_dict
A helper function to create a dictionary that holds the Naive Bayes Classifier distibutions for all of the $P(a_i|c_i)$ probabilities, for each $A$ where $A$ is all attributes and $a_i$ is a domain for a specific attribute.
The dictionary has the following strucutre
Step6: read_file
A helper function to read in the data from a CSV file, and transform it into a list of records, as described in the create_record description.
NOTE
Step7: create_distribution_key
A helper function the key needed to access a given probability in the Naive Bayes Distribution dictionary, described in create_distribution_dict.
input
Step8: put_value_in_distribution
A helper function to increment the count by 1, in the distribution dictionary, of a given key.
Used when counting the number of occurenses of a particular $A=a_i, C=c_i$ when building out the distribution of the training set.
input
Step9: get_label_count
A helper function that returns the number of records that have a given label.
This is used to get the total number of records with a given label.
This value is then used when calculating the normalized probabilites of the distribution, $$P(f_i | c_i) = \frac{Num((f_i,c_i)) + 1}{Num(c_i) + 1}$$
Specifically the $Num(c_i)$ part.
input
Step10: create_percentages
A helper function that, given a distibution of counts for $(f_i, c_i)$ calculates the probability according to
Step11: learn
The main function that learns the distribution for the Naive Bayes Classifier.
The function works as follows
Step12: calculate_probability_of
A helper function that calculates the un_normalized probability of a given instance (record), for a given label.
The un_normalized probability is caclulated as follows
Step13: normalize
A helper function that normalizes a list of probabilities. The list of probabilities is for a single record, and should have the following structure
Step14: classify_instance
A helper that does most of the work to classifiy a given instance of a record.
It works as follows
Step15: classify
A function to classify a list of instances(records).
Given a list of instances (records), classify each instance using classify_instance and put the result into a result list. Return the result list after each instance has been classified.
The Structure of the return list will be a List of lists where each inner list is a list of tuples, as described by the classify_instance function. An example will look as follows
Step16: evaluate
The main evaluation method. Uses a simple $\frac{ Num Errors}{total Data Points}$ to calculate the error rate of the Naive Bayes Classifier.
Given a list of records (test_data) and a list of predicted classifications for that data set, run through both lists, and compare the label for each record to the predicted classification. If they do not match, increase the number of errors seen.
The label for the predicted classification is at position 0 of the predicted probabilities list, and position 0 of the tuple for that holds the label and probability of that label. i.e. for a classifications list that is as follows
Step17: Put your main function calls here.
Set up Training Sets
Shuffle training set to ensure no bias from data order
Step18: Train Naive Bayes 1 on Set 1
Step19: Get Predicted Classifications for Set 2 From Naive Bayes 1
Step20: Evaluate Predicted Set 2 against Actual Set 2
Step21: Train Naive Bayes 2 on Set 2
Step22: Get Predicted Classifications for Set 1 From Naive Bayes 2
Step23: Evaluate Predicted Set 1 against Actual Set 1
Step24: Calculate Average Error for Both Naive Bayes Distributions | Python Code:
from __future__ import division # so that 1/2 = 0.5 and not 0
from IPython.core.display import *
import csv, math, copy, random
Explanation: Module 12 - Programming Assignment
Directions
There are general instructions on Blackboard and in the Syllabus for Programming Assignments. This Notebook also has instructions specific to this assignment. Read all the instructions carefully and make sure you understand them. Please ask questions on the discussion boards or email me at [email protected] if you do not understand something.
<div style="background: mistyrose; color: firebrick; border: 2px solid darkred; padding: 5px; margin: 10px;">
You must follow the directions *exactly* or you will get a 0 on the assignment.
</div>
You must submit a zip file of your assignment and associated files (if there are any) to Blackboard. The zip file will be named after you JHED ID: <jhed_id>.zip. It will not include any other information. Inside this zip file should be the following directory structure:
<jhed_id>
|
+--module-01-programming.ipynb
+--module-01-programming.html
+--(any other files)
For example, do not name your directory programming_assignment_01 and do not name your directory smith122_pr1 or any else. It must be only your JHED ID. Make sure you submit both an .ipynb and .html version of your completed notebook. You can generate the HTML version using:
ipython nbconvert [notebookname].ipynb
or use the File menu.
Naive Bayes Classifier
In this assignment you will be using the mushroom data from the Decision Tree module:
http://archive.ics.uci.edu/ml/datasets/Mushroom
The assignment is to write a program that will learn and apply a Naive Bayes Classifier for this problem. You'll first need to calculate all of the necessary probabilities (don't forget to use +1 smoothing) using a learn function. You'll then need to have a classify function that takes your probabilities, a List of instances (possibly a list of 1) and returns a List of Tuples. Each Tuple is a class and the normalized probability of that class. The List should be sorted so that the probabilities are in descending order. For example,
[("e", 0.98), ("p", 0.02)]
When calculating the error rate of your classifier, you should pick the class with the highest probability (the first one in the list).
As a reminder, the Naive Bayes Classifier generates the un-normalized probabilities from the numerator of Bayes Rule:
$$P(C|A) \propto P(A|C)P(C)$$
where C is the class and A are the attributes (data). Since the normalizer of Bayes Rule is the sum of all possible numerators and you have to calculate them all, the normalizer is just the sum of the probabilities.
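For example (with made-up numbers): if the two un-normalized numerators come out as $P(A|e)P(e) = 0.06$ and $P(A|p)P(p) = 0.02$, then the normalizer is $0.06 + 0.02 = 0.08$ and the normalized posteriors are $0.06/0.08 = 0.75$ and $0.02/0.08 = 0.25$, giving [("e", 0.75), ("p", 0.25)].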
You'll also need an evaluate function as before. You should use the $error_rate$ again.
Use the same testing procedure as last time, on two randomized subsets of the data:
learn the probabilities for set 1
classify set 2
evaluate the predictions
learn the probabilities for set 2
classify set 1
evaluate the predictions
average the classification error.
Imports
End of explanation
def attributes_domains():
return {
'label': ['e', 'p', '?'],
'cap-shape': ['b', 'c', 'x', 'f', 'k', 's', '?'],
'cap-surface': ['f', 'g', 'y', 's', '?'],
'cap-color': ['n', 'b', 'c', 'g', 'r', 'p', 'u', 'e', 'w', 'y', '?'],
'bruises?': ['t', 'f', '?'],
'odor': ['a', 'l', 'c', 'y', 'f', 'm', 'n', 'p', 's', '?'],
'gill-attachment': ['a', 'd', 'f', 'n', '?'],
'gill-spacing': ['c', 'w', 'd', '?'],
'gill-size': ['b', 'n', '?'],
'gill-color': ['k', 'n', 'b', 'h', 'g', 'r', 'o', 'p', 'u', 'e', 'w', 'y', '?'],
'stalk-shape': ['e', 't', '?'],
'salk-root': ['b', 'c', 'u', 'e', 'z', 'r', '?'],
'stalk-surface-above-ring': ['f', 'y', 'k', 's', '?'],
'stalk-surface-below-ring': ['f', 'y', 'k', 's', '?'],
'stalk-color-above-ring': ['n', 'b', 'c', 'g', 'o', 'p', 'e', 'w', 'y', '?'],
'stalk-color-below-ring': ['n', 'b', 'c', 'g', 'o', 'p', 'e', 'w', 'y', '?'],
'veil-type': ['p', 'u', '?'],
'veil-color': ['n', 'o', 'w', 'y', '?'],
'ring-number': ['n', 'o', 't', '?'],
'ring-type': ['c', 'e', 'f', 'l', 'n', 'p', 's', 'z', '?'],
'spore-print-color': ['k', 'n', 'b', 'h', 'r', 'o', 'u', 'w', 'y', '?'],
'population': ['a', 'c', 'n', 's', 'v', 'y', '?'],
'habitat': ['g', 'l', 'm', 'p', 'u', 'w', 'd', '?'],
}
Explanation: attributes_domain
A helper function to return a dictionary of attributes, and the domains possible for that attribute.
This is used to start the Naive Bayes algorithm with the appropriate possible attributes and their domains.
A '?' entry is added to every attribute's domain in case a record is missing a value for that attribute. In such a record the value for that attribute is expected to be '?', indicating that the attribute value is unknown.
input:
None
return:
+ attributes: a dictionary of attribute names as keys and the attributes domain as a list of strings.
End of explanation
def get_positive_label():
return 'e'
Explanation: get_positive_label
A helper function to return the positive label for this implementation of a Naive Bayes Classifier, used in case the positive label were to change. "Positive" in this context is simply derived from the data set: it is a positive thing to be able to eat a mushroom, so the label e is treated as "positive". This is the ONLY reason it's called positive.
The label is used when counting class occurrences and when building the per-class probability distributions.
input:
None
return:
+ the label, a string.
End of explanation
def get_negative_label():
return 'p'
Explanation: get_negative_label
A helper function to return the negative label for this implementation of a Naive Bayes Classifier, used in case the negative label were to change. "Negative" in this context is simply derived from the data set: it is a negative thing to eat a poisonous mushroom, so the label p is treated as "negative". This is the ONLY reason it's called negative.
The label is used when counting class occurrences and when building the per-class probability distributions.
input:
None
return:
+ the label, a string.
End of explanation
def create_record(csv_record):
return {
'label': csv_record[0],
'cap-shape': csv_record[1],
'cap-surface': csv_record[2],
'cap-color': csv_record[3],
'bruises?': csv_record[4],
'odor': csv_record[5],
'gill-attachment': csv_record[6],
'gill-spacing': csv_record[7],
'gill-size': csv_record[8],
'gill-color': csv_record[9],
'stalk-shape': csv_record[10],
'salk-root': csv_record[11],
'stalk-surface-above-ring': csv_record[12],
'stalk-surface-below-ring': csv_record[13],
'stalk-color-above-ring': csv_record[14],
'stalk-color-below-ring': csv_record[15],
'veil-type': csv_record[16],
'veil-color': csv_record[17],
'ring-number': csv_record[18],
'ring-type': csv_record[19],
'spore-print-color': csv_record[20],
'population': csv_record[21],
'habitat': csv_record[22],
}
Explanation: create_record
A helper function to create a record to be used in the Naive Bayes Classifier, given a record from the csv file.
Creates a dictionary that maps the attribute_name to the value of that attribute for a given record.
This is used to transform all of the data read in from the csv file into an easily usable dictionary for Naive Bayes Classifier.
input:
+ csv_record: a list of strings
return:
+ a dictionary that maps attribute_names to the value for that attribute.
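A small usage sketch (the row below is just an example in the same 23-column format as the CSV, not a claim about any particular line of the file):
row = ['p', 'x', 's', 'n', 't', 'p', 'f', 'c', 'n', 'k', 'e', 'e',
       's', 's', 'w', 'w', 'p', 'w', 'o', 'p', 'k', 's', 'u']
record = create_record(row)
print(record['label'], record['odor'])   # prints: p p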
End of explanation
def create_distribution_dict():
attributes_with_domains = attributes_domains()
distribution = {}
for attribute, domains in attributes_with_domains.iteritems():
if attribute == 'label':
continue
for domain in domains:
distribution[(attribute, domain, 'label', get_positive_label())] = 1
distribution[(attribute, domain, 'label', get_negative_label())] = 1
return distribution
Explanation: create_distribution_dict
A helper function to create a dictionary that holds the Naive Bayes Classifier distributions for all of the $P(a_i|c_i)$ probabilities, for every attribute $A$ and every domain value $a_i$ of that attribute.
The dictionary has the following structure:
python
{
(attribute, attribute_domain_value, 'label', label_value) : value
}
The key allows us to specify for which attribute and for which domain value we are creating the distribution, and the 'label' / label_value pair allows us to create the "given $c_i$" part of the distribution.
This dictionary is used first to hold an overall count for each distribution, and is later used to hold the actual probability distribution for the Naive Bayes Classifier.
Note that each count in the distribution is initialized to 1. This accounts for the "+1" smoothing that is needed when the $P(f_i | c_i)$ probabilities are calculated later on.
This is an important function for the algorithm because it specifies how the distribution is stored.
input:
None
return:
+ a dictionary with the structure specified in the above description.
End of explanation
def read_file(path=None):
if path is None:
path = 'agaricus-lepiota.data'
with open(path, 'r') as f:
reader = csv.reader(f)
csv_list = list(reader)
records = []
for value in csv_list:
records.append(create_record(value))
return records
Explanation: read_file
A helper function to read in the data from a CSV file, and transform it into a list of records, as described in the create_record description.
NOTE: If not given a path to a file, it assumes that the file is in your local directory, from which you are running this notebook. It also assumes that the file it is reading is "agaricus-lepiota.data".
The file can be found at https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data
Please also note that this file is the expected format of input for this entire Naive Bayes Classifier implementation.
Please do not try to run this with other data that is not in this format, or have the same bounds as this data set.
input:
+ path (optional): the path to the csv file you wish to read in.
return:
+ records: A list of records. Records have the shape described by the create_record description.
End of explanation
def create_distribution_key(attribute, domain_value, label_value):
return (attribute, domain_value, 'label', label_value)
Explanation: create_distribution_key
A helper function that builds the key needed to access a given probability in the Naive Bayes distribution dictionary, described in create_distribution_dict.
input:
+ attribute: a String that specifies the attribute for the probability to access
+ domain: a string that specifies the domain value for the probability to access
+ label_value: a string that specifies which classification label to use when accessing the probability.
return:
+ a tuple with the structure: (attribute_name, domain_value, 'label', label_value)
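A quick usage sketch:
key = create_distribution_key('odor', 'n', 'e')
print(key)   # ('odor', 'n', 'label', 'e') -- the key used for P(odor = n | label = e)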
End of explanation
def put_value_in_distribution(distribution, attribute, domain_value, label_value):
key = create_distribution_key(attribute, domain_value, label_value)
distribution[key] += 1
Explanation: put_value_in_distribution
A helper function to increment, by 1, the count stored in the distribution dictionary for a given key.
Used when counting the number of occurrences of a particular $A=a_i, C=c_i$ pair while building the distribution from the training set.
input:
+ distribution: a dictionary with the structure specified by create_distribution_dict
+ attribute: a String that specifies the attribute for the probability to access
+ domain: a string that specifies the domain value for the probability to access
+ label_value: a string that specifies which classification label to use when accessing the probability.
return:
None
End of explanation
def get_label_count(records, label):
count = 0
for record in records:
if record['label'] == label:
count += 1
return count
Explanation: get_label_count
A helper function that returns the number of records that have a given label.
This is used to get the total number of records with a given label.
This value is then used when calculating the normalized probabilities of the distribution, $$P(f_i | c_i) = \frac{Num((f_i,c_i)) + 1}{Num(c_i) + 1}$$
Specifically the $Num(c_i)$ part.
input:
+ records: a list of records.
return:
+ count: the number of records with the specified label
End of explanation
def create_percentages(pos_count, neg_count, distribution):
pos_count_plus_1 = pos_count + 1
neg_count_plus_1 = neg_count + 1
pos_label = get_positive_label()
neg_label = get_negative_label()
for key in distribution:
if key[3] == pos_label:
distribution[key] = distribution[key] / pos_count_plus_1
elif key[3] == neg_label:
distribution[key] = distribution[key] / neg_count_plus_1
return distribution
Explanation: create_percentages
A helper function that, given a distribution of counts for $(f_i, c_i)$, calculates the probability according to:
$$P(f_i | c_i) = \frac{Num((f_i,c_i)) + 1}{Num(c_i) + 1}$$
The distribution already contains the "count" for the probability, the $Num((f_i,c_i)) + 1$ part. To calculate the probability, we just divide by the denominator, which is passed in as the count of positive and negative labels.
For each key in the distribution, we determine which $c_i$ it uses, and divide by the appropriate denominator.
These percentages or distributions are then used during the classification step.
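A small worked example with hypothetical counts: if 4 of 7 positive ('e') training records have odor = n, the stored count is 4 + 1 = 5 (because of the +1 initialization) and the resulting probability is 5 / (7 + 1) = 0.625.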
input:
+ pos_count: an int, the number of records with the "positive" label in the training set.
+ neg_count: an int, the number of records with the "negative" label in the training set.
+ distribution: a dictionary, with the structure specified in create_distribution_dict
return:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, now with values that are probabilities rather than raw counts. Probability is calculated according to the above formula.
End of explanation
def learn(records):
distribution = create_distribution_dict()
pos_count = get_label_count(records, get_positive_label())
neg_count = get_label_count(records, get_negative_label())
for record in records:
for attribute, domain_value in record.iteritems():
if attribute == 'label':
continue
put_value_in_distribution(distribution, attribute, domain_value, record['label'])
distribution = create_percentages(pos_count, neg_count, distribution)
distribution[('label', get_positive_label())] = pos_count / (pos_count + neg_count)
distribution[('label', get_negative_label())] = neg_count / (pos_count + neg_count)
return distribution
Explanation: learn
The main function that learns the distribution for the Naive Bayes Classifier.
The function works as follows:
+ Create initial distribution counts
+ get positive label counts
+ get negative label counts
+ for each record in the training set:
+ For each attribute, and domain_value for the attribute:
+ put the value into the distribution (i.e., increment the count for that attribute, domain, and label tuple)
+ the Corresponding value in the distribution is (Attribute, domain_value, 'label', actual label for record)
+ change the distribution from counts to probabilities
+ add special entries in the distribution for the Probability of each possible label.
+ the Probability of a given label is as follows: $P(c_i) = \frac{Num(c_i)}{Size Of Training Set}$
We then return the learned distribution, as our Naive Bayes Classifier.
input:
+ records: a list of records, as described by the create_record function.
return:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, with values that are the probabilites for each $A$ and $C$ so that we have $P(A=a_i | C=c_i)$
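A hypothetical slice of a learned distribution (the attribute name, domain value, and numbers are made up for illustration; the key structure follows the description above):
python
{
    ('odor', 'n', 'label', 'e'): 0.42,   # P(odor='n' | c='e')
    ('odor', 'n', 'label', 'p'): 0.03,   # P(odor='n' | c='p')
    ('label', 'e'): 0.52,                # P(c='e'), the class prior
    ('label', 'p'): 0.48,                # P(c='p'), the class prior
}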
End of explanation
def calculate_probability_of(distribution, instance, label):
    # Start the running product from the class prior P(c_i).
    un_normalized_prob = distribution[('label', label)]
    for attribute, domain_value in instance.iteritems():
        if attribute == 'label':
            continue
        # Multiply in P(f_i | c_i) for this attribute/value pair.
        key = create_distribution_key(attribute, domain_value, label)
        un_normalized_prob *= distribution[key]
    return un_normalized_prob
Explanation: calculate_probability_of
A helper function that calculates the un-normalized probability of a given instance (record) for a given label.
The un-normalized probability is calculated as follows:
$$P(c_i) \prod_i P(f_i | c_i)$$
where $f_i$ is a given attribute and its value, and $c_i$ is a given label.
To calculate this, we iterate through the instance's (record's) attributes and their values, and build the key into the distribution from the attribute, the attribute's value, and the label we wish to calculate the probability for.
The looked-up probability is then multiplied into the running product of the other probabilities.
The running product is initialized to $P(c_i)$ to take care of the initial multiplicative term.
The un-normalized probability is then returned.
This is used when classifying a record, to get the probability that the record should have a certain label.
This is important because this probability is then normalized after the probabilities for all labels are obtained, and is then used to determine how likely it is that the record belongs to a given class label.
input:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, with values that are the probabilities.
+ instance: a record, as described by create_record
+ label: a string that describes a given label value.
return:
+ un_normalized_prob: a float that represents the un-normalized probability that a record belongs to the given class label.
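A worked example with made-up probabilities: for a record with two attributes and the label 'e', the un-normalized probability would be $$P(c=e)\,P(f_1 \mid e)\,P(f_2 \mid e) = 0.52 \times 0.42 \times 0.10 \approx 0.022$$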
End of explanation
def normalize(probability_list):
    sum_of_probabilities = 0
    normalized_list = []
    for prob_tuple in probability_list:
        sum_of_probabilities += prob_tuple[1]
    for prob_tuple in probability_list:
        normalized_prob = prob_tuple[1] / sum_of_probabilities
        normalized_list.append((prob_tuple[0], normalized_prob))
    # Sort so the most probable label sits at index 0.
    normalized_list.sort(key=lambda x: x[1], reverse=True)
    return normalized_list
Explanation: normalize
A helper function that normalizes a list of probabilities. The list of probabilities is for a single record, and should have the following structure:
python
[(label, probability), (label, probability)]
These probabilities should be the un-normalized probabilities for each label.
This function normalizes them by summing the probabilities for all labels, then dividing each label's probability by that sum.
Each normalized probability is then placed into a new list with the same structure and the same corresponding label.
The list of normalized probabilities is then SORTED in descending order, i.e. the label with the highest probability is at index position 0 of the list.
This new normalized list of probabilities is then returned.
This function is important because it calculates the probabilities that are then used to choose which label should describe a record. This is done during validation.
input:
+ probability_list: a list of tuples, as described by: [(label, probability), (label, probability)]
return:
+ normalized_list: a list of tuples, as described by: [(label, probability), (label, probability)], with the probabilities normalized as described above.
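A worked illustration with made-up inputs:
python
normalize([('e', 0.022), ('p', 0.002)])   # sum of probabilities = 0.024
# -> [('e', 0.9166...), ('p', 0.0833...)], sorted with the most likely label first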
End of explanation
def classify_instance(distribution, instance):
    labels = [get_positive_label(), get_negative_label()]
    probability_results = []
    for label in labels:
        probability = calculate_probability_of(distribution, instance, label)
        probability_results.append((label, probability))
    probability_results = normalize(probability_results)
    return probability_results
Explanation: classify_instance
A helper that does most of the work to classify a given instance (record).
It works as follows:
+ create a list of the possible labels
+ initialize a results list
+ for each label:
+ calculate the un-normalized probability of the instance using calculate_probability_of
+ add the probability to the results list as a tuple of (label, un-normalized probability)
+ normalize the probabilities, using normalize
+ note that the list of results (a list of tuples) is now sorted in descending order by the value of the probability
+ return the normalized probabilities for that instance of a record.
This is important because this list describes the probabilities that this record should have each label.
The first tuple in the list is the one whose label has the highest probability for this record.
input:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, with values that are the probabilities.
+ instance: a record, as described by create_record
return:
+ probability_results: a list of tuples with the structure [(label, normalized probability), (label, normalized probability)], sorted in descending order by probability.
NOTE: these are the probabilities for a SINGLE record
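A minimal usage sketch (the names distro and record are placeholders for a learned distribution and a single test record):
python
probs = classify_instance(distro, record)
best_label = probs[0][0]        # e.g. 'e'
best_probability = probs[0][1]  # e.g. 0.91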
End of explanation
def classify(distribution, instances):
    results = []
    for instance in instances:
        results.append(classify_instance(distribution, instance))
    return results
Explanation: classify
A function to classify a list of instances (records).
Given a list of instances (records), classify each instance using classify_instance and put the result into a results list. Return the results list after each instance has been classified.
The structure of the return list is a list of lists, where each inner list is a list of tuples as described by the classify_instance function. An example looks as follows:
python
[ [('e', .999),('p', .001)], [('p', .78), ('e', .22)] ]
The first list [('e', .999),('p', .001)] corresponds to the probabilities for the first instance in the instances list, and the second list to the second instance of the instances list. So on and so forth for each entry in the instances list.
input:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, with values that are the probabilities.
+ instances: a list of records, each as described by create_record
return:
+ results: a list of lists of tuples as described above.
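A minimal usage sketch (the names are placeholders for the objects produced by the training cells below):
python
classifications = classify(distro, test_set)
# classifications[i] holds the sorted (label, probability) pairs for test_set[i]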
End of explanation
def evaluate(test_data, classifications):
    number_of_errors = 0
    for record, classification in zip(test_data, classifications):
        # classification[0][0] is the most probable predicted label for this record.
        if record['label'] != classification[0][0]:
            number_of_errors += 1
    # float() guards against Python 2 integer division returning 0.
    return number_of_errors / float(len(test_data))
Explanation: evaluate
The main evaluation method. Uses a simple $\frac{\text{num errors}}{\text{total data points}}$ to calculate the error rate of the Naive Bayes Classifier.
Given a list of records (test_data) and a list of predicted classifications for that data set, run through both lists and compare the label of each record to the predicted classification. If they do not match, increase the number of errors seen.
The label for the predicted classification is found in the first tuple of the predicted probabilities list (position 0), and the label itself is at position 0 of that tuple. I.e. for a classifications list that is as follows:
python
[ [('e', .999),('p', .001)], [('p', .78), ('e', .22)] ]
The predicted label for record 1 is 'e', since the corresponding predicted probabilities are [('e', .999),('p', .001)]; the most likely label is at position 0 of the list, since the tuples are sorted from most probable to least probable. Position 0 of the list gives us ('e', .999), and the label of this selected tuple is at position 0, which gives us 'e'.
This label is then compared to the actual label of the record for correctness.
Return the number of errors seen divided by the total number of data points. This is the error rate.
input:
+ test_data: a list of records
+ classifications: a list of lists of tuples, as described by the classify function.
return:
+ error_rate: a float that represents the number of errors / total number of data points.
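A worked example with made-up numbers: if 250 of 4000 test records receive the wrong predicted label, the error rate is $$\frac{250}{4000} = 0.0625$$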
End of explanation
# Read all records, shuffle them, and split into two equal halves
# so each half can serve as the training set and the test set in turn.
test_records = read_file()
random.shuffle(test_records)
half_way = int(math.floor(len(test_records) / 2))
set_1 = test_records[:half_way]
set_2 = test_records[half_way:]
Explanation: Put your main function calls here.
Set up the training sets.
Shuffle the records to remove any bias from the original data order, then split them in half; each half is used in turn as the training set and the test set.
End of explanation
distro_1 = learn(set_1)
Explanation: Train Naive Bayes 1 on Set 1
End of explanation
b1_c2 = classify(distro_1, set_2)
Explanation: Get Predicted Classifications for Set 2 From Naive Bayes 1
End of explanation
evaluation_b1_c2 = evaluate(set_2, b1_c2)
print "Error Rate for Naive Bayes 1 with Set 2 = {}".format(evaluation_b1_c2)
Explanation: Evaluate Predicted Set 2 against Actual Set 2
End of explanation
distro_2 = learn(set_2)
Explanation: Train Naive Bayes 2 on Set 2
End of explanation
b2_c1 = classify(distro_2, set_1)
Explanation: Get Predicted Classifications for Set 1 From Naive Bayes 2
End of explanation
evaluation_b2_c1 = evaluate(set_1, b2_c1)
print "Error Rate for Naive Bayes 2 with Set 1 = {}".format(evaluation_b2_c1)
Explanation: Evaluate Predicted Set 1 against Actual Set 1
End of explanation
average_error = (evaluation_b1_c2 + evaluation_b2_c1)/2
print "Average Error Rate: {}".format(average_error)
Explanation: Calculate Average Error for Both Naive Bayes Distributions
End of explanation |