Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, 110 to 62.1k chars) | code_prompt (string, 37 to 152k chars)
---|---|---|
13,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST classification with Vowpal Wabbit
Neural Net
Step1: Train
I found some help with parameters here
Step2: Predict
-t
is for test file
-i
specifies the model file created earlier
-p
where to store the class predictions [1,10]
Step4: Analyze | Python Code:
from __future__ import division
import re
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
#%qtconsole
Explanation: MNIST classification with Vowpal Wabbit
Neural Net
End of explanation
!rm train.vw.cache
!rm mnist_train_nn.model
!vw -d data/mnist_train_pca.vw --cache_file train.vw.cache -f mnist_train_nn.model --nn 200 -b 19 --oaa 10 --passes 55 -l 0.4 --early_terminate 3 --power_t 0.6
Explanation: Train
I found some help with parameters here:
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Tutorial
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments
--cache_file train.cache
converts train_ALL.vw to a binary file for future faster processing.
Next time we go through the model building, we will use the cache file
and not the text file.
--passes
is the number of passes over the training data
--oaa 10
refers to the one-against-all (oaa) learning algorithm with 10 classes (1 to 10)
-q ii
creates interactions between variables in the two referenced namespaces,
which here are the same, i.e. the 'image' namespace.
An interaction variable is created from two variables 'A' and 'B'
by multiplying the values of 'A' and 'B'.
-f mnist_ALL.model
refers to the file where the model will be saved.
-b
refers to the number of bits in the feature table.
The default is 18, but since the interaction features greatly increase the number of features,
the value of '-b' has been increased to 22.
-l rate
Adjust the learning rate. Defaults to 0.5
--power_t p
This specifies the power on the learning rate decay. You can adjust this --power_t p where p is in the range [0,1]. 0 means the learning rate does not decay, which can be helpful when state tracking, while 1 is very aggressive. Defaults to 0.5
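As a minimal illustration of the interaction features described above for -q ii (the numbers and names below are made up for the example, not taken from the dataset), the new feature is simply the product of the two original feature values:
# hypothetical sketch of what a single -q ii interaction feature amounts to
a_value = 0.8   # value of feature 'A' in the 'i' namespace (made-up number)
b_value = -1.5  # value of feature 'B' in the 'i' namespace (made-up number)
interaction_value = a_value * b_value  # the interaction feature A*B = -1.2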
End of explanation
!rm predict.txt
!vw -t data/mnist_test_pca.vw -i mnist_train_nn.model -p predict.txt
Explanation: Predict
-t
is for test file
-i
specifies the model file created earlier
-p
where to store the class predictions [1,10]
End of explanation
y_true=[]
with open("data/mnist_test_pca.vw", 'rb') as f:
for line in f:
m = re.search('^\d+', line)
if m:
found = m.group()
y_true.append(int(found))
y_pred = []
with open("predict.txt", 'rb') as f:
for line in f:
m = re.search('^\d+', line)
if m:
found = m.group()
y_pred.append(int(found))
target_names = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"] # NOTE: plus one
def plot_confusion_matrix(cm,
target_names,
title='Proportional Confusion matrix: VW on PCA',
cmap=plt.cm.Paired):
"""given a confusion matrix (cm), make a nice plot
see the scikit-learn documentation for the original done for the iris dataset"""
plt.figure(figsize=(8, 6))
plt.imshow((cm/cm.sum(axis=1)), interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm = confusion_matrix(y_true, y_pred)
print(cm)
model_accuracy = sum(cm.diagonal())/len(y_pred)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))
plot_confusion_matrix(cm, target_names)
Explanation: Analyze
End of explanation |
13,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras Toy 2D binary classification
Install Keras
https
Step1: Make the dataset
Step2: Make the classifier
Step3: Bonus | Python Code:
import tensorflow as tf
tf.__version__
import keras
keras.__version__
import h5py
h5py.__version__
import pydot
pydot.__version__
Explanation: Keras Toy 2D binary classification
Install Keras
https://keras.io/#installation
Install dependencies
Install TensorFlow backend: https://www.tensorflow.org/install/
pip install tensorflow
Install h5py (required if you plan on saving Keras models to disk): http://docs.h5py.org/en/latest/build.html#wheels
pip install h5py
Install pydot (used by visualization utilities to plot model graphs): https://github.com/pydot/pydot#installation
pip install pydot
Install Keras
pip install keras
Import packages and check versions
End of explanation
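The helper that generates the toy dataset is not shown in this extract, and neither are the numpy/pandas/matplotlib imports the later cells rely on. A minimal stand-in for gen_2d_samples (the exact distribution used by the author is an assumption) could be:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

def gen_2d_samples(n_samples=200):
    # hypothetical generator: two 2D features and a binary label in columns x1, x2, y
    x = np.random.uniform(-1.0, 1.0, size=(n_samples, 2))
    y = (x[:, 0] + x[:, 1] > 0).astype(int)  # simple linear decision boundary
    return pd.DataFrame({'x1': x[:, 0], 'x2': x[:, 1], 'y': y})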
df_train = gen_2d_samples(n_samples=200)
x_train = df_train[['x1', 'x2']].values
y_train = df_train.y.values
ax = df_train.loc[df_train.y == 0].plot.scatter(x='x1', y='x2', color="r")
df_train.loc[df_train.y == 1].plot.scatter(x='x1', y='x2', ax=ax);
df_test = gen_2d_samples(n_samples=200)
x_test = df_test[['x1', 'x2']].values
y_test = df_test.y.values
ax = df_test.loc[df_test.y == 0].plot.scatter(x='x1', y='x2', color="r")
df_test.loc[df_test.y == 1].plot.scatter(x='x1', y='x2', ax=ax);
Explanation: Make the dataset
End of explanation
model = keras.models.Sequential()
model.add(keras.layers.Dense(units=2, activation='relu', input_dim=2))
model.add(keras.layers.Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.summary()
print(model.get_config())
hist = model.fit(x_train, y_train, batch_size=100, epochs=10, verbose=False)
plt.plot(hist.history['loss']);
model.evaluate(x_test, y_test)
y_predicted = model.predict(x_test)
predicted_data = np.concatenate([x_test, y_predicted], axis=1)
predicted_df = pd.DataFrame(predicted_data, columns=['x1', 'x2', 'y'])
ax = predicted_df.loc[predicted_df.y <= 0.5].plot.scatter(x='x1', y='x2', color="r")
predicted_df.loc[predicted_df.y > 0.5].plot.scatter(x='x1', y='x2', ax=ax);
Explanation: Make the classifier
End of explanation
from keras.utils import plot_model
plot_model(model, show_shapes=True, to_file="model.png")
Explanation: Bonus: plot the model
End of explanation |
13,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ipynb for a 2-D CNN for classifying ECGs
Best results found so far used
Step9: Import and process data
Step10: Neural Network
Step11: Test accuracy of model(s)
20% of training data held back for testing (4000 "heartbeats")
Step12: What if the model hasn't seen data from the patient? What then?! | Python Code:
import tensorflow as tf
#import tensorflow.contrib.learn.python.learn as learn
import tflearn
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from random import shuffle, randint
from sklearn.utils import shuffle as mutualShuf
import os
import pandas as pd
import sklearn
import datetime
%matplotlib inline
Explanation: ipynb for a 2-D CNN for classifying ECGs
Best results found so far used:
* 3 VCG leads concatenated
* 200 buffer, 150 shift (looking at QRS -> T lump)
* Input data chunked into 10000 healthy and 10000 unhealthy samples
* Peak finder threshold of 0.02 on differentiated and absoluted input data
(then it is returned to undiff, unabs data before it is fed in)
* Trained over 1 epoch.
* The CNN:
* Conv with 32 features, map 5x3.
* 2x2 max pool.
* Conv 64 features, map 5x3.
* 2x2 max pool.
* 1024 neuron dense layer, L2 regularisation with weight_decay=0.001.
* 50% dropout layer.
* 2 wide softmax layer.
* ADAM optimiser with learning_rate=0.00001.
* Loss function is categorical x-entropy.
This gives a result of Sensitivity: 1.0 Specificity: 0.9965 Accuracy: 0.9982 for data taken from the training set (but not trained with).
And Sensitivity: 0.9988 Specificity: 0.9959 Accuracy: 0.9974 on patients it hasn't seen before.
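For reference (an added sketch, not part of the original notebook), the two metrics quoted above are the usual ones computed from confusion-matrix counts:
def sensitivity_specificity(tp, fn, tn, fp):
    # sensitivity = true positive rate, specificity = true negative rate
    return tp / (tp + fn), tn / (tn + fp)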
End of explanation
def importData(filepath):
ppt = np.genfromtxt(filepath)
dppt = np.diff(np.transpose(ppt))
print(filepath, "Shape:", dppt[1:16,:].shape)
return dppt[1:16,:]
pathIll = "./inData/clean_ecg/ill/"
pathHealth = "./inData/clean_ecg/health/"
illLst = []
healthLst = []
for file in os.listdir(pathIll):
illLst.append(importData(pathIll+file))
for file in os.listdir(pathHealth):
healthLst.append(importData(pathHealth+file))
print("Outputing Frank leads")
healthPat = np.concatenate((healthLst[:]), axis=1)[12:15]
illPat = np.concatenate((illLst[:]), axis=1)[12:15]
print(healthPat.shape, illPat.shape)
def findAbove(arr, threshold, skip):
"""Return indices for values above threshold in array, arr. Keep only first items in sequence."""
inlst = []
for index, item in enumerate(arr):
if item >= threshold:
inlst.append(index)
return inlst[::skip]
def processClassData(classData):
"""Process classData.
Returns a one-hot array of shape [len(classData), 2]."""
# Convert label data to one-hot array
classDataOH = np.zeros((len(classData),2))
classDataOH[np.arange(len(classData)), classData] = 1
return classDataOH
def getSamples(Arr, indexArr, buffer):
"""Get samples for inputting into CNN."""
sampleArr = []
for index, item in enumerate(indexArr):
if Arr[0:, item-buffer:item+buffer].shape != (Arr.shape[0], buffer*2):
pass
else:
sampleArr.append(Arr[0:, item-buffer:item+buffer])
return np.array(sampleArr)
def visualiseData(ecgData, classData, gridSize, axis):
"""Plot labelled example data in a gridSize*gridSize grid."""
fig, ax = plt.subplots(gridSize, gridSize, subplot_kw=dict(projection='3d'))
plt.suptitle("Labelled example data")
r = randint(0,len(classData)-ecgData.shape[1])
k = 0
if gridSize == 1:
ax.plot(ecgData[r+k,0], ecgData[r+k,1], ecgData[r+k,2])
else:
for i in np.arange(0,gridSize,1):
for j in np.arange(0,gridSize,1):
k = k + 1
ax[i,j].plot(ecgData[r+k,0], ecgData[r+k,1], ecgData[r+k,2])
if axis == False:
ax[i,j].axis("off")
ax[i,j].annotate(classData[r+k], xy=(0, 0), xycoords='axes points',\
size=10, ha='left', va='top')
def undiff(ecgData, buffer):
"""Reverse the differentiation done earlier through np.cumsum."""
ecgData = np.array(ecgData)
ecgData = np.reshape(ecgData, (ecgData.shape[0], ecgData.shape[1], buffer*2))
for i in np.arange(0,ecgData.shape[0],1):
for j in np.arange(0,ecgData.shape[1],1):
ecgData[i,j] = np.cumsum(ecgData[i,j])
ecgData = np.reshape(ecgData, (ecgData.shape[0], ecgData.shape[1], buffer*2, 1))
return ecgData
def splitData(coilData, classData):
"""Split data into healthy and ill types."""
illData = []
healthData = []
for index, item in enumerate(classData):
if item == 1:
illData.append(coilData[index])
if item == 0:
healthData.append(coilData[index])
return illData, healthData
def chunkify(lst,n):
"""Chunk a list into n chunks of approximately equal size."""
return [ lst[i::n] for i in range(n) ]
def functionTownCat(illArr, healthArr, illThreshold, healthThreshold, skip, shift, buffer, shuffle):
"""Return the processed ecgData with the leads concatenated into a 2d array per heartbeat
and the classData (one-hot). Also return arrays of ill and healthy ppts.
If shuffle is true, shuffle data."""
illPeakArr = findAbove(np.abs(illArr[0]), illThreshold, skip)
sampleArrI = getSamples(illArr, np.array(illPeakArr), buffer)
healthPeakArr = findAbove(np.abs(healthArr[0]), healthThreshold, skip)
sampleArrH = getSamples(healthArr, np.array(healthPeakArr), buffer)
chunkyI = chunkify(sampleArrI, 10000)
chunkyH = chunkify(sampleArrH , 10000)
avgI = []
avgH = []
for i in np.arange(0,len(chunkyI),1):
avgI.append(np.mean(chunkyI[i], axis=0))
for i in np.arange(0,len(chunkyH),1):
avgH.append(np.mean(chunkyH[i], axis=0))
sampleArrI = np.array(avgI)
sampleArrH = np.array(avgH)
print("Total ill samples", len(illPeakArr), ". Compressed to", sampleArrI.shape)
print("Total healthy samples", len(healthPeakArr), ". Compressed to", sampleArrH.shape)
classData = []
for i in np.arange(0, sampleArrI.shape[0], 1):
classData.append(1)
for i in np.arange(0, sampleArrH.shape[0], 1):
classData.append(0)
ecgData = np.concatenate((sampleArrI, sampleArrH), axis=0)
if shuffle == True:
classData, ecgData = mutualShuf(np.array(classData), ecgData, random_state=0)
classDataOH = processClassData(classData)
ecgData = np.reshape(ecgData, [-1, sampleArrI.shape[1], buffer*2, 1])
return ecgData, classDataOH, classData
buffer = 300
healthThreshold = 0.02
illThreshold = 0.02
skip = 1
shift = 0
shuf = True
ecgData, classDataOH, classData = functionTownCat(illPat, healthPat, illThreshold, healthThreshold, skip,\
shift, buffer, shuf)
# Reintegrate the found values...
ecgData = undiff(ecgData, buffer)
# Take 20% for testing later:
testData = ecgData[:round(ecgData.shape[0]*0.2)]
trainData = ecgData[round(ecgData.shape[0]*0.2):]
testLabels = classDataOH[:round(ecgData.shape[0]*0.2)]
trainLabels = classDataOH[round(ecgData.shape[0]*0.2):]
print(ecgData.shape)
visualiseData(np.reshape(ecgData,(-1,ecgData.shape[1],buffer*2))[:,:], classData, 2, True)
#plt.plot(ecgData[0,0,:]*ecgData[0,1,:])
#plt.savefig("./outData/figures/exampleDataECGundiff.pdf")
print(trainData.shape)
Explanation: Import and process data
End of explanation
sess = tf.InteractiveSession()
tf.reset_default_graph()
tflearn.initializations.normal()
# ecgData = np.zeros((50,12,400,1)) # If ecgData is not defined
# Input layer:
net = tflearn.layers.core.input_data(shape=[None, buffer*2, buffer*2, buffer*2, 1])
# First layer:
net = tflearn.layers.conv.conv_3d(net, 32, 5, activation="leaky_relu")
net = tflearn.layers.conv.max_pool_3d(net, 2)
# Second layer:
net = tflearn.layers.conv.conv_3d(net, 64, 5, activation="leaky_relu")
net = tflearn.layers.conv.max_pool_3d(net, 2)
net = tflearn.layers.core.flatten(net)
# Fully connected layer 1:
net = tflearn.layers.core.fully_connected(net, 1024, regularizer="L2", weight_decay=0.001, activation="leaky_relu")
# Dropout layer:
net = tflearn.layers.core.dropout(net, keep_prob=0.5)
# Output layer:
net = tflearn.layers.core.fully_connected(net, 2, activation="softmax")
net = tflearn.layers.estimator.regression(net, optimizer='adam', loss='categorical_crossentropy',\
learning_rate=0.00001)
model = tflearn.DNN(net, tensorboard_verbose=3)
model.fit(trainData, trainLabels, n_epoch=1, show_metric=True)
# Save model?
#now = datetime.datetime.now()
#model.save("./outData/models/cleanECG_2dconv_12lead_"+now.isoformat()+"_.tflearn")
Explanation: Neural Network
End of explanation
#model.load("./outData/models/cleanECG_undiff_20e_300buff_0shift_2017-02-21T19:20:35.702943_.tflearn")
#model.load("./outData/models/cleanECG_undiff_20e_150buff_2017-02-21T16:15:02.602923_.tflearn")
#model.load("./outData/models/cleanECG_2dconv_12lead_2017-03-08T10:15:17.200943_.tflearn")
#model.load("./outData/models/cleanECG_2dconv_12lead_2017-03-09T18:05:18.655939_.tflearn")
labellst = classData[:round(ecgData.shape[0]*0.2)]
healthTest = []
illTest = []
for index, item in enumerate(labellst):
if item == 1:
illTest.append(testData[index])
if item == 0:
healthTest.append(testData[index])
healthLabel = np.tile([1,0], (len(healthTest), 1))
illLabel = np.tile([0,1], (len(illTest), 1))
print("Sensitivity:", model.evaluate(np.array(healthTest), healthLabel), "Specifity:",\
model.evaluate(np.array(illTest), illLabel),\
"Accuracy:", model.evaluate(testData, testLabels))
Explanation: Test accuracy of model(s)
20% of training data held back for testing (4000 "heartbeats")
End of explanation
tpathIll = "./inData/clean_ecg/testIll/"
tpathHealth = "./inData/clean_ecg/testHealth/"
tillLst = []
thealthLst = []
for file in os.listdir(tpathIll):
tillLst.append(importData(tpathIll+file))
for file in os.listdir(tpathHealth):
thealthLst.append(importData(tpathHealth+file))
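frank = True  # assumption: 'frank' is not defined anywhere in this extract; True matches the training cells above, which used the Frank leads [12:15]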
if frank == False:
print("Outputing standard ECG leads...")
thealth = np.concatenate((thealthLst[:]), axis=1)[0:12]
till = np.concatenate((tillLst[:]), axis=1)[0:12]
elif frank == True:
print("Outputing Frank leads...")
thealth = np.concatenate((thealthLst[:]), axis=1)[12:15]
till = np.concatenate((tillLst[:]), axis=1)[12:15]
print(thealth.shape, till.shape)
unseenData, unseenClassOH, unseenClass = functionTownCat(till, thealth, illThreshold, healthThreshold, \
skip, shift, buffer, True)
# Undifferentiate values
unseenData = undiff(unseenData, buffer)
tillarr, thealtharr = splitData(unseenData, unseenClass)
sens = model.evaluate(np.array(thealtharr), np.tile([1,0], (len(thealtharr), 1)))[0]
spec = model.evaluate(np.array(tillarr), np.tile([0,1], (len(tillarr), 1)))[0]
acc = model.evaluate(unseenData, unseenClassOH)[0]
lenh = len(thealtharr)
leni = len(tillarr)
print("Sensitivity:", sens,\
"Specifity:", spec,\
"Accuracy:", acc)
visualiseData(np.reshape(unseenData,(-1,unseenData.shape[1],buffer*2))[:,:,::20], unseenClass, 3, False)
Explanation: What if the model hasn't seen data from the patient? What then?!
End of explanation |
13,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 11
Step1: Functions for parsing the initial state
Map the floors to integers
Step2: Parse an item (microchip or generator)
Step3: Parse all items on a floor
Step4: Use these functions for parsing the initial items on all floors
Step5: Compact representation of the item positions
Our current representation of the positions of the microchips and generators is inefficient. Assuming that there is exactly one microchip and one generator per element, we can make the following simplifications
Step6: This function can create the compact representation for initialItems
Step7: A state is a tuple the contains the elevator position and the compressed representation of the item positions
Step8: Functions for working with states
Step9: Check if a state is valid
A state is valid unless there is a floor
* which has at least one generator, and
* which has at least one microchip which is not accompanied by the matching generator.
Step10: Calculate all states that can be reached in one step
Step11: Calculate the minimal number of moves to reach the final state
Step12: Solution for Part one
Step13: Part two | Python Code:
with open("input/day11.txt", "r") as f:
inputLines = tuple(line.strip() for line in f)
import itertools
import re
Explanation: Day 11: Radioisotope Thermoelectric Generators
End of explanation
floors = {
"first" : 1,
"second" : 2,
"third" : 3,
"fourth" : 4,
}
Explanation: Functions for parsing the initial state
Map the floors to integers
End of explanation
def parseItem(item):
microchipMatch = re.fullmatch("([a-z]+)-compatible microchip", item)
if microchipMatch is not None:
return microchipMatch.group(1), "M"
generatorMatch = re.fullmatch("([a-z]+) generator", item)
assert generatorMatch is not None
return generatorMatch.group(1), "G"
assert parseItem("hydrogen-compatible microchip") == ("hydrogen", "M")
assert parseItem("lithium generator") == ("lithium", "G")
Explanation: Parse an item (microchip or generator)
End of explanation
def parseFloor(line):
m = re.fullmatch("The ([a-z]+) floor contains (.*).", line)
floor, itemsStr = m.groups()
return tuple(sorted(parseItem(item[2:]) + (floors[floor],)
for item in re.split("(?:,\ )?(?:\ ?and\ )?", itemsStr)
if item.startswith("a ")))
assert (parseFloor("The first floor contains a hydrogen-compatible microchip and a lithium generator.") ==
(("hydrogen", "M", 1),
("lithium", "G", 1)))
assert (parseFloor("The second floor contains a hydrogen generator, and a lithium-compatible microchip.") ==
(("hydrogen", "G", 2),
("lithium", "M", 2)))
assert (parseFloor("The second floor contains a hydrogen generator.") ==
(("hydrogen", "G", 2),))
assert (parseFloor("The third floor contains a lithium-compatible microchip.") ==
(("lithium", "M", 3),))
assert (parseFloor("The fourth floor contains nothing relevant.") ==
())
Explanation: Parse all items on a floor
End of explanation
initialItems = tuple(sorted(itertools.chain.from_iterable(parseFloor(line) for line in inputLines)))
print(initialItems)
Explanation: Use these functions for parsing the initial items on all floors
End of explanation
# Takes an iterable that yields two (element, type, floor) tuples, where
# * the element should be the same for both tuples,
# * the first item should be a generator (type 'G'),
# * the second item should be a microchip (type 'M').
# Returns a tuple that contains only the floors where the generator and the microchip are.
def tupleForElement(items):
result = tuple(floor for element, itemType, floor in items)
assert len(result) == 2
return result
assert tupleForElement((("iron", "G", 3), ("iron", "M", 1))) == (3, 1)
Explanation: Compact representation of the item positions
Our current representation of the positions of the microchips and generators is inefficient. Assuming that there is exactly one microchip and one generator per element, we can make the following simplifications:
* For each element, it is sufficient to store the positions of the generator and the microchip in a tuple with two elements.
* For the solution of the problem, the element names are irrelevant. Therefore, it is sufficient to store only the tuples with the positions of the generator and the microchip for each element, and ignore the element name.
* In order to reduce the problem space, the list of tuples can be sorted: for the number of moves that are needed to solve the puzzle, it does not matter if the positions for two elements are ((2, 3), (1, 1)) or ((1, 1), (2, 3)).
Helper function that generates a position tuple for a single element: tupleForElement
End of explanation
def compressedItems(items):
return tuple(sorted(tupleForElement(itemsForElement)
for _, itemsForElement in itertools.groupby(items, lambda t: t[0])))
assert (compressedItems((("copper", "G", 4), ("copper", "M", 2), ("iron", "G", 1), ("iron", "M", 3)))
== ((1, 3), (4, 2)))
Explanation: This function can create the compact representation for initialItems
End of explanation
initialState = (1, compressedItems(initialItems))
print(initialState)
Explanation: A state is a tuple that contains the elevator position and the compressed representation of the item positions
End of explanation
def isFinalState(state, targetFloor=4):
currentFloor, items = state
return currentFloor == targetFloor and all(item == (targetFloor, targetFloor) for item in items)
Explanation: Functions for working with states
End of explanation
def isValidState(state):
currentFloor, items = state
floorsWithGenerators = set(generatorFloor for generatorFloor, microchipFloor in items)
floorsWithVulnerableMicrochips = set(microchipFloor
for generatorFloor, microchipFloor in items
if generatorFloor != microchipFloor)
return len(floorsWithGenerators & floorsWithVulnerableMicrochips) == 0
assert isValidState((1, ((2, 2), (2, 3), (4, 3), (4, 4))))
assert not isValidState((1, ((2, 2), (2, 3), (4, 2), (4, 4))))
Explanation: Check if a state is valid
A state is valid unless there is a floor
* which has at least one generator, and
* which has at least one microchip which is not accompanied by the matching generator.
End of explanation
def nextStates(state):
currentFloor, items = state
# Put all item positions into a flat list for easier manipulation
flattenedPositions = tuple(itertools.chain.from_iterable(items))
# Find the index (in flattenedPositions) of all items that are on the current floor
onCurrentFloor = tuple(index
for index, pos in enumerate(flattenedPositions)
if pos == currentFloor)
# Each combination of items that can be moved by the elevator from the current floor is
# represented by a tuple in 'candidatesForMoving'.
# Note that the elevator can take either one or two items.
candidatesForMoving = (tuple((i,) for i in onCurrentFloor) +
tuple(itertools.combinations(onCurrentFloor, 2)))
# Calculate the possible new states for each direction (-1: down, +1: up)
for direction in (-1, 1):
newFloor = currentFloor + direction
if newFloor < 1 or newFloor > 4:
continue
for movedIndices in candidatesForMoving:
# 'movedIndices' is a tuple that contains either one index, or two indices (in the list
# 'flattenedPositions') of the items which are moved by the elevator.
# Find the 'flattenedPositions' for the next state if the items in 'candidate' are moved
# to 'newFloor'.
newFlattenedPositions = tuple(newFloor if index in movedIndices else pos
for index, pos in enumerate(flattenedPositions))
# Convert 'newFlattenedPositions' to the compressed format (see above) by
# * grouping neighboring items to 2-element tuples,
# * sorting the list of these tuples.
newItems = tuple(
sorted(tuple(p for _, p in ps)
for _, ps in itertools.groupby(enumerate(newFlattenedPositions),
lambda x: x[0] // 2)))
newState = (newFloor, newItems)
# Only yield the new state if it is valid.
if isValidState(newState):
yield newState
# If there are two microchips and generators on the first floor initially, the elevator can move
# * both microchips, or
# * both generators, or
# * one microchip, or
# * one microchip and its generator
# to the second floor. Moving one generator without its microchip is not possible because this would
# leave this microchip vulnerable on the first floor.
assert set(nextStates((1, ((1, 1), (1, 1))))) == {(2, ((1, 2), (1, 2))),
(2, ((2, 1), (2, 1))),
(2, ((1, 1), (1, 2))),
(2, ((1, 1), (2, 2)))}
Explanation: Calculate all states that can be reached in one step
End of explanation
def movesToFinish(initialState):
currentStates = {initialState}
seenStates = {initialState}
for numberOfMoves in itertools.count():
if any(isFinalState(state) for state in currentStates):
return numberOfMoves
currentStates = set(newState
for state in currentStates
for newState in nextStates(state)
if not newState in seenStates)
seenStates |= currentStates
Explanation: Calculate the minimal number of moves to reach the final state
End of explanation
movesToFinish(initialState)
Explanation: Solution for Part one
End of explanation
initialItems2 = compressedItems(initialItems) + ((1, 1), (1, 1))
initialState2 = (1, initialItems2)
movesToFinish(initialState2)
Explanation: Part two: two more elements with generators and microchips on first floor
End of explanation |
13,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tree-based Methods
Tree-based methods can be used to solve regression and classification problems.
Decision Trees
A decision tree is a tree structure that partitions data points into regions or categories. Each vertex represents a decision to be made. The outgoing edges from a vertex represent possible choices for that decision.
<img src="https
Step1: We can see that 549 people died while the remaining 342 survived. This means that the following histogram is associated with the root node
Step2: The histogram of the left child of the root vertex (female passengers) is
Step3: The right child (male passengers) is associated with the following histogram
Step4: Given a new observation $x'$, if we have reached the right child of the root vertex, then we know that the probability of $x'$ belonging to the Died category is 81 percent | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
df = pd.read_csv('data/titanic-train.csv')
df.head()
df['Survival State'] = df['Survived'].apply(lambda x: 'Survived' if x == 1 else 'Died')
df['Survival State'].value_counts()
Explanation: Tree-based Methods
Tree-based methods can be used to solve regression and classification problems.
Decision Trees
A decision tree is a tree structure that partitions data points into regions or categories. Each vertex represents a decision to be made. The outgoing edges from a vertex represent possible choices for that decision.
<img src="https://tfbarker.files.wordpress.com/2013/12/tree.png" />
Image from <a href="https://tfbarker.wordpress.com/2013/12/22/datamining/">#</a>
For instance, the figure above illustrates a decision tree model for the Titanic data set from Kaggle. Let us see the number of men vs. women in the data.
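As a concrete sketch (added here, not part of the original notebook, and assuming scikit-learn's DecisionTreeClassifier is available), a depth-1 tree fit on the sex feature alone reproduces the split shown in the figure, using the df loaded above:
from sklearn.tree import DecisionTreeClassifier

X = pd.DataFrame({'is_female': (df['Sex'] == 'female').astype(int)})
y = df['Survived']
clf = DecisionTreeClassifier(max_depth=1).fit(X, y)
# predicted [P(died), P(survived)] for a male row and a female row
print(clf.predict_proba(pd.DataFrame({'is_female': [0, 1]})))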
End of explanation
sns.countplot(x='Survival State', order=['Died', 'Survived'], data=df)
Explanation: We can see that 549 people died while the remaining 342 survived. This means that the following histogram is associated with the root node:
End of explanation
sns.countplot(x='Survival State', order=['Died', 'Survived'], data=df[df['Sex'] == 'female'])
Explanation: The histogram of the left child of the root vertex (female passengers) is:
End of explanation
sns.countplot(x='Survival State', order=['Died', 'Survived'], data=df[df['Sex'] == 'male'])
Explanation: The right child (male passengers) is associated with the following histogram:
End of explanation
total_count = df[df['Sex'] == 'male'].shape[0]
died_count = df[(df['Sex'] == 'male') & (df['Survival State'] == 'Died')].shape[0]
probab_pct = round(died_count / total_count * 100, 2)
print('{0} percent'.format(probab_pct))
Explanation: Given a new observation $x'$, if we have reached the right child of the root vertex, then we know that the probability of $x'$ belonging to the Died category is 81 percent:
End of explanation |
13,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Language modeling
Data
The large movie view dataset contains a collection of 50,000 reviews from IMDB. The dataset contains an even number of positive and negative reviews. The authors considered only highly polarized reviews. A negative review has a score ≤ 4 out of 10, and a positive review has a score ≥ 7 out of 10. Neutral reviews are not included in the dataset. The dataset is divided into training and test sets. The training set is the same 25,000 labeled reviews.
The sentiment classification task consists of predicting the polarity (positive or negative) of a given text.
However, before we try to classify sentiment, we will simply try to create a language model; that is, a model that can predict the next word in a sentence. Why? Because our model first needs to understand the structure of English, before we can expect it to recognize positive vs negative sentiment.
So our plan of attack is the same as we used for Dogs v Cats
Step1: Let's look inside the training folder...
Step2: ...and at an example review.
Step3: Sounds like I'd really enjoy Zombiegeddon...
Now we'll check how many words are in the dataset.
Step4: Before we can analyze text, we must first tokenize it. This refers to the process of splitting a sentence into an array of words (or more generally, into an array of tokens).
Step5: We use Pytorch's torchtext library to preprocess our data, telling it to use the wonderful spacy library to handle tokenization.
First, we create a torchtext field, which describes how to preprocess a piece of text - in this case, we tell torchtext to make everything lowercase, and tokenize it with spacy.
Step6: fastai works closely with torchtext. We create a ModelData object for language modeling by taking advantage of LanguageModelData, passing it our torchtext field object, and the paths to our training, test, and validation sets. In this case, we don't have a separate test set, so we'll just use VAL_PATH for that too.
As well as the usual bs (batch size) parameter, we also now have bptt; this defines how many words are processed at a time in each row of the mini-batch. More importantly, it defines how many 'layers' we will backprop through. Making this number higher will increase time and memory requirements, but will improve the model's ability to handle long sentences.
Step7: After building our ModelData object, it automatically fills the TEXT object with a very important attribute
Step8: Here are the
Step9: This is the start of the mapping from integer IDs to unique tokens.
Step10: Note that in a LanguageModelData object there is only one item in each dataset
Step11: torchtext will handle turning this words into integer IDs for us automatically.
Step12: Our LanguageModelData object will create batches with 64 columns (that's our batch size), and varying sequence lengths of around 80 tokens (that's our bptt parameter - backprop through time).
Each batch also contains the exact same data as labels, but one word later in the text - since we're trying to always predict the next word. The labels are flattened into a 1d array.
Step13: Train
We have a number of parameters to set - we'll learn more about these later, but you should find these values suitable for many problems.
Step14: Researchers have found that large amounts of momentum (which we'll learn about later) don't work well with these kinds of RNN models, so we create a version of the Adam optimizer with less momentum than it's default of 0.9.
Step15: fastai uses a variant of the state of the art AWD LSTM Language Model developed by Stephen Merity. A key feature of this model is that it provides excellent regularization through Dropout. There is no simple way known (yet!) to find the best values of the dropout parameters below - you just have to experiment...
However, the other parameters (alpha, beta, and clip) shouldn't generally need tuning.
Step16: As you can see below, I gradually tuned the language model in a few stages. I possibly could have trained it further (it wasn't yet overfitting), but I didn't have time to experiment more. Maybe you can see if you can train it to a better accuracy! (I used lr_find to find a good learning rate, but didn't save the output in this notebook. Feel free to try running it yourself now.)
Step17: In the sentiment analysis section, we'll just need half of the language model - the encoder, so we save that part.
Step18: Language modeling accuracy is generally measured using the metric perplexity, which is simply exp() of the loss function we used.
Step20: Test
We can play around with our language model a bit to check it seems to be working OK. First, let's create a short bit of text to 'prime' a set of predictions. We'll use our torchtext field to numericalize it so we can feed it to our language model.
Step21: We haven't yet added methods to make it easy to test a language model, so we'll need to manually go through the steps.
Step22: Let's see what the top 10 predictions were for the next word after our short text
Step23: ...and let's see if our model can generate a bit more text all by itself!
Step24: Sentiment
We'll need to the saved vocab from the language model, since we need to ensure the same words map to the same IDs.
Step25: sequential=False tells torchtext that a text field should be tokenized (in this case, we just want to store the 'positive' or 'negative' single label).
splits is a torchtext method that creates train, test, and validation sets. The IMDB dataset is built into torchtext, so we can take advantage of that. Take a look at lang_model-arxiv.ipynb to see how to define your own fastai/torchtext datasets.
Step26: fastai can create a ModelData object directly from torchtext splits.
Step27: Because we're fine-tuning a pretrained model, we'll use differential learning rates, and also increase the max gradient for clipping, to allow the SGDR to work better. | Python Code:
PATH='data/aclImdb/'
TRN_PATH = 'train/all/'
VAL_PATH = 'test/all/'
TRN = f'{PATH}{TRN_PATH}'
VAL = f'{PATH}{VAL_PATH}'
%ls {PATH}
Explanation: Language modeling
Data
The large movie review dataset contains a collection of 50,000 reviews from IMDB. The dataset contains an even number of positive and negative reviews. The authors considered only highly polarized reviews. A negative review has a score ≤ 4 out of 10, and a positive review has a score ≥ 7 out of 10. Neutral reviews are not included in the dataset. The dataset is divided into training and test sets. The training set is the same 25,000 labeled reviews.
The sentiment classification task consists of predicting the polarity (positive or negative) of a given text.
However, before we try to classify sentiment, we will simply try to create a language model; that is, a model that can predict the next word in a sentence. Why? Because our model first needs to understand the structure of English, before we can expect it to recognize positive vs negative sentiment.
So our plan of attack is the same as we used for Dogs v Cats: pretrain a model to do one thing (predict the next word), and fine tune it to do something else (classify sentiment).
Unfortunately, there are no good pretrained language models available to download, so we need to create our own. To follow along with this notebook, we suggest downloading the dataset from this location on files.fast.ai.
End of explanation
trn_files = !ls {TRN}
trn_files[:10]
Explanation: Let's look inside the training folder...
End of explanation
review = !cat {TRN}{trn_files[6]}
review[0]
Explanation: ...and at an example review.
End of explanation
!find {TRN} -name '*.txt' | xargs cat | wc -w
!find {VAL} -name '*.txt' | xargs cat | wc -w
Explanation: Sounds like I'd really enjoy Zombiegeddon...
Now we'll check how many words are in the dataset.
End of explanation
' '.join(spacy_tok(review[0]))
Explanation: Before we can analyze text, we must first tokenize it. This refers to the process of splitting a sentence into an array of words (or more generally, into an array of tokens).
End of explanation
TEXT = data.Field(lower=True, tokenize=spacy_tok)
Explanation: We use Pytorch's torchtext library to preprocess our data, telling it to use the wonderful spacy library to handle tokenization.
First, we create a torchtext field, which describes how to preprocess a piece of text - in this case, we tell torchtext to make everything lowercase, and tokenize it with spacy.
End of explanation
bs=64; bptt=70
FILES = dict(train=TRN_PATH, validation=VAL_PATH, test=VAL_PATH)
md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10)
Explanation: fastai works closely with torchtext. We create a ModelData object for language modeling by taking advantage of LanguageModelData, passing it our torchtext field object, and the paths to our training, test, and validation sets. In this case, we don't have a separate test set, so we'll just use VAL_PATH for that too.
As well as the usual bs (batch size) parameter, we also now have bptt; this defines how many words are processed at a time in each row of the mini-batch. More importantly, it defines how many 'layers' we will backprop through. Making this number higher will increase time and memory requirements, but will improve the model's ability to handle long sentences.
End of explanation
pickle.dump(TEXT, open(f'{PATH}models/TEXT.pkl','wb'))
Explanation: After building our ModelData object, it automatically fills the TEXT object with a very important attribute: TEXT.vocab. This is a vocabulary, which stores which words (or tokens) have been seen in the text, and how each word will be mapped to a unique integer id. We'll need to use this information again later, so we save it.
(Technical note: python's standard Pickle library can't handle this correctly, so at the top of this notebook we used the dill library instead and imported it as pickle).
End of explanation
len(md.trn_dl), md.nt, len(md.trn_ds), len(md.trn_ds[0].text)
Explanation: Here are the: # batches; # unique tokens in the vocab; # tokens in the training set; # sentences
End of explanation
# 'itos': 'int-to-string'
TEXT.vocab.itos[:12]
# 'stoi': 'string to int'
TEXT.vocab.stoi['the']
Explanation: This is the start of the mapping from integer IDs to unique tokens.
End of explanation
md.trn_ds[0].text[:12]
Explanation: Note that in a LanguageModelData object there is only one item in each dataset: all the words of the text joined together.
End of explanation
TEXT.numericalize([md.trn_ds[0].text[:12]])
Explanation: torchtext will handle turning these words into integer IDs for us automatically.
End of explanation
next(iter(md.trn_dl))
Explanation: Our LanguageModelData object will create batches with 64 columns (that's our batch size), and varying sequence lengths of around 80 tokens (that's our bptt parameter - backprop through time).
Each batch also contains the exact same data as labels, but one word later in the text - since we're trying to always predict the next word. The labels are flattened into a 1d array.
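As a toy illustration (not from the original notebook) of how inputs and labels line up, for a four-token stream the model sees:
stream = ['the', 'cat', 'sat', 'down']
inputs = stream[:-1]   # ['the', 'cat', 'sat']
labels = stream[1:]    # ['cat', 'sat', 'down'] - the same text shifted one word later, then flattened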
End of explanation
em_sz = 200 # size of each embedding vector
nh = 500 # number of hidden activations per layer
nl = 3 # number of layers
Explanation: Train
We have a number of parameters to set - we'll learn more about these later, but you should find these values suitable for many problems.
End of explanation
opt_fn = partial(optim.Adam, betas=(0.7, 0.99))
Explanation: Researchers have found that large amounts of momentum (which we'll learn about later) don't work well with these kinds of RNN models, so we create a version of the Adam optimizer with less momentum than its default of 0.9.
End of explanation
learner = md.get_model(opt_fn, em_sz, nh, nl,
dropouti=0.05, dropout=0.05, wdrop=0.1, dropoute=0.02, dropouth=0.05)
learner.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)
learner.clip=0.3
Explanation: fastai uses a variant of the state of the art AWD LSTM Language Model developed by Stephen Merity. A key feature of this model is that it provides excellent regularization through Dropout. There is no simple way known (yet!) to find the best values of the dropout parameters below - you just have to experiment...
However, the other parameters (alpha, beta, and clip) shouldn't generally need tuning.
End of explanation
learner.fit(3e-3, 4, wds=1e-6, cycle_len=1, cycle_mult=2)
learner.save_encoder('adam1_enc')
learner.load_encoder('adam1_enc')
learner.load_cycle('adam3_10',2)
learner.fit(3e-3, 1, wds=1e-6, cycle_len=10)
learner.save_encoder('adam3_10_enc')
Explanation: As you can see below, I gradually tuned the language model in a few stages. I possibly could have trained it further (it wasn't yet overfitting), but I didn't have time to experiment more. Maybe you can see if you can train it to a better accuracy! (I used lr_find to find a good learning rate, but didn't save the output in this notebook. Feel free to try running it yourself now.)
End of explanation
learner.save_encoder('adam3_20_enc')
learner.load_encoder('adam3_20_enc')
Explanation: In the sentiment analysis section, we'll just need half of the language model - the encoder, so we save that part.
End of explanation
math.exp(4.165)
pickle.dump(TEXT, open(f'{PATH}models/TEXT.pkl','wb'))
Explanation: Language modeling accuracy is generally measured using the metric perplexity, which is simply exp() of the loss function we used.
End of explanation
m=learner.model
ss=""". So, it wasn't quite was I was expecting, but I really liked it anyway! The best"""
s = [spacy_tok(ss)]
t=TEXT.numericalize(s)
' '.join(s[0])
Explanation: Test
We can play around with our language model a bit to check it seems to be working OK. First, let's create a short bit of text to 'prime' a set of predictions. We'll use our torchtext field to numericalize it so we can feed it to our language model.
End of explanation
# Set batch size to 1
m[0].bs=1
# Turn off dropout
m.eval()
# Reset hidden state
m.reset()
# Get predictions from model
res,*_ = m(t)
# Put the batch size back to what it was
m[0].bs=bs
Explanation: We haven't yet added methods to make it easy to test a language model, so we'll need to manually go through the steps.
End of explanation
nexts = torch.topk(res[-1], 10)[1]
[TEXT.vocab.itos[o] for o in to_np(nexts)]
Explanation: Let's see what the top 10 predictions were for the next word after our short text:
End of explanation
print(ss,"\n")
for i in range(50):
n=res[-1].topk(2)[1]
n = n[1] if n.data[0]==0 else n[0]
print(TEXT.vocab.itos[n.data[0]], end=' ')
res,*_ = m(n[0].unsqueeze(0))
print('...')
Explanation: ...and let's see if our model can generate a bit more text all by itself!
End of explanation
TEXT = pickle.load(open(f'{PATH}models/TEXT.pkl','rb'))
Explanation: Sentiment
We'll need the saved vocab from the language model, since we need to ensure the same words map to the same IDs.
End of explanation
IMDB_LABEL = data.Field(sequential=False)
splits = torchtext.datasets.IMDB.splits(TEXT, IMDB_LABEL, 'data/')
t = splits[0].examples[0]
t.label, ' '.join(t.text[:16])
Explanation: sequential=False tells torchtext that a text field should be tokenized (in this case, we just want to store the 'positive' or 'negative' single label).
splits is a torchtext method that creates train, test, and validation sets. The IMDB dataset is built into torchtext, so we can take advantage of that. Take a look at lang_model-arxiv.ipynb to see how to define your own fastai/torchtext datasets.
End of explanation
md2 = TextData.from_splits(PATH, splits, bs)
m3 = md2.get_model(opt_fn, 1500, bptt, emb_sz=em_sz, n_hid=nh, n_layers=nl,
dropout=0.1, dropouti=0.4, wdrop=0.5, dropoute=0.05, dropouth=0.3)
m3.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)
m3.load_encoder(f'adam3_20_enc')
Explanation: fastai can create a ModelData object directly from torchtext splits.
End of explanation
m3.clip=25.
lrs=np.array([1e-4,1e-3,1e-2])
m3.freeze_to(-1)
m3.fit(lrs/2, 1, metrics=[accuracy])
m3.unfreeze()
m3.fit(lrs, 1, metrics=[accuracy], cycle_len=1)
m3.fit(lrs, 7, metrics=[accuracy], cycle_len=2, cycle_save_name='imdb2')
m3.load_cycle('imdb2', 4)
accuracy(*m3.predict_with_targs())
Explanation: Because we're fine-tuning a pretrained model, we'll use differential learning rates, and also increase the max gradient for clipping, to allow the SGDR to work better.
End of explanation |
13,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
The notebook shows how the lime_image tools can be applied to a smaller dataset like mnist. The dataset is very low resolution and allows quite a bit of rapid-iteration.
Step2: Setup a Pipeline
Here we make a pipeline for processing the images where basically we flatten the image back to 1d vectors and then use a RandomForest Classifier
Step3: Gaining Insight
Can we find an explanation for a classification the algorithm got wrong | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from skimage.color import gray2rgb, rgb2gray, label2rgb # since the code wants color images
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784')
# make each image color so lime_image works correctly
X_vec = np.stack([gray2rgb(iimg) for iimg in mnist.data.reshape((-1, 28, 28))],0).astype(np.uint8)
y_vec = mnist.target.astype(np.uint8)
%matplotlib inline
fig, ax1 = plt.subplots(1,1)
ax1.imshow(X_vec[0], interpolation = 'none')
ax1.set_title('Digit: {}'.format(y_vec[0]))
Explanation: Overview
The notebook shows how the lime_image tools can be applied to a smaller dataset like mnist. The dataset is very low resolution and allows quite a bit of rapid-iteration.
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import Normalizer
class PipeStep(object):
"""Wrapper for turning functions into pipeline transforms (no-fitting)"""
def __init__(self, step_func):
self._step_func=step_func
def fit(self,*args):
return self
def transform(self,X):
return self._step_func(X)
makegray_step = PipeStep(lambda img_list: [rgb2gray(img) for img in img_list])
flatten_step = PipeStep(lambda img_list: [img.ravel() for img in img_list])
simple_rf_pipeline = Pipeline([
('Make Gray', makegray_step),
('Flatten Image', flatten_step),
#('Normalize', Normalizer()),
#('PCA', PCA(16)),
('RF', RandomForestClassifier())
])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_vec, y_vec,
train_size=0.55)
simple_rf_pipeline.fit(X_train, y_train)
%load_ext autoreload
%autoreload 2
import os,sys
try:
import lime
except:
sys.path.append(os.path.join('..', '..')) # add the current directory
import lime
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm
explainer = lime_image.LimeImageExplainer(verbose = False)
segmenter = SegmentationAlgorithm('quickshift', kernel_size=1, max_dist=200, ratio=0.2)
%%time
explanation = explainer.explain_instance(X_test[0],
classifier_fn = simple_rf_pipeline.predict_proba,
top_labels=10, hide_color=0, num_samples=10000, segmentation_fn=segmenter)
temp, mask = explanation.get_image_and_mask(y_test[0], positive_only=True, num_features=10, hide_rest=False, min_weight = 0.01)
fig, (ax1, ax2) = plt.subplots(1,2, figsize = (8, 4))
ax1.imshow(label2rgb(mask,temp, bg_label = 0), interpolation = 'nearest')
ax1.set_title('Positive Regions for {}'.format(y_test[0]))
temp, mask = explanation.get_image_and_mask(y_test[0], positive_only=False, num_features=10, hide_rest=False, min_weight = 0.01)
ax2.imshow(label2rgb(3-mask,temp, bg_label = 0), interpolation = 'nearest')
ax2.set_title('Positive/Negative Regions for {}'.format(y_test[0]))
# now show them for each class
fig, m_axs = plt.subplots(2,5, figsize = (12,6))
for i, c_ax in enumerate(m_axs.flatten()):
temp, mask = explanation.get_image_and_mask(i, positive_only=True, num_features=1000, hide_rest=False, min_weight = 0.01 )
c_ax.imshow(label2rgb(mask,X_test[0], bg_label = 0), interpolation = 'nearest')
c_ax.set_title('Positive for {}\nActual {}'.format(i, y_test[0]))
c_ax.axis('off')
Explanation: Setup a Pipeline
Here we make a pipeline for processing the images where basically we flatten the image back to 1d vectors and then use a RandomForest Classifier
End of explanation
pipe_pred_test = simple_rf_pipeline.predict(X_test)
wrong_idx = np.random.choice(np.where(pipe_pred_test!=y_test)[0])
print('Using #{} where the label was {} and the pipeline predicted {}'.format(wrong_idx, y_test[wrong_idx], pipe_pred_test[wrong_idx]))
%%time
explanation = explainer.explain_instance(X_test[wrong_idx],
classifier_fn = simple_rf_pipeline.predict_proba,
top_labels=10, hide_color=0, num_samples=10000, segmentation_fn=segmenter)
# now show them for each class
fig, m_axs = plt.subplots(2,5, figsize = (12,6))
for i, c_ax in enumerate(m_axs.flatten()):
temp, mask = explanation.get_image_and_mask(i, positive_only=True, num_features=10, hide_rest=False, min_weight = 0.01 )
c_ax.imshow(label2rgb(mask,temp, bg_label = 0), interpolation = 'nearest')
c_ax.set_title('Positive for {}\nActual {}'.format(i, y_test[wrong_idx]))
c_ax.axis('off')
Explanation: Gaining Insight
Can we find an explanation for a classification the algorithm got wrong
End of explanation |
13,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We'll start with this image
Step1: And here it is now that we've blurred it | Python Code:
from PIL import Image, ImageFilter  # import needed by this cell (Pillow)
imgpath = 'images/original/image.bmp'
blurredpath = 'images/image_blurred.bmp'
img = Image.open(imgpath)
blurred = img.copy().filter(ImageFilter.BLUR)
blurred.save(blurredpath)
Explanation: We'll start with this image:
End of explanation
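The compare_images helper used below is not defined in this extract; a minimal sketch consistent with how it is called (what it actually computes is an assumption) might count, per RGB channel, how many pixel values differ between the two files:
import numpy as np
from PIL import Image

def compare_images(path_a, path_b):
    # hypothetical stand-in for the original helper, which is not shown
    a = np.array(Image.open(path_a).convert('RGB'), dtype=np.int16)
    b = np.array(Image.open(path_b).convert('RGB'), dtype=np.int16)
    changed = (a != b)
    return [int(changed[..., c].sum()) for c in range(3)]  # red, green, blue counts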
[red_flipped, green_flipped, blue_flipped] = compare_images(imgpath, blurredpath)
Explanation: And here it is now that we've blurred it:
Now, let's compare the two to see what kind of error rates we can expect:
End of explanation |
13,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Dataframe
Step2: Create Functions To Process Data
Step3: Create A Pipeline Of Those Functions | Python Code:
import pandas as pd
Explanation: Title: Create A Pipeline In Pandas
Slug: pandas_create_pipeline
Summary: Create a pipeline in pandas.
Date: 2017-01-16 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Pandas' pipeline feature allows you to string together Python functions in order to build a pipeline of data processing.
Preliminaries
End of explanation
# Create empty dataframe
df = pd.DataFrame()
# Create a column
df['name'] = ['John', 'Steve', 'Sarah']
df['gender'] = ['Male', 'Male', 'Female']
df['age'] = [31, 32, 19]
# View dataframe
df
Explanation: Create Dataframe
End of explanation
# Create a function that
def mean_age_by_group(dataframe, col):
# groups the data by a column and returns the mean age per group
return dataframe.groupby(col).mean()
# Create a function that
def uppercase_column_name(dataframe):
# Capitalizes all the column headers
dataframe.columns = dataframe.columns.str.upper()
# And returns them
return dataframe
Explanation: Create Functions To Process Data
End of explanation
# Create a pipeline that applies the mean_age_by_group function
(df.pipe(mean_age_by_group, col='gender')
# then applies the uppercase column name function
.pipe(uppercase_column_name)
)
Explanation: Create A Pipeline Of Those Functions
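For comparison (an added note, not from the original post), the same chain without pipe is the nested call, which reads inside-out:
uppercase_column_name(mean_age_by_group(df, col='gender'))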
End of explanation |
13,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.eco - Natural language processing in Python - corrections
Corrections for exercises on natural language processing.
Step1: We download the text data required by the nltk package.
Step2: Exercise 1
Step3: Two possible documents
Step4: The TF-IDF score does not take word order into account. This is a "bag of words" approach.
Step5: The scores for a and b are close. a contains 'green' twice, but b is more than twice as short, so its score is higher. There are other variants of tf-idf; choose the one that best fits your needs.
Exercise 2
The American elections
Step6: Zipf's law
Step7: Vocabulary diversity
Step8: Exercise 3
3-1 Other search terms
Step9: 3-2 Other distance metrics
Step10: Do you think the tf_binary function is justified in our case?
Exercise 4
Step11: Heatmap
Step12: Hierarchical clustering
Step13: The matrix must be symmetric.
Step14: We can see that the documents are, overall, quite different from one another.
Exercise 5
Comparison of the different distance functions.
Step15: To compare the sets pairwise, we can again compute a Jaccard distance... on the sets of collocations. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.eco - Natural language processing in Python - corrections
Corrections for exercises on natural language processing.
End of explanation
import nltk
nltk.download('stopwords')
Explanation: We download the text data required by the nltk package.
End of explanation
corpus = {
'a' : "Mr. Green killed Colonel Mustard in the study with the candlestick. "
"Mr. Green is not a very nice fellow.",
'b' : "Professor Plum has a green plant in his study.",
'c' : "Miss Scarlett watered Professor Plum's green plant while he was away "
"from his office last week."
}
terms = {
'a' : [ i.lower() for i in corpus['a'].split() ],
'b' : [ i.lower() for i in corpus['b'].split() ],
'c' : [ i.lower() for i in corpus['c'].split() ]
}
from math import log
QUERY_TERMS = ['green', 'plant']
def tf(term, doc, normalize=True):
doc = doc.lower().split()
if normalize:
return doc.count(term.lower()) / float(len(doc))
else:
return doc.count(term.lower()) / 1.0
def idf(term, corpus):
num_texts_with_term = len([True for text in corpus if term.lower()
in text.lower().split()])
try:
return 1.0 + log(float(len(corpus)) / num_texts_with_term)
except ZeroDivisionError:
return 1.0
def tf_idf(term, doc, corpus):
return tf(term, doc) * idf(term, corpus)
query_scores = {'a': 0, 'b': 0, 'c': 0}
for term in [t.lower() for t in QUERY_TERMS]:
for doc in sorted(corpus):
score = tf_idf(term, corpus[doc], corpus.values())
query_scores[doc] += score
print("Score TF-IDF total pour le terme '{}'".format(' '.join(QUERY_TERMS), ))
for (doc, score) in sorted(query_scores.items()):
print(doc, score)
Explanation: Exercise 1
End of explanation
QUERY_TERMS = ['plant', 'green']
query_scores = {'a': 0, 'b': 0, 'c': 0}
for term in [t.lower() for t in QUERY_TERMS]:
for doc in sorted(corpus):
score = tf_idf(term, corpus[doc], corpus.values())
query_scores[doc] += score
print("Score TF-IDF total pour le terme '{}'".format(' '.join(QUERY_TERMS), ))
for (doc, score) in sorted(query_scores.items()):
print(doc, score)
Explanation: Two possible documents: b or c (a does not contain the word "plant"). b is shorter, so green plant "weighs" more.
End of explanation
QUERY_TERMS = ['green']
term = [t.lower() for t in QUERY_TERMS]
term = 'green'
query_scores = {'a': 0, 'b': 0, 'c': 0}
for doc in sorted(corpus):
score = tf_idf(term, corpus[doc], corpus.values())
query_scores[doc] += score
print("Score TF-IDF total pour le terme '{}'".format(term))
for (doc, score) in sorted(query_scores.items()):
print(doc, score)
len(corpus['b'])/len(corpus['a'])
Explanation: The TF-IDF score does not take word order into account. This is a "bag of words" approach.
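A quick check of that claim (added here, not part of the original notebook), reusing the corpus and tf_idf defined above: reversing the words of a document leaves its score unchanged, since only counts matter.
reversed_b = ' '.join(reversed(corpus['b'].split()))
print(tf_idf('green', corpus['b'], corpus.values()))
print(tf_idf('green', reversed_b, corpus.values()))  # same value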
End of explanation
import json
import nltk
USER_ID = '107033731246200681024'
with open('./ressources_googleplus/' + USER_ID + '.json', 'r') as f:
activity_results=json.load(f)
all_content = " ".join([ a['object']['content'] for a in activity_results ])
tokens = all_content.split()
text = nltk.Text(tokens)
text.concordance('Hillary')
text.concordance('Trump')
text.concordance('vote')
text.concordance('politics')
fdist = text.vocab()
fdist['Hillary'], fdist['Trump'], fdist['vote'], fdist['politics']
Explanation: The scores for a and b are close. a contains 'green' twice, but b is more than twice as short, so its score is higher. There are other variants of tf-idf; choose the one that best fits your needs.
Exercise 2
US elections
End of explanation
%matplotlib inline
fdist = text.vocab()
no_stopwords = [(k,v) for (k,v) in fdist.items() if k.lower() \
not in nltk.corpus.stopwords.words('english')]
# nltk was ported to Python recently and a few features were lost along the way
# (for example, FreqDist is not always sorted in descending order)
#fdist_no_stopwords = nltk.FreqDist(no_stopwords)
#fdist_no_stopwords.plot(100, cumulative = True)
# the fastest route: go through pandas
import pandas as p
df_nostopwords = p.Series(dict(no_stopwords))
df_nostopwords = df_nostopwords.sort_values(ascending=False)
df_nostopwords.plot();
import matplotlib.pyplot as plt
df_nostopwords=p.Series(dict(no_stopwords))
df_nostopwords = df_nostopwords.sort_values(ascending=False)
df_nostopwords=p.DataFrame(df_nostopwords)
df_nostopwords.rename(columns={0:'count'},inplace=True)
df_nostopwords['one']=1
df_nostopwords['rank']=df_nostopwords['one'].cumsum()
df_nostopwords['zipf_law']=df_nostopwords['count'].iloc[0]/df_nostopwords['rank']
df_nostopwords=df_nostopwords[1:]
plt.plot(df_nostopwords['count'],df_nostopwords['zipf_law'], '.');
df = p.Series(fdist)
df = df.sort_values(ascending=False)
df.plot();
df = p.Series(fdist)
df = df.sort_values(ascending=False)
df=p.DataFrame(df)
df.rename(columns={0:'count'},inplace=True)
df['one']=1
df['rank']=df['one'].cumsum()
df['zipf_law']=df['count'].iloc[0]/df['rank']
df=df[1:]
fig, ax = plt.subplots(1, 1)
ax.plot(df['count'], df['zipf_law'], '.')
ax.set_title("zipf_law");
Explanation: Zipf's law
End of explanation
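For reference, the idealized curve stored in the zipf_law column follows Zipf's law, f(r) ≈ f(1) / r, where f(1) is the count of the most frequent token and r is the rank. A minimal editor's check on the DataFrame built in the previous cell:
# If the counts followed Zipf's law exactly, this ratio would stay close to 1
# for the top-ranked words (reuses the df built just above).
print((df['count'] / df['zipf_law']).head(10))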
def lexical_diversity(token_list):
return len(token_list) / len(set(token_list))
USER_ID = '107033731246200681024'
with open('./ressources_googleplus/' + USER_ID + '.json', 'r') as f:
activity_results=json.load(f)
all_content = " ".join([ a['object']['content'] for a in activity_results ])
tokens = all_content.split()
text = nltk.Text(tokens)
lexical_diversity(tokens)
Explanation: Vocabulary diversity
End of explanation
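To give that number some context, the same ratio can be computed on the three toy documents from Exercise 1 (editor's sketch; with this definition the value is total tokens divided by distinct tokens, so a higher value means more repetition):
# Reuses the corpus dict and the lexical_diversity function defined earlier.
for name, text in sorted(corpus.items()):
    print(name, lexical_diversity(text.lower().split()))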
import json
import nltk
path = 'ressources_googleplus/107033731246200681024.json'
text_data = json.loads(open(path).read())
QUERY_TERMS = ['open','data']
activities = [activity['object']['content'].lower().split() \
for activity in text_data \
if activity['object']['content'] != ""]
# The TextCollection package provides a tf-idf module
tc = nltk.TextCollection(activities)
relevant_activities = []
for idx in range(len(activities)):
score = 0
for term in [t.lower() for t in QUERY_TERMS]:
score += tc.tf_idf(term, activities[idx])
if score > 0:
relevant_activities.append({'score': score, 'title': text_data[idx]['title'],
'url': text_data[idx]['url']})
# Sort by score and display the results
relevant_activities = sorted(relevant_activities,
key=lambda p: p['score'], reverse=True)
c=0
for activity in relevant_activities:
if c < 6:
print(activity['title'])
print('\tLink: {}'.format(activity['url']))
print('\tScore: {}'.format(activity['score']))
c+=1
Explanation: Exercise 3
3-1 Other search terms
End of explanation
from math import log
def tf_binary(term, doc):
doc_l = [d.lower() for d in doc]
    if term.lower() in doc_l:
return 1.0
else:
return 0.0
def tf_rawfreq(term, doc):
doc_l = [d.lower() for d in doc]
return doc_l.count(term.lower())
def tf_lognorm(term,doc):
doc_l = [d.lower() for d in doc]
if doc_l.count(term.lower()) > 0:
return 1.0 + log(doc_l.count(term.lower()))
else:
return 1.0
def idf(term,corpus):
num_texts_with_term = len([True for text in corpus\
if term.lower() in text])
try:
return log(float(len(corpus) / num_texts_with_term))
except ZeroDivisionError:
return 1.0
def idf_init(term, corpus):
num_texts_with_term = len([True for text in corpus\
if term.lower() in text])
try:
return 1.0 + log(float(len(corpus)) / num_texts_with_term)
except ZeroDivisionError:
return 1.0
def idf_smooth(term,corpus):
num_texts_with_term = len([True for text in corpus\
if term.lower() in text])
try:
return log(1.0 + float(len(corpus) / num_texts_with_term))
except ZeroDivisionError:
return 1.0
def tf_idf0(term, doc, corpus):
return tf_binary(term, doc) * idf(term, corpus)
def tf_idf1(term, doc, corpus):
return tf_rawfreq(term, doc) * idf(term, corpus)
def tf_idf2(term, doc, corpus):
return tf_lognorm(term, doc) * idf(term, corpus)
def tf_idf3(term, doc, corpus):
return tf_rawfreq(term, doc) * idf_init(term, corpus)
def tf_idf4(term, doc, corpus):
return tf_lognorm(term, doc) * idf_init(term, corpus)
def tf_idf5(term, doc, corpus):
return tf_rawfreq(term, doc) * idf_smooth(term, corpus)
def tf_idf6(term, doc, corpus):
return tf_lognorm(term, doc) * idf_smooth(term, corpus)
import json
import nltk
path = 'ressources_googleplus/107033731246200681024.json'
text_data = json.loads(open(path).read())
QUERY_TERMS = ['open','data']
activities = [activity['object']['content'].lower().split() \
for activity in text_data \
if activity['object']['content'] != ""]
relevant_activities = []
for idx in range(len(activities)):
score = 0
for term in [t.lower() for t in QUERY_TERMS]:
score += tf_idf1(term, activities[idx],activities)
if score > 0:
relevant_activities.append({'score': score, 'title': text_data[idx]['title'],
'url': text_data[idx]['url']})
# Sort by score and display the results
relevant_activities = sorted(relevant_activities,
key=lambda p: p['score'], reverse=True)
c=0
for activity in relevant_activities:
if c < 6:
print(activity['title'])
print('\tLink: {}'.format(activity['url']))
print('\tScore: {}'.format(activity['score']))
c+=1
Explanation: 3-2 Other distance metrics
End of explanation
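To actually compare the variants defined above, one option is to score the small corpus from Exercise 1 with each of them and look at how the ranking of documents changes. A minimal editor's sketch (it assumes the corpus dict from the first exercise is still in memory):
# Each tf_idf variant is applied to the same query and the same three documents;
# any difference in the rankings comes only from the tf / idf formulas.
variants = [tf_idf0, tf_idf1, tf_idf2, tf_idf3, tf_idf4, tf_idf5, tf_idf6]
toy_docs = {name: text.lower().split() for name, text in corpus.items()}
for variant in variants:
    scores = {name: sum(variant(t, tokens, toy_docs.values()) for t in ['green', 'plant'])
              for name, tokens in toy_docs.items()}
    print(variant.__name__, sorted(scores.items(), key=lambda kv: -kv[1]))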
import json
import nltk
path = 'ressources_googleplus/107033731246200681024.json'
data = json.loads(open(path).read())
# Select the posts whose content is longer than 1000 characters
data = [ post for post in json.loads(open(path).read()) \
if len(post['object']['content']) > 1000 ]
all_posts = [post['object']['content'].lower().split()
for post in data ]
tc = nltk.TextCollection(all_posts)
# Build a (search term x document) matrix
# Each entry is the tf-idf score of the term in the document
td_matrix = {}
for idx in range(len(all_posts)):
post = all_posts[idx]
fdist = nltk.FreqDist(post)
doc_title = data[idx]['title']
url = data[idx]['url']
td_matrix[(doc_title, url)] = {}
for term in fdist.keys():
td_matrix[(doc_title, url)][term] = tc.tf_idf(term, post)
distances = {}
for (title1, url1) in td_matrix.keys():
distances[(title1, url1)] = {}
(min_dist, most_similar) = (1.0, ('', ''))
for (title2, url2) in td_matrix.keys():
        # copy the values (a dictionary is mutable)
terms1 = td_matrix[(title1, url1)].copy()
terms2 = td_matrix[(title2, url2)].copy()
        # fill in the gaps so that both vectors have the same length
for term1 in terms1:
if term1 not in terms2:
terms2[term1] = 0
for term2 in terms2:
if term2 not in terms1:
terms1[term2] = 0
        # build score vectors over all the terms of each document
v1 = [score for (term, score) in sorted(terms1.items())]
v2 = [score for (term, score) in sorted(terms2.items())]
        # document similarity: cosine distance between the two tf-idf score vectors
distances[(title1, url1)][(title2, url2)] = \
nltk.cluster.util.cosine_distance(v1, v2)
import pandas as p
df = p.DataFrame(distances)
df.index = df.index.droplevel(0)
df.iloc[:3,:3]
knn_post7EaHeYc1BiB = df.loc['https://plus.google.com/+TimOReilly/posts/7EaHeYc1BiB']
knn_post7EaHeYc1BiB.sort_values()
# post [0] is the document itself
knn_post7EaHeYc1BiB[1:6]
Explanation: Do you think the tf_binary function is justified in our case?
Exercise 4
End of explanation
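One way to reason about the tf_binary question above is to look at what it throws away: it collapses any repetition of a term inside a post into a single presence flag, whereas tf_rawfreq keeps the count. A small editor's sketch on the first post of the activities list built earlier:
# Editor's illustration: a post mentioning a query term many times gets the
# same tf_binary weight as a post mentioning it once.
sample = activities[0]
for term in ['open', 'data']:
    print(term, 'binary:', tf_binary(term, sample), 'raw freq:', tf_rawfreq(term, sample))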
import pandas as p
import seaborn as sns; sns.set()
import matplotlib.pyplot as plt
fig = plt.figure( figsize=(8,8) )
ax = fig.add_subplot(111)
df = p.DataFrame(distances)
for i in range(len(df)):
df.iloc[i,i]=0
pal = sns.light_palette((210, 90, 60), input="husl",as_cmap=True)
g = sns.heatmap(df, yticklabels=True, xticklabels=True, cbar=False, cmap=pal);
Explanation: Heatmap
End of explanation
import scipy.spatial as sp, scipy.cluster.hierarchy as hc
df = p.DataFrame(distances)
for i in range(len(df)):
df.iloc[i,i] = 0
Explanation: Hierarchical clustering
End of explanation
mat = df.values
mat = (mat + mat.T) / 2
dist = sp.distance.squareform(mat)
from pkg_resources import parse_version
import scipy
if parse_version(scipy.__version__) <= parse_version('0.17.1'):
    # There can be a few issues with the Ward method on old scipy versions
data_link = hc.linkage(dist, method='single')
else:
data_link = hc.linkage(dist, method='ward')
fig = plt.figure( figsize=(8,8) )
g = sns.clustermap(df, row_linkage=data_link, col_linkage=data_link)
# the underlying axes object, it is a bit hidden :)
ax = g.ax_heatmap
ax;
Explanation: The matrix must be symmetric.
End of explanation
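A quick way to check that the symmetrisation step above did its job before calling squareform (editor's sketch; numpy is imported explicitly so the check is self-contained):
import numpy as np
# After mat = (mat + mat.T) / 2 the matrix is symmetric up to floating-point
# noise, which is what sp.distance.squareform() expects.
print(np.allclose(mat, mat.T))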
import json
import nltk
path = 'ressources_googleplus/107033731246200681024.json'
data = json.loads(open(path).read())
# Number of collocations to find
N = 25
all_tokens = [token for activity in data for token in \
activity['object']['content'].lower().split()]
finder = nltk.BigramCollocationFinder.from_words(all_tokens)
finder.apply_freq_filter(2)
# filter out overly frequent words (English stopwords)
finder.apply_word_filter(lambda w: w in nltk.corpus.stopwords.words('english'))
bim = nltk.collocations.BigramAssocMeasures()
distances_func = [bim.raw_freq, bim.jaccard, bim.dice, bim.student_t, \
bim.chi_sq, bim.likelihood_ratio, bim.pmi]
collocations = {}
collocations_sets = {}
for d in distances_func:
collocations[d] = finder.nbest(d,N)
collocations_sets[d] = set([' '.join(c) for c in collocations[d]])
print('\n')
print(d)
for collocation in collocations[d]:
c = ' '.join(collocation)
print(c)
Explanation: We can see that the documents are, overall, fairly different from one another.
Exercise 5
Comparison of the different distance functions.
End of explanation
for d1 in distances_func:
for d2 in distances_func:
if d1 != d2:
jac = len(collocations_sets[d1].intersection(collocations_sets[d2])) / \
len(collocations_sets[d1].union(collocations_sets[d2]))
if jac > 0.8:
                print('Comparable distance measures')
print(jac,'\n'+str(d1),'\n'+str(d2))
print('\n')
print('\n')
print('\n')
for d1 in distances_func:
for d2 in distances_func:
if d1 != d2:
jac = len(collocations_sets[d1].intersection(collocations_sets[d2])) / \
len(collocations_sets[d1].union(collocations_sets[d2]))
if jac < 0.2:
                print('Distance measures with very different results')
print(jac,'\n'+str(d1),'\n'+str(d2))
print('\n')
import json
import nltk
path = 'ressources_googleplus/107033731246200681024.json'
data = json.loads(open(path).read())
# Number of collocations to find
N = 25
all_tokens = [token for activity in data for token in \
activity['object']['content'].lower().split()]
finder = nltk.TrigramCollocationFinder.from_words(all_tokens)
finder.apply_freq_filter(2)
# filter out overly frequent words (English stopwords)
finder.apply_word_filter(lambda w: w in nltk.corpus.stopwords.words('english'))
trigram_measures = nltk.collocations.TrigramAssocMeasures()
collocations = finder.nbest(trigram_measures.jaccard, N)
for collocation in collocations:
c = ' '.join(collocation)
print(c)
Explanation: To compare the sets pairwise, we can again compute a Jaccard distance... on the sets of collocations.
End of explanation |
13,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-lr', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MPI-M
Source ID: MPI-ESM-1-2-LR
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
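As a purely hypothetical illustration of how a 1.N enumeration like this one gets filled in, the pattern suggested by the "PROPERTY VALUE(S)" comment is one DOC.set_value call per selected choice. The values below are placeholders copied from the valid-choices list, not a statement about what this model actually covers, so they are left commented out:
# Hypothetical example only -- uncomment and replace with the real scheme scope.
# DOC.set_value("troposhere")
# DOC.set_value("stratosphere")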
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
13,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SiPANN
We can also leverage the sipann compact model library.
SIPANN provides a linear regression fit from mode solver simulations to compute the S-parameters.
Straight
Step1: Coupler ring
Step2: Coupler
Model for evanescent coupler
Step3: Reproducing numbers from thesis page 88 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import gdsfactory as gf
import gdsfactory.simulation.sipann as gs
def pltAttr(x, y, title=None, legend="upper right", save=None):
if legend is not None:
plt.legend(loc=legend)
plt.xlabel(x)
plt.ylabel(y)
if title is not None:
plt.title(title)
if save is not None:
plt.savefig(save)
s = gs.straight(width=0.5)
hr = gs.straight(wg_width=0.45, length_x=20.0)
width = np.linspace(300, 500, 100)
wavelength = 1550
hr.update(width=width)
t = hr.predict(wavelength)
title = "Straight $\lambda=1550$ 20um long"
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(width, np.abs(t) ** 2, label="t")
pltAttr("width (nm)", "|S|", title)
plt.subplot(122)
plt.plot(width, -np.unwrap(np.angle(t)), label="t")
pltAttr("width (nm)", "Phase (rad)", title)
Explanation: SiPANN
We can also leverage the sipann compact model library.
SIPANN provides a linear regression fit from mode solver simulations to compute the S-parameters.
Straight
End of explanation
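When reading these magnitude-squared sweeps it can be handier to express the transmission in dB; a small editor's sketch converting the result of the sweep above (it reuses the t, width and title variables from the previous cell):
# Convert |t|^2 to dB; values close to 0 dB mean the straight waveguide is
# essentially lossless in this compact model.
transmission_db = 10 * np.log10(np.abs(t) ** 2)
plt.figure(figsize=(7, 5))
plt.plot(width, transmission_db)
pltAttr("width (nm)", "Transmission (dB)", title, legend=None)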
# Lets look at the layout of a coupler_ring
gf.components.coupler_ring()
hr = gs.coupler_ring()
r = np.linspace(5000, 50000, 100)
wavelength = 1550
hr.update(radius=r)
k = hr.predict((1, 4), wavelength)
t = hr.predict((1, 3), wavelength)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(r / 1e3, np.abs(k) ** 2, label="k")
plt.plot(r / 1e3, np.abs(t) ** 2, label="t")
pltAttr("Radius (um)", "Magnitude Squared", "HalfRing $\lambda=1550$")
plt.subplot(122)
plt.plot(r / 1e3, np.unwrap(np.angle(k)), label="k")
plt.plot(r / 1e3, -np.unwrap(np.angle(t)), label="t")
pltAttr("Radius (um)", "Phase (rad)", "HalfRing $\lambda=1550$")
hr = gs.coupler_ring(width=0.45, length_x=20.0)
gap = np.linspace(200, 500, 100)
wavelength = 1550
hr.update(gap=gap)
k = hr.predict((1, 4), wavelength)
t = hr.predict((1, 3), wavelength)
title = "Half ring coupler $\lambda=1550$ length=20um 450nm waveguides"
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(gap, np.abs(k) ** 2 * 100, label="k")
plt.plot(gap, np.abs(t) ** 2 * 100, label="t")
pltAttr("gap (nm)", "Coupling (%)", title)
plt.subplot(122)
plt.plot(gap, np.unwrap(np.angle(k)), label="k")
plt.plot(gap, -np.unwrap(np.angle(t)), label="t")
pltAttr("gap (nm)", "Phase (rad)", title)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(gap, np.abs(k) ** 2 * 100, label="k")
pltAttr("gap (nm)", "Coupling (%)", "HalfRing $\lambda=1550$ 20um straight")
Explanation: Coupler ring
End of explanation
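A useful sanity check on the k/t pair returned by these models is that, for a lossless coupler, the two magnitudes squared should add up to roughly one; a small editor's sketch applied to the gap sweep computed above:
# |k|^2 + |t|^2 should stay close to 1 for a lossless coupler; any deficit is
# loss captured (or noise introduced) by the regression fit.
total_power = np.abs(k) ** 2 + np.abs(t) ** 2
print(total_power.min(), total_power.max())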
gap = 0.236
length = 20.0
width = 0.5
dx = 5.0
dy = 5.0
coupler_layout = gf.components.coupler(
gap=gap, length=length, width=width, dx=dx, dy=dy
)
coupler_layout.plot()
# lets see the default parameters for the circuit model
gs.coupler?
# lets see the different parameters for the layout
gf.components.coupler?
c = gs.coupler(gap=gap, length=length, width=width, dx=dx, dy=dy)
wavelength = np.linspace(1500, 1600, 500)
k = c.predict((1, 4), wavelength)
t = c.predict((1, 3), wavelength)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(wavelength, np.abs(k) ** 2, label="k")
plt.plot(wavelength, np.abs(t) ** 2, label="t")
plt.xlabel("Wavelength (nm)")
plt.ylabel("Magnitude Squared")
plt.title("Crossover at $\lambda \approx 1550nm$")
plt.legend()
hr = gs.coupler()
length = np.linspace(1, 70, 100) * 1e3
wavelength = 1550
hr.update(length=length)
k = hr.predict((1, 4), wavelength)
t = hr.predict((1, 3), wavelength)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(length / 1e3, np.abs(k) ** 2, label="k")
plt.plot(length / 1e3, np.abs(t) ** 2, label="t")
plt.xlabel("length (um)")
plt.ylabel("Magnitude Squared")
plt.title("Crossover at $\lambda \approx 1550nm$")
plt.legend()
plt.subplot(122)
plt.plot(length / 1e3, np.unwrap(np.angle(k)), label="k")
plt.plot(length / 1e3, -np.unwrap(np.angle(t)), label="t")
plt.xlabel("length (um)")
plt.ylabel("Magnitude Squared")
plt.title("Crossover at $\lambda \approx 1550nm$")
plt.legend()
Explanation: Coupler
Model for evanescent coupler
End of explanation
hr = gs.coupler(length=10, gap=0.25, width=0.450)
length = np.linspace(1, 45, 100) * 1e3
wavelength = 1550
hr.update(length=length)
k = hr.predict((1, 4), wavelength)
t = hr.predict((1, 3), wavelength)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(length / 1e3, np.abs(k) ** 2, label="k")
plt.plot(length / 1e3, np.abs(t) ** 2, label="t")
plt.xlabel("Wavelength (nm)")
plt.ylabel("Magnitude Squared")
plt.title("Crossover at $\lambda \approx 1550nm$")
plt.legend()
plt.subplot(122)
plt.plot(length / 1e3, np.unwrap(np.angle(k)), label="k")
plt.plot(length / 1e3, -np.unwrap(np.angle(t)), label="t")
plt.xlabel("length (um)")
plt.ylabel("Magnitude Squared")
plt.title("Crossover at $\lambda \approx 1550nm$")
plt.legend()
hr = gs.coupler(length=10, gap=0.13, width=0.5)
length = np.linspace(1, 45, 100) * 1e3
wavelength = 1550
hr.update(length=length)
k = hr.predict((1, 4), wavelength)
t = hr.predict((1, 3), wavelength)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(length / 1e3, np.abs(k) ** 2, label="k")
plt.plot(length / 1e3, np.abs(t) ** 2, label="t")
plt.xlabel("Wavelength (nm)")
plt.ylabel("Magnitude Squared")
plt.title("Crossover at $\lambda \approx 1550nm$")
plt.legend()
plt.subplot(122)
plt.plot(length / 1e3, np.unwrap(np.angle(k)), label="k")
plt.plot(length / 1e3, -np.unwrap(np.angle(t)), label="t")
plt.xlabel("length (um)")
plt.ylabel("Magnitude Squared")
plt.title("Crossover at $\lambda \approx 1550nm$")
plt.legend()
c50 = gs.coupler(length=18, gap=0.25, width=0.45)
wavelength = np.linspace(1500, 1600, 500)
k = c50.predict((1, 4), wavelength)
t = c50.predict((1, 3), wavelength)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(wavelength, np.abs(k) ** 2, label="k")
plt.plot(wavelength, np.abs(t) ** 2, label="t")
pltAttr("Wavelength (nm)", "Magnitude Squared", "Crossover at $\lambda \approx 1550nm$")
import numpy as np
import matplotlib.pyplot as plt
hr = gs.coupler_ring(length_x=2e3, width=0.45)
gap = np.linspace(0.5, 3, 40) * 1e3
wavelength = 1550
hr.update(gap=gap)
k = hr.predict((1, 4), wavelength)
t = hr.predict((1, 3), wavelength)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(gap / 1e3, np.abs(k) ** 2, label="k")
plt.plot(gap / 1e3, np.abs(t) ** 2, label="t")
plt.xlabel("coupler gap (nm)")
plt.ylabel("Magnitude Squared")
plt.title("2 mm coupling $\lambda=1550$")
c = gs.coupler_ring(length_x=20, wg_width=0.45, gap=0.45)
wavelength = np.linspace(1500, 1600, 500)
k = c.predict((1, 4), wavelength)
t = c.predict((1, 3), wavelength)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(wavelength, np.abs(k) ** 2, label="k")
plt.plot(wavelength, np.abs(t) ** 2, label="t")
plt.ylabel("Magnitude Squared")
plt.plot(wavelength, np.abs(k) ** 2 * 100, label="k")
plt.ylabel("Coupling (%)")
plt.xlabel("wavelength (nm)")
plt.title("20um long 450nm wide 450nm gap straight waveguides")
Explanation: Reproducing numbers from thesis page 88
End of explanation |
13,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
9. Audience Upload to GMP
GMP and Google Ads Connector is used to upload audience data to GMP (e.g. Google Analytics, Campaign Manager) or Google Ads in an automatic and reliable way.
Following sections provide high level guidelines on deploying and configuring GMP and Google Ads Connector. For detailed instructions on how to set up different GMP endpoints, refer to solution's README.md.
Requirements
This notebook requires BigQuery table containing scored audience list. Refer to 7.batch_scoring.ipynb for details on how to get scored audience.
Import required modules
Step1: Deploy GMP and Google Ads Connector
First clone the source code by executing below cell
Step2: Next, exectute following two steps to deploy GMP and Google Ads Connector on your GCP project.
Copy following content
Step3: When the deployment is done, you can verify three Cloud Functions deployments via the Cloud Console UI. If deployment is succeeded, move to next section to upload audience data to Google Analytics via JSONL file.
Configure audience upload endpoint
Different audience upload endpoint APIs have different configurations. Following demonstrates how endpoint for Google Analytics can be configured via Measurement Protocol. Refer to 3.3. Configurations of APIs for detailed configuration options for other endpoints.
Update following GA values according to your needs in the following cell. Refer to Working with the Measurement Protocol for details on field names and correct values.
json
{
"t"
Step4: Create audience list JSON files
GMP and Google Ads Connector's Google Analytics Measurement Protocol pipeline requires JSONL text format. Following cells help to export BigQuery table containing audience list as JSONL file to Google Cloud Storage Bucket. NOTE | Python Code:
# Add custom utils module to Python environment
import os
import sys
sys.path.append(os.path.abspath(os.pardir))
from IPython import display
from utils import helpers
Explanation: 9. Audience Upload to GMP
GMP and Google Ads Connector is used to upload audience data to GMP (e.g. Google Analytics, Campaign Manager) or Google Ads in an automatic and reliable way.
The following sections provide high-level guidelines on deploying and configuring GMP and Google Ads Connector. For detailed instructions on how to set up the different GMP endpoints, refer to the solution's README.md.
Requirements
This notebook requires a BigQuery table containing the scored audience list. Refer to 7.batch_scoring.ipynb for details on how to get the scored audience.
Import required modules
End of explanation
!git clone https://github.com/GoogleCloudPlatform/cloud-for-marketing.git
Explanation: Deploy GMP and Google Ads Connector
First clone the source code by executing the cell below:
End of explanation
display.HTML('<a href="" data-commandlinker-command="terminal:create-new">▶Access Terminal◀︎</a>')
Explanation: Next, execute the following two steps to deploy GMP and Google Ads Connector on your GCP project.
Copy the following content:
bash
cd cloud-for-marketing/marketing-analytics/activation/gmp-googleads-connector && ./deploy.sh default_install
Execute the following cell to start a new Terminal session and paste the copied content into the Terminal. NOTE: This notebook uses the Google Analytics Measurement Protocol API to demonstrate audience upload, thus choose 0 on Step 5: Confirm the integration with external APIs... during the installation process on the Terminal session.
It takes about 3 minutes to set up the audience uploader pipeline.
End of explanation
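If you prefer to check from the notebook rather than the Cloud Console UI, listing the deployed functions is usually enough. This is only a sketch; it assumes the gcloud CLI in this environment is authenticated against the same project, and the function names depend on the installer defaults:

```python
# A quick check that the connector's Cloud Functions were deployed;
# the installer typically creates three functions for the upload pipeline.
!gcloud functions list
```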
%%writefile cloud-for-marketing/marketing-analytics/activation/gmp-googleads-connector/config_api.json
{
"MP": {
"default": {
"mpConfig": {
"v": "1",
"t": "event",
"ec": "video",
"ea": "play",
"ni": "1",
"tid": "UA-XXXXXXXXX-Y"
}
}
}
}
Explanation: When the deployment is done, you can verify three Cloud Functions deployments via the Cloud Console UI. If deployment is succeeded, move to next section to upload audience data to Google Analytics via JSONL file.
Configure audience upload endpoint
Different audience upload endpoint APIs have different configurations. The following demonstrates how the endpoint for Google Analytics can be configured via Measurement Protocol. Refer to 3.3. Configurations of APIs for detailed configuration options for other endpoints.
Update the following GA values according to your needs in the cell below. Refer to Working with the Measurement Protocol for details on field names and correct values.
json
{
"t": "event",
"ec": "video",
"ea": "play",
"ni": "1",
"tid": "UA-112752759-1"
}
End of explanation
configs = helpers.get_configs('config.yaml')
dest_configs = configs.destination
# GCP project ID
PROJECT_ID = dest_configs.project_id
# Name of BigQuery dataset
DATASET_NAME = dest_configs.dataset_name
# Google Cloud Storage Bucket name to store audience upload JSON files
# NOTE: The name should be same as indicated while deploying
# "GMP and Google Ads Connector" on the Terminal
GCS_BUCKET = 'bucket'
# This Cloud Storage folder is monitored by the "GMP and Google Ads Connector"
# to send over to endpoint (eg: Google Analytics).
GCS_FOLDER = 'outbound'
# File name to export BigQuery Table to Cloud Storage
JSONL_FILENAME = 'myproject_API[MP]_config[default].jsonl'
# BigQuery table containing scored audience data
AUDIENCE_SCORE_TABLE_NAME = 'table'
%%bash -s $PROJECT_ID $DATASET_NAME $AUDIENCE_SCORE_TABLE_NAME $GCS_BUCKET $GCS_FOLDER $JSONL_FILENAME
bq extract \
--destination_format NEWLINE_DELIMITED_JSON \
$1:$2.$3 \
gs://$4/$5/$6
Explanation: Create audience list JSON files
GMP and Google Ads Connector's Google Analytics Measurement Protocol pipeline requires the JSONL text format. The following cells export the BigQuery table containing the audience list as a JSONL file to a Google Cloud Storage bucket. NOTE: This solution has a specific file naming requirement to work properly. Refer to 3.4. Name convention of data files for more details.
As soon as the file is uploaded, GMP and Google Ads Connector processes it and sends it via Measurement Protocol to the Google Analytics property configured above ("tid": "UA-XXXXXXXXX-Y").
End of explanation |
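Before moving on, you can confirm that the extract actually landed in the folder the connector monitors. A minimal check, using the bucket and folder variables defined above:

```python
# Verify the exported JSONL file is in the monitored Cloud Storage folder
!gsutil ls gs://{GCS_BUCKET}/{GCS_FOLDER}/
```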
13,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciPy 2016 Scikit-learn Tutorial
Out-of-core Learning - Large Scale Text Classification for Sentiment Analysis
Scalability Issues
The sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes suffer from a number of scalability issues that all stem from the internal usage of the vocabulary_ attribute (a Python dictionary) used to map the unicode string feature names to the integer feature indices.
The main scalability issues are
Step1: The vocabulary is used at transform time to build the occurrence matrix
Step2: Let's refit with a slightly larger corpus
Step3: The vocabulary_ is growing (logarithmically) with the size of the training corpus. Note that we could not have built the vocabularies in parallel on the 2 text documents as they share some words, hence this would require some kind of shared data structure or synchronization barrier, which is complicated to set up, especially if we want to distribute the processing on a cluster.
With this new vocabulary, the dimensionality of the output space is now larger
Step4: The IMDb movie dataset
To illustrate the scalability issues of the vocabulary-based vectorizers, let's load a more realistic dataset for a classical text classification task
Step5: Now, let's load them into our active session via scikit-learn's load_files function
Step6: Note
Since the movie dataset consists of 50,000 individual text files, executing the code snippet above may take ~20 sec or longer.
The load_files function loaded the datasets into sklearn.datasets.base.Bunch objects, which are Python dictionaries
Step7: In particular, we are only interested in the data and target arrays.
Step8: As we can see above the 'target' array consists of integers 0 and 1, where 0 stands for negative and 1 stands for positive.
The Hashing Trick
Remember the bag of word representation using a vocabulary based vectorizer
Step9: This mapping is completely stateless and the dimensionality of the output space is explicitly fixed in advance (here we use a modulo 2 ** 20 which means roughly 1M dimensions). This makes it possible to work around the limitations of the vocabulary based vectorizer both for parallelizability and online / out-of-core learning.
The HashingVectorizer class is an alternative to the CountVectorizer (or TfidfVectorizer class with use_idf=False) that internally uses the murmurhash hash function
Step10: It shares the same "preprocessor", "tokenizer" and "analyzer" infrastructure
Step11: We can vectorize our datasets into a scipy sparse matrix exactly as we would have done with the CountVectorizer or TfidfVectorizer, except that we can directly call the transform method
Step12: The dimension of the output is fixed ahead of time to n_features=2 ** 20 by default (nearly 1M features) to minimize the rate of collision on most classification problems while having reasonably sized linear models (1M weights in the coef_ attribute)
Step13: Now, let's compare the computational efficiency of the HashingVectorizer to the CountVectorizer
Step14: As we can see, the HashingVectorizer is much faster than the CountVectorizer in this case.
Finally, let us train a LogisticRegression classifier on the IMDb training subset
Step15: Out-of-Core learning
Out-of-Core learning is the task of training a machine learning model on a dataset that does not fit into memory or RAM. This requires the following conditions
Step16: Next, let us create the target label array
Step17: Now, we implement the batch_train function as follows
Step18: Note that we are not using LogisticRegression as in the previous section, but we will use an SGDClassifier with a logistic cost function instead. SGD stands for stochastic gradient descent, an optimization algorithm that optimizes the weight coefficients iteratively sample by sample, which allows us to feed the data to the classifier chunk by chunk.
And we train the SGDClassifier; using the default settings of the batch_train function, it will train the classifier on 25*1000=25000 documents. (Depending on your machine, this may take >2 min)
Step19: Eventually, let us evaluate its performance
Step20: Limitations of the Hashing Vectorizer
Using the Hashing Vectorizer makes it possible to implement streaming and parallel text classification but can also introduce some issues | Python Code:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
])
vectorizer.vocabulary_
Explanation: SciPy 2016 Scikit-learn Tutorial
Out-of-core Learning - Large Scale Text Classification for Sentiment Analysis
Scalability Issues
The sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes suffer from a number of scalability issues that all stem from the internal usage of the vocabulary_ attribute (a Python dictionary) used to map the unicode string feature names to the integer feature indices.
The main scalability issues are:
Memory usage of the text vectorizer: all the string representations of the features are loaded in memory
Parallelization problems for text feature extraction: the vocabulary_ would be a shared state: complex synchronization and overhead
Impossibility to do online or out-of-core / streaming learning: the vocabulary_ needs to be learned from the data: its size cannot be known before making one pass over the full dataset
To better understand the issue, let's have a look at how the vocabulary_ attribute works. At fit time the tokens of the corpus are uniquely identified by an integer index and this mapping is stored in the vocabulary:
End of explanation
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
Explanation: The vocabulary is used at transform time to build the occurrence matrix:
End of explanation
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
"The quick brown fox jumps over the lazy dog.",
])
vectorizer.vocabulary_
Explanation: Let's refit with a slightly larger corpus:
End of explanation
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
Explanation: The vocabulary_ is growing (logarithmically) with the size of the training corpus. Note that we could not have built the vocabularies in parallel on the 2 text documents as they share some words, hence this would require some kind of shared data structure or synchronization barrier, which is complicated to set up, especially if we want to distribute the processing on a cluster.
With this new vocabulary, the dimensionality of the output space is now larger:
End of explanation
import os
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
test_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'test')
Explanation: The IMDb movie dataset
To illustrate the scalability issues of the vocabulary-based vectorizers, let's load a more realistic dataset for a classical text classification task: sentiment analysis on text documents. The goal is to tell apart negative from positive movie reviews from the Internet Movie Database (IMDb).
In the following sections, we will work with a large subset of movie reviews from the IMDb that has been collected by Maas et al.
A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning Word Vectors for Sentiment Analysis. In the proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
This dataset contains 50,000 movie reviews, which were split into 25,000 training samples and 25,000 test samples. The reviews are labeled as either negative (neg) or positive (pos). Moreover, positive means that a movie received >6 stars on IMDb; negative means that a movie received <5 stars, respectively.
Assuming that the ../fetch_data.py script was run successfully the following files should be available:
End of explanation
from sklearn.datasets import load_files
train = load_files(container_path=(train_path),
categories=['pos', 'neg'])
test = load_files(container_path=(test_path),
categories=['pos', 'neg'])
Explanation: Now, let's load them into our active session via scikit-learn's load_files function
End of explanation
train.keys()
Explanation: Note
Since the movie dataset consists of 50,000 individual text files, executing the code snippet above may take ~20 sec or longer.
The load_files function loaded the datasets into sklearn.datasets.base.Bunch objects, which are Python dictionaries:
End of explanation
import numpy as np
for label, data in zip(('TRAINING', 'TEST'), (train, test)):
print('\n\n%s' % label)
print('Number of documents:', len(data['data']))
print('\n1st document:\n', data['data'][0])
print('\n1st label:', data['target'][0])
print('\nClass names:', data['target_names'])
print('Class count:',
np.unique(data['target']), ' -> ',
np.bincount(data['target']))
Explanation: In particular, we are only interested in the data and target arrays.
End of explanation
from sklearn.utils.murmurhash import murmurhash3_bytes_u32
# encode for python 3 compatibility
for word in "the cat sat on the mat".encode("utf-8").split():
print("{0} => {1}".format(
word, murmurhash3_bytes_u32(word, 0) % 2 ** 20))
Explanation: As we can see above the 'target' array consists of integers 0 and 1, where 0 stands for negative and 1 stands for positive.
The Hashing Trick
Remember the bag of word representation using a vocabulary based vectorizer:
<img src="figures/bag_of_words.svg" width="100%">
To work around the limitations of the vocabulary-based vectorizers, one can use the hashing trick. Instead of building and storing an explicit mapping from the feature names to the feature indices in a Python dict, we can just use a hash function and a modulus operation:
<img src="figures/hashing_vectorizer.svg" width="100%">
More info and reference for the original papers on the Hashing Trick in the following site as well as a description specific to language here.
End of explanation
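To make the mapping concrete, here is a small illustrative sketch that builds a fixed-size count vector with a hash-and-modulo trick. It deliberately uses Python's built-in hash and a tiny output space so collisions are easy to provoke; scikit-learn uses murmurhash instead, partly because Python randomizes hash() between processes:

```python
import numpy as np

def hashed_counts(text, n_features=2 ** 10):
    # map each token to a column index via hash() modulo the output size
    counts = np.zeros(n_features)
    for token in text.lower().split():
        counts[hash(token) % n_features] += 1
    return counts

vec = hashed_counts("the cat sat on the mat")
print(vec.sum(), np.flatnonzero(vec))
# 6 tokens land in at most 5 distinct columns ("the" appears twice);
# the exact indices change between runs because hash() is randomized
```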
from sklearn.feature_extraction.text import HashingVectorizer
h_vectorizer = HashingVectorizer(encoding='latin-1')
h_vectorizer
Explanation: This mapping is completely stateless and the dimensionality of the output space is explicitly fixed in advance (here we use a modulo 2 ** 20 which means roughly 1M dimensions). This makes it possible to work around the limitations of the vocabulary based vectorizer both for parallelizability and online / out-of-core learning.
The HashingVectorizer class is an alternative to the CountVectorizer (or TfidfVectorizer class with use_idf=False) that internally uses the murmurhash hash function:
End of explanation
analyzer = h_vectorizer.build_analyzer()
analyzer('This is a test sentence.')
Explanation: It shares the same "preprocessor", "tokenizer" and "analyzer" infrastructure:
End of explanation
docs_train, y_train = train['data'], train['target']
docs_valid, y_valid = test['data'][:12500], test['target'][:12500]
docs_test, y_test = test['data'][12500:], test['target'][12500:]
Explanation: We can vectorize our datasets into a scipy sparse matrix exactly as we would have done with the CountVectorizer or TfidfVectorizer, except that we can directly call the transform method: there is no need to fit as HashingVectorizer is a stateless transformer:
End of explanation
h_vectorizer.transform(docs_train)
Explanation: The dimension of the output is fixed ahead of time to n_features=2 ** 20 by default (nearly 1M features) to minimize the rate of collision on most classification problems while having reasonably sized linear models (1M weights in the coef_ attribute):
End of explanation
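If memory is a concern, the size of the hash space is the knob to turn: a smaller n_features means fewer coefficients to store but a higher collision rate. A quick sketch:

```python
# A smaller hash space trades memory for more collisions
small_vec = HashingVectorizer(encoding='latin-1', n_features=2 ** 18)
print(small_vec.transform(docs_train).shape)  # (25000, 262144)
```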
h_vec = HashingVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 h_vec.fit(docs_train, y_train)
count_vec = CountVectorizer(encoding='latin-1')
%timeit -n 1 -r 3 count_vec.fit(docs_train, y_train)
Explanation: Now, let's compare the computational efficiency of the HashingVectorizer to the CountVectorizer:
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
h_pipeline = Pipeline((
('vec', HashingVectorizer(encoding='latin-1')),
('clf', LogisticRegression(random_state=1)),
))
h_pipeline.fit(docs_train, y_train)
print('Train accuracy', h_pipeline.score(docs_train, y_train))
print('Validation accuracy', h_pipeline.score(docs_valid, y_valid))
import gc
del count_vec
del h_pipeline
gc.collect()
Explanation: As we can see, the HashingVectorizer is much faster than the CountVectorizer in this case.
Finally, let us train a LogisticRegression classifier on the IMDb training subset:
End of explanation
train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')
train_pos = os.path.join(train_path, 'pos')
train_neg = os.path.join(train_path, 'neg')
fnames = [os.path.join(train_pos, f) for f in os.listdir(train_pos)] +\
[os.path.join(train_neg, f) for f in os.listdir(train_neg)]
fnames[:3]
Explanation: Out-of-Core learning
Out-of-Core learning is the task of training a machine learning model on a dataset that does not fit into memory or RAM. This requires the following conditions:
a feature extraction layer with fixed output dimensionality
knowing the list of all classes in advance (in this case we only have positive and negative tweets)
a machine learning algorithm that supports incremental learning (the partial_fit method in scikit-learn).
In the following sections, we will set up a simple batch-training function to train an SGDClassifier iteratively.
But first, let us load the file names into a Python list:
End of explanation
y_train = np.zeros((len(fnames), ), dtype=int)
y_train[:12500] = 1
np.bincount(y_train)
Explanation: Next, let us create the target label array:
End of explanation
from sklearn.base import clone
def batch_train(clf, fnames, labels, iterations=25, batchsize=1000, random_seed=1):
vec = HashingVectorizer(encoding='latin-1')
idx = np.arange(labels.shape[0])
c_clf = clone(clf)
rng = np.random.RandomState(seed=random_seed)
for i in range(iterations):
rnd_idx = rng.choice(idx, size=batchsize)
documents = []
for i in rnd_idx:
with open(fnames[i], 'r') as f:
documents.append(f.read())
X_batch = vec.transform(documents)
batch_labels = labels[rnd_idx]
c_clf.partial_fit(X=X_batch,
y=batch_labels,
classes=[0, 1])
return c_clf
Explanation: Now, we implement the batch_train function as follows:
End of explanation
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(loss='log', random_state=1)
sgd = batch_train(clf=sgd,
fnames=fnames,
labels=y_train)
Explanation: Note that we are not using LogisticRegression as in the previous section, but we will use an SGDClassifier with a logistic cost function instead. SGD stands for stochastic gradient descent, an optimization algorithm that optimizes the weight coefficients iteratively sample by sample, which allows us to feed the data to the classifier chunk by chunk.
And we train the SGDClassifier; using the default settings of the batch_train function, it will train the classifier on 25*1000=25000 documents. (Depending on your machine, this may take >2 min)
End of explanation
vec = HashingVectorizer(encoding='latin-1')
sgd.score(vec.transform(docs_test), y_test)
Explanation: Eventually, let us evaluate its performance:
End of explanation
# %load solutions/27_B-batchtrain.py
Explanation: Limitations of the Hashing Vectorizer
Using the Hashing Vectorizer makes it possible to implement streaming and parallel text classification but can also introduce some issues:
The collisions can introduce too much noise in the data and degrade prediction quality,
The HashingVectorizer does not provide "Inverse Document Frequency" reweighting (lack of a use_idf=True option).
There is no easy way to inverse the mapping and find the feature names from the feature index.
The collision issues can be controlled by increasing the n_features parameter.
The IDF weighting might be reintroduced by appending a TfidfTransformer instance on the output of the vectorizer. However, computing the idf_ statistic used for the feature reweighting requires at least one additional pass over the training set before the classifier can start training: this breaks the online learning scheme.
The lack of inverse mapping (the get_feature_names() method of TfidfVectorizer) is even harder to work around. That would require extending the HashingVectorizer class to add a "trace" mode to record the mapping of the most important features to provide statistical debugging information.
In the meantime, to debug feature extraction issues, it is recommended to use TfidfVectorizer(use_idf=False) on a small-ish subset of the dataset to simulate a HashingVectorizer() instance that has the get_feature_names() method and no collision issues.
Exercise
In our implementation of the batch_train function above, we randomly draw k training samples as a batch in each iteration, which can be considered as a random subsampling with replacement. Can you modify the batch_train function so that it iterates over the documents without replacement, i.e., that it uses each document exactly once per iteration?
End of explanation |
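One possible sketch for the exercise (not necessarily the shipped solution file): shuffle the file indices once per pass and walk through them in consecutive slices, so each document is used exactly once per pass. Note that every iteration is now a full pass over all 25,000 files, so the default number of iterations is reduced here:

```python
import numpy as np
from sklearn.base import clone
from sklearn.feature_extraction.text import HashingVectorizer

def batch_train_no_replacement(clf, fnames, labels, iterations=1,
                               batchsize=1000, random_seed=1):
    vec = HashingVectorizer(encoding='latin-1')
    c_clf = clone(clf)
    rng = np.random.RandomState(seed=random_seed)
    fnames = np.asarray(fnames)
    for _ in range(iterations):
        # one fresh permutation per pass; consecutive slices never repeat a file
        shuffled_idx = rng.permutation(labels.shape[0])
        for start in range(0, labels.shape[0], batchsize):
            batch_idx = shuffled_idx[start:start + batchsize]
            documents = []
            for fn in fnames[batch_idx]:
                with open(fn, 'r') as f:
                    documents.append(f.read())
            c_clf.partial_fit(X=vec.transform(documents),
                              y=labels[batch_idx],
                              classes=[0, 1])
    return c_clf
```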
13,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous Training with AutoML Vertex Pipelines with Batch Predictions
Learning Objectives
Step1: BigQuery Data
If you have not gone through the KFP Walkthrough lab, you will need to run the following cell to create a BigQuery dataset and table containing the data required for this lab.
NOTE If you already have the covertype data in a bigquery table at <PROJECT_ID>.covertype_dataset.covertype you may skip to Understanding the pipeline design.
Step3: Understanding the pipeline design
The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline_vertex_automl_batch_preds.py file that we will generate below.
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
Building and deploying the pipeline
Let us write the pipeline to disk
Step4: Understanding the ModelBatchPredictOp
When working with an AutoML Tabular model, the ModelBatchPredictOp can take the following inputs
Step5: Compile the pipeline
Let's start by defining the environment variables that will be passed to the pipeline compiler
Step6: Let us make sure that the ARTIFACT_STORE has been created, and let us create it if not
Step7: Use the CLI compiler to compile the pipeline
We compile the pipeline from the Python file we generated into a JSON description using the following command
Step8: Note
Step9: Deploy the pipeline package | Python Code:
import os
from google.cloud import aiplatform
REGION = "us-central1"
PROJECT = !(gcloud config get-value project)
PROJECT = PROJECT[0]
os.environ["PROJECT"] = PROJECT
# Set `PATH` to include the directory containing KFP CLI
PATH = %env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
Explanation: Continuous Training with AutoML Vertex Pipelines with Batch Predictions
Learning Objectives:
1. Learn how to use Vertex AutoML pre-built components
1. Learn how to build a Vertex AutoML pipeline with these components using BigQuery as a data source
1. Learn how to compile, upload, and run the Vertex AutoML pipeline
1. Serve batch predictions with BigQuery source from the AutoML pipeline
In this lab, you will build, deploy, and run a Vertex AutoML pipeline that orchestrates the Vertex AutoML AI services to train, tune, and serve batch predictions to BigQuery with a model.
Setup
End of explanation
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT mk --dataset $DATASET_ID
bq --project_id=$PROJECT --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
Explanation: BigQuery Data
If you have not gone through the KFP Walkthrough lab, you will need to run the following cell to create a BigQuery dataset and table containing the data required for this lab.
NOTE If you already have the covertype data in a bigquery table at <PROJECT_ID>.covertype_dataset.covertype you may skip to Understanding the pipeline design.
End of explanation
%%writefile ./pipeline_vertex/pipeline_vertex_automl_batch_preds.py
Kubeflow Covertype Pipeline.
import os
from google_cloud_pipeline_components.aiplatform import (
AutoMLTabularTrainingJobRunOp,
TabularDatasetCreateOp,
ModelBatchPredictOp
)
from kfp.v2 import dsl
PIPELINE_ROOT = os.getenv("PIPELINE_ROOT")
PROJECT = os.getenv("PROJECT")
DATASET_SOURCE = os.getenv("DATASET_SOURCE")
PIPELINE_NAME = os.getenv("PIPELINE_NAME", "covertype")
DISPLAY_NAME = os.getenv("MODEL_DISPLAY_NAME", PIPELINE_NAME)
TARGET_COLUMN = os.getenv("TARGET_COLUMN", "Cover_Type")
BATCH_PREDS_SOURCE_URI = os.getenv("BATCH_PREDS_SOURCE_URI")
@dsl.pipeline(
name=f"{PIPELINE_NAME}-vertex-automl-pipeline-batch-preds",
description=f"AutoML Vertex Pipeline for {PIPELINE_NAME}",
pipeline_root=PIPELINE_ROOT,
)
def create_pipeline():
dataset_create_task = TabularDatasetCreateOp(
display_name=DISPLAY_NAME,
bq_source=DATASET_SOURCE,
project=PROJECT,
)
automl_training_task = AutoMLTabularTrainingJobRunOp(
project=PROJECT,
display_name=DISPLAY_NAME,
optimization_prediction_type="classification",
dataset=dataset_create_task.outputs["dataset"],
target_column=TARGET_COLUMN,
)
batch_predict_op = ModelBatchPredictOp(
project=PROJECT,
job_display_name="batch_predict_job",
model=automl_training_task.outputs["model"],
bigquery_source_input_uri=BATCH_PREDS_SOURCE_URI,
instances_format="bigquery",
predictions_format="bigquery",
bigquery_destination_output_uri=f'bq://{PROJECT}',
)
Explanation: Understanding the pipeline design
The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline_vertex_automl_batch_preds.py file that we will generate below.
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
Building and deploying the pipeline
Let us write the pipeline to disk:
End of explanation
%%bigquery
CREATE OR REPLACE TABLE covertype_dataset.newdata AS
SELECT * EXCEPT(Cover_Type)
FROM covertype_dataset.covertype
LIMIT 10000
Explanation: Understanding the ModelBatchPredictOp
When working with an AutoML Tabular model, the ModelBatchPredictOp can take the following inputs:
* model: The model resource to serve batch predictions with
* bigquery_source_input_uri: A URI to a BigQuery table containing examples to serve batch predictions on, in the format bq://PROJECT.DATASET.TABLE
* instances_format: "bigquery" to serve batch predictions on BigQuery data.
* predictions_format: "bigquery" to store the results of the batch prediction in BigQuery.
* bigquery_destination_output_uri: In the format bq://PROJECT_ID. This is the project in which the results of the batch prediction will be stored. The ModelBatchPredictOp will create a dataset in this project.
Upon completion of the ModelBatchPredictOp you will see a new BigQuery dataset with name prediction_<model-display-name>_<job-create-time>. Inside this dataset you will see a predictions table, containing the batch prediction examples and predicted labels. If there were any errors in the batch prediction, you will also see an errors table. The errors table contains rows for which the prediction has failed.
Create BigQuery table with data for batch predictions
Before we compile and run the pipeline, let's create a BigQuery table with data we want to serve batch predictions on. To simulate "new" data we will simply query the existing table for all columns except the label and create a table called newdata. The URI to this table will be the bigquery_source_input_uri input to the ModelBatchPredictOp.
End of explanation
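A quick row count confirms that the batch-prediction input table was created as expected (10,000 rows, given the LIMIT above). This sketch assumes the BigQuery client library is available in the notebook environment:

```python
from google.cloud import bigquery

bq_client = bigquery.Client(project=PROJECT)
row_count = bq_client.query(
    "SELECT COUNT(*) AS n_rows FROM covertype_dataset.newdata"
).to_dataframe()
print(row_count)
```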
ARTIFACT_STORE = f"gs://{PROJECT}-kfp-artifact-store"
PIPELINE_ROOT = f"{ARTIFACT_STORE}/pipeline"
DATASET_SOURCE = f"bq://{PROJECT}.covertype_dataset.covertype"
BATCH_PREDS_SOURCE_URI = f"bq://{PROJECT}.covertype_dataset.newdata"
%env PIPELINE_ROOT={PIPELINE_ROOT}
%env PROJECT={PROJECT}
%env REGION={REGION}
%env DATASET_SOURCE={DATASET_SOURCE}
%env BATCH_PREDS_SOURCE_URI={BATCH_PREDS_SOURCE_URI}
Explanation: Compile the pipeline
Let's start by defining the environment variables that will be passed to the pipeline compiler:
End of explanation
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
Explanation: Let us make sure that the ARTIFACT_STORE has been created, and let us create it if not:
End of explanation
PIPELINE_JSON = "covertype_automl_vertex_pipeline_batch_preds.json"
!dsl-compile-v2 --py pipeline_vertex/pipeline_vertex_automl_batch_preds.py --output $PIPELINE_JSON
Explanation: Use the CLI compiler to compile the pipeline
We compile the pipeline from the Python file we generated into a JSON description using the following command:
End of explanation
!head {PIPELINE_JSON}
Explanation: Note: You can also use the Python SDK to compile the pipeline:
```python
from kfp.v2 import compiler
compiler.Compiler().compile(
pipeline_func=create_pipeline,
package_path=PIPELINE_JSON,
)
```
The result is the pipeline file.
End of explanation
aiplatform.init(project=PROJECT, location=REGION)
pipeline = aiplatform.PipelineJob(
display_name="automl_covertype_kfp_pipeline_batch_predictions",
template_path=PIPELINE_JSON,
enable_caching=True,
)
pipeline.run()
Explanation: Deploy the pipeline package
End of explanation |
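Once the run finishes, the batch-prediction step writes its output to a new dataset named prediction_<model-display-name>_<job-create-time>, as described above. A small sketch to locate it programmatically instead of browsing the console (assumes the run completed successfully):

```python
from google.cloud import bigquery

bq_client = bigquery.Client(project=PROJECT)
prediction_datasets = [
    d.dataset_id for d in bq_client.list_datasets()
    if d.dataset_id.startswith("prediction_")
]
print(prediction_datasets)
```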
13,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker.
Testing Weights
Dataset
To see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.
We'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.
Step1: Neural Network
<img style="float
Step2: Initialize Weights
Let's start looking at some initial weights.
All Zeros or Ones
If you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.
With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.
Let's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.
Run the cell below to see the difference between weights of all zeros against all ones.
Step3: As you can see the accuracy is close to guessing for both zeros and ones, around 10%.
The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.
A good solution for getting these random weights is to sample from a uniform distribution.
Uniform Distribution
A [uniform distribution](https
Step4: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
Now that you understand the tf.random_uniform function, let's apply it to some initial weights.
Baseline
Let's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0.
Step5: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.
General rule for setting weights
The general rule for setting the weights in a neural network is to be close to zero without being too small. A good practice is to start your weights in the range of $[-y, y]$ where
$y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron).
Let's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).
Step6: We're going in the right direction, the accuracy and loss is better with [-1, 1). We still want smaller weights. How far can we go before it's too small?
Too small
Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot.
Step7: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$.
Step8: The range we found and $y=1/\sqrt{n}$ are really close.
Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution.
Normal Distribution
Unlike the uniform distribution, the normal distribution has a higher likelihood of picking numbers close to its mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a normal distribution.
shape
Step9: Let's compare the normal distribution against the previous uniform distribution.
Step10: The normal distribution gave a slight improvement in accuracy and loss. Let's move closer to 0 and drop picked numbers that are more than x standard deviations away. This distribution is called the Truncated Normal Distribution.
Truncated Normal Distribution
tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
shape
Step11: Again, let's compare the previous results with the previous distribution.
Step12: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood that its picks are more than 2 standard deviations from the mean.
We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now. | Python Code:
%matplotlib inline
import tensorflow as tf
import helper
from tensorflow.examples.tutorials.mnist import input_data
print('Getting MNIST Dataset...')
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print('Data Extracted.')
Explanation: Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker.
Testing Weights
Dataset
To see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.
We'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.
End of explanation
# Save the shapes of weights for each layer
layer_1_weight_shape = (mnist.train.images.shape[1], 256)
layer_2_weight_shape = (256, 128)
layer_3_weight_shape = (128, mnist.train.labels.shape[1])
Explanation: Neural Network
<img style="float: left" src="images/neural_network.png"/>
For the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.
End of explanation
all_zero_weights = [
tf.Variable(tf.zeros(layer_1_weight_shape)),
tf.Variable(tf.zeros(layer_2_weight_shape)),
tf.Variable(tf.zeros(layer_3_weight_shape))
]
all_one_weights = [
tf.Variable(tf.ones(layer_1_weight_shape)),
tf.Variable(tf.ones(layer_2_weight_shape)),
tf.Variable(tf.ones(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'All Zeros vs All Ones',
[
(all_zero_weights, 'All Zeros'),
(all_one_weights, 'All Ones')])
Explanation: Initialize Weights
Let's start looking at some initial weights.
All Zeros or Ones
If you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.
With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.
Let's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.
Run the cell below to see the difference between weights of all zeros against all ones.
End of explanation
helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))
Explanation: As you can see the accuracy is close to guessing for both zeros and ones, around 10%.
The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.
A good solution for getting these random weights is to sample from a uniform distribution.
Uniform Distribution
A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has the equal probability of picking any number from a set of numbers. We'll be picking from a continous distribution, so the chance of picking the same number is low. We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution.
tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
Outputs random values from a uniform distribution.
The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
minval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.
maxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.
dtype: The type of the output: float32, float64, int32, or int64.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
We can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3.
End of explanation
# Default for tf.random_uniform is minval=0 and maxval=1
basline_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape)),
tf.Variable(tf.random_uniform(layer_2_weight_shape)),
tf.Variable(tf.random_uniform(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'Baseline',
[(basline_weights, 'tf.random_uniform [0, 1)')])
Explanation: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
Now that you understand the tf.random_uniform function, let's apply it to some initial weights.
Baseline
Let's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0.
End of explanation
uniform_neg1to1_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))
]
helper.compare_init_weights(
mnist,
'[0, 1) vs [-1, 1)',
[
(basline_weights, 'tf.random_uniform [0, 1)'),
(uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])
Explanation: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.
General rule for setting weights
The general rule for setting the weights in a neural network is to be close to zero without being too small. A good practice is to start your weights in the range of $[-y, y]$ where
$y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron).
Let's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).
End of explanation
uniform_neg01to01_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))
]
uniform_neg001to001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))
]
uniform_neg0001to0001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))
]
helper.compare_init_weights(
mnist,
'[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',
[
(uniform_neg1to1_weights, '[-1, 1)'),
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(uniform_neg001to001_weights, '[-0.01, 0.01)'),
(uniform_neg0001to0001_weights, '[-0.001, 0.001)')],
plot_n_batches=None)
Explanation: We're going in the right direction; the accuracy and loss are better with [-1, 1). We still want smaller weights. How far can we go before they're too small?
Too small
Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot.
End of explanation
import numpy as np
general_rule_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))
]
helper.compare_init_weights(
mnist,
'[-0.1, 0.1) vs General Rule',
[
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(general_rule_weights, 'General Rule')],
plot_n_batches=None)
Explanation: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$.
End of explanation
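As a quick numeric check of what the rule implies for this network, the bounds follow directly from the layer shapes saved earlier:

```python
import numpy as np

for name, shape in [('layer 1', layer_1_weight_shape),
                    ('layer 2', layer_2_weight_shape),
                    ('layer 3', layer_3_weight_shape)]:
    y = 1 / np.sqrt(shape[0])  # n is the number of inputs to the layer
    print('{}: {} inputs -> weights in [-{:.4f}, {:.4f})'.format(name, shape[0], y, y))
```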
helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([10000]))
Explanation: The range we found and $y=1/\sqrt{n}$ are really close.
Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution.
Normal Distribution
Unlike the uniform distribution, the normal distribution has a higher likelihood of picking numbers close to its mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a normal distribution.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
normal_01_weights = [
tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Uniform [-0.1, 0.1) vs Normal stddev 0.1',
[
(uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),
(normal_01_weights, 'Normal stddev 0.1')])
Explanation: Let's compare the normal distribution against the previous uniform distribution.
End of explanation
helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))
Explanation: The normal distribution gave a slight improvement in accuracy and loss. Let's move closer to 0 and drop picked numbers that are more than x standard deviations away. This distribution is called the Truncated Normal Distribution.
Truncated Normal Distribution
tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
trunc_normal_01_weights = [
tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Normal vs Truncated Normal',
[
(normal_01_weights, 'Normal'),
(trunc_normal_01_weights, 'Truncated Normal')])
Explanation: Again, let's compare the previous results with the previous distribution.
End of explanation
helper.compare_init_weights(
mnist,
'Baseline vs Truncated Normal',
[
(basline_weights, 'Baseline'),
(trunc_normal_01_weights, 'Truncated Normal')])
Explanation: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood it's choices are larger than 2 standard deviations.
We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.
End of explanation |
13,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python example using Spark SQL over Cloudant as a source
This sample notebook is written in Python and expects the Python 2.7.5 runtime. Make sure the kernel is started and that you are connected to it when executing this notebook.
The data source for this example can be found at
Step1: 1. Work with the Spark Context
A Spark Context handle sc is available with every notebook created in the Spark Service. Use it to check the Spark version used and the environment settings, and to create a Spark SQL Context object off of it.
Step2: 2. Work with a Cloudant database
A Dataframe object can be created directly from a Cloudant database. To configure the database as source, pass these options
Step3: 3. Work with a Dataframe
At this point all transformations and functions should behave as specified with Spark SQL. (http
Step4: Here we filter only those documents where the crime is a disturbance, i.e. naturecode DISTRB
Step5: Finally we write the disturbance crimes back to another Cloudant database 'crimes_filtered'
Step6: 4. Visualization
Step7: I'm converting the Apache Spark DataFrame to a Pandas DataFrame first. Matplotlib simply seems to have better support for Pandas today.
Let's also sort the DF by count first and naturecode second to produce a sorted graph. | Python Code:
# Import Python stuff
import pprint
from collections import Counter
# Import PySpark stuff
from pyspark.sql import *
from pyspark.sql.functions import udf, asc, desc
from pyspark import SparkContext, SparkConf
from pyspark.sql.types import IntegerType
Explanation: Python example using Spark SQL over Cloudant as a source
This sample notebook is written in Python and expects the Python 2.7.5 runtime. Make sure the kernel is started and that you are connected to it when executing this notebook.
The data source for this example can be found at: http://examples.cloudant.com/crimes/
Replicate the database into your own Cloudant account before you execute this script.
End of explanation
sc.version
# sc is an existing SparkContext.
sqlContext = SQLContext(sc)
Explanation: 1. Work with the Spark Context
A Spark Context handle sc is available with every notebook created in the Spark Service. Use it to check the Spark version used and the environment settings, and to create a Spark SQL Context object off of it.
End of explanation
cloudantdata = sqlContext.read.format("com.cloudant.spark").\
option("cloudant.host","examples.cloudant.com").\
option("cloudant.username","examples").\
option("cloudant.password","xxxxx").\
load("crimes")
Explanation: 2. Work with a Cloudant database
A Dataframe object can be created directly from a Cloudant database. To configure the database as source, pass these options:
1 - package name that provides the classes (like CloudantDataSource) implemented in the connector to extend BaseRelation. For the Cloudant Spark connector this will be com.cloudant.spark
2 - cloudant.host parameter to pass the Cloudant account name
3 - cloudant.username parameter to pass the Cloudant user name
4 - cloudant.password parameter to pass the Cloudant account password
End of explanation
cloudantdata.printSchema()
cloudantdata.count()
cloudantdata.select("properties.naturecode").show()
Explanation: 3. Work with a Dataframe
At this point all transformations and functions should behave as specified with Spark SQL. (http://spark.apache.org/sql/)
There are, however, a number of things the Cloudant Spark connector does not support yet, or things that are simply not working. For that reason we call this connector a BETA release and are only gradually improving it towards GA.
Please direct any change requests to [email protected]
End of explanation
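Since cloudantdata is a regular Spark DataFrame, you can also register it as a temporary table and query it with plain SQL. A sketch using the Spark 1.x-style API that matches this notebook's runtime:

```python
# Register the Cloudant-backed DataFrame for SQL queries
cloudantdata.registerTempTable("crimes")
sqlContext.sql(
    "SELECT properties.naturecode, COUNT(*) AS cnt "
    "FROM crimes GROUP BY properties.naturecode ORDER BY cnt DESC"
).show()
```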
disturbDf = cloudantdata.filter("properties.naturecode = 'DISTRB'")
disturbDf.show()
Explanation: Here we filter only those documents where the crime is a disturbance, i.e. naturecode DISTRB
End of explanation
disturbDf.select("properties").write.format("com.cloudant.spark").\
option("cloudant.host","kache.cloudant.com").\
option("cloudant.username","kache").\
option("cloudant.password","xxxxx").\
save("crimes_filtered")
Explanation: Finally we write the disturbance crimes back to another Cloudant database 'crimes_filtered'
End of explanation
reducedValue = cloudantdata.groupBy("properties.naturecode").count()
reducedValue.printSchema()
Explanation: 4. Visualization
End of explanation
import pandas as pd
pandaDf = reducedValue.orderBy(desc("count"), asc("naturecode")).toPandas()
print(pandaDf)
# This is needed to actually see the plots
%matplotlib inline
# Additional imports from matplotlib
import matplotlib.pyplot as plt
# The data
values = pandaDf['count']
labels = pandaDf['naturecode']
# The format
plt.gcf().set_size_inches(16, 12, forward=True)
plt.title('Number of crimes by type')
# Barh is a horizontal bar chart with values (x axis) and labels (y axis)
plt.barh(range(len(values)), values)
plt.yticks(range(len(values)), labels)
# Print the plot
plt.show()
Explanation: I'm converting the Apache Spark DataFrame to a Pandas DataFrame first. Matplotlib simply seems to have better support for Pandas today.
Let's also sort the DF by count first and naturecode second to produce a sorted graph.
End of explanation |
13,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy. | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
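As a quick illustrative sketch in plain NumPy (not part of the provided MNIST helper code):
import numpy as np
one_hot_4 = np.eye(10)[4]        # [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
image = np.zeros((28, 28))       # a dummy 28x28 image
flat = image.reshape(784)        # flattened to 784 pixel values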
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 300, activation='ReLU')
net = tflearn.fully_connected(net, 150, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
predictions = np.array(model.predict(testX)).argmax(axis=1)
actual = testY.argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy.
End of explanation |
13,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int[ '<EOS>']] for line in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
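For intuition, the intended mapping looks roughly like this (the ids are hypothetical, since they depend on the vocabulary):
# source line 'new jersey is sometimes quiet .'  -> [28, 17, 5, 94, 41, 3]
# target line 'new jersey est parfois calme .'   -> [12, 9, 44, 67, 23, 7, target_vocab_to_int['<EOS>']]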
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return input, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    Preprocess target data for decoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
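For intuition, on a single batch row the transformation looks like this (the ids are hypothetical):
# target_data row:           [12, 7, 43, <EOS id>]
# after tf.strided_slice:    [12, 7, 43]             (last word id dropped)
# after concat with <GO>:    [<GO id>, 12, 7, 43]    (decoder input)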
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers)
RNN_output, RNN_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype = tf.float32)
return RNN_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
# Create RNN cell for decoding using rnn_size and num_layers
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers)
    # Create the output function using lambda to transform its input, logits, to class logits
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
    # Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
# sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
with tf.variable_scope('decoding') as decoding_scope:
training_logits = decoding_layer_train(encoder_state, cell, dec_embed_input, sequence_length,
decoding_scope, output_fn, keep_prob)
# Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
# maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
inference_logits = decoding_layer_infer(encoder_state, cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, sequence_length, vocab_size, decoding_scope,
output_fn, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Apply embedding to the input data for the encoder.
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Apply embedding to the target data for the decoder.
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size,
# sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
training_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 8
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
word_list = []
lower_words = sentence.lower().split()
for word in lower_words:
if(word in vocab_to_int):
word_list.append(vocab_to_int[word])
else:
word_list.append(vocab_to_int['<UNK>'])
return word_list
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
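A quick usage sketch (the returned ids are hypothetical, since they depend on the learned vocabulary):
# sentence_to_seq('He saw a SHINY truck .', source_vocab_to_int)
# -> e.g. [31, 104, 6, source_vocab_to_int['<UNK>'], 88, 3]   # 'shiny' is out of vocabulary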
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
13,519 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Is it possible to delete or insert a step in a sklearn.pipeline.Pipeline object? | Problem:
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures
estimators = [('reduce_poly', PolynomialFeatures()), ('dim_svm', PCA()), ('sVm_233', SVC())]
clf = Pipeline(estimators)
clf.steps.insert(0, ('reduce_dim', PCA())) |
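Deleting a step works on the same underlying steps list (a sketch; editing steps in place is not an officially documented API, so re-check that the remaining steps still chain together):
clf.steps.pop(0)      # drop the first step
# or equivalently: del clf.steps[0]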
13,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Theano
For a Theano tutorial please see
Step1: Now you can invoke f and pass the input values, i.e. f(1,1), f(10,-3) and the result for this operation is returned.
Step2: Printing of the graph
You can print the graph for the above value of z. For details see
Step3: The graph for z
Step4: Next we define some NumPy arrays with data and let Theano compute the result for $f(x,W,b)$
Step5: Don't confuse x, W, b with inputX, inputW, inputB. x, W, b contain pointers to your symbols in the compute graph; inputX, inputW, inputB contain your data.
Shared Variables and Updates
See
Step6: Shared Variables
We use theano.shared() to share a variable (i.e. make it internally available for Theano)
Internal state variables are passed at compile time via the parameter givens. So to compute the output z, use the shared variable state for the input variable x
For information on the borrow=True parameter see | Python Code:
import theano
import theano.tensor as T
x = T.dscalar('x') #First input variable to the compute graph
y = T.dscalar('y') #Second input variable to the compute graph
z = 3*x + x*y + 3*y #Our formula we like to compute
#Compile for the output z, given the inputs x and y
f = theano.function(inputs=[x,y], outputs=z)
Explanation: Introduction to Theano
For a Theano tutorial please see: http://deeplearning.net/software/theano/tutorial/index.html
Basic Operations
For more details see: http://deeplearning.net/software/theano/tutorial/adding.html
Task: Use Theano to compute a simple polynomial function $$f(x,y) = 3x+xy+3y$$
Hints:
- First define two input variables with the correct type (http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors)
- Define the computation of the function and store it in a variable
- Use the theano.function() to compile your computation graph
End of explanation
print f(1,1)
print f(10,-3)
Explanation: Now you can invoke f and pass the input values, i.e. f(1,1), f(10,-3) and the result for this operation is returned.
End of explanation
#Graph for z
theano.printing.pydotprint(z, outfile="pics/z_graph.png", var_with_name_simple=True)
#Graph for function f (after optimization)
theano.printing.pydotprint(f, outfile="pics/f_graph.png", var_with_name_simple=True)
Explanation: Printing of the graph
You can print the graph for the above value of z. For details see:
http://deeplearning.net/software/theano/library/printing.html
http://deeplearning.net/software/theano/tutorial/printing_drawing.html
To print the graph, further libraries must be installed. In 99% of your development time you don't need the graph printing function. Feel free to skip this section.
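If you do want the graph images, theano.printing.pydotprint needs the pydot (or pydot-ng) Python package plus a system Graphviz installation (a rough sketch; exact package names vary by platform):
# pip install pydot
# and a Graphviz install, e.g. apt-get install graphviz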
End of explanation
import theano
import theano.tensor as T
import numpy as np
x = T.fvector('x')
W = T.fmatrix('W')
b = T.fvector('b')
activation = T.dot(x,W)+b
z = T.tanh(activation)
f = theano.function(inputs=[x,W,b], outputs=[activation,z])
Explanation: The graph for z:
<img src="files/pics/z_graph.png">
The graph for f:
<img src="files/pics/f_graph.png">
Simple matrix multiplications
The following types for input variables are typically used:
byte: bscalar, bvector, bmatrix, btensor3, btensor4
16-bit integers: wscalar, wvector, wmatrix, wtensor3, wtensor4
32-bit integers: iscalar, ivector, imatrix, itensor3, itensor4
64-bit integers: lscalar, lvector, lmatrix, ltensor3, ltensor4
float: fscalar, fvector, fmatrix, ftensor3, ftensor4
double: dscalar, dvector, dmatrix, dtensor3, dtensor4
complex: cscalar, cvector, cmatrix, ctensor3, ctensor4
scalar: One element (one number)
vector: 1-dimension
matrix: 2-dimensions
tensor3: 3-dimensions
tensor4: 4-dimensions
As we do not need perfect precision we use mainly float instead of double. Most GPUs are also not able to handle doubles.
So in practice you need: iscalar, ivector, imatrix and fscalar, fvector, fmatrix.
Task: Implement the function $$f(x,W,b) = \tanh(xW+b)$$ with $x \in \mathbb{R}^n, b \in \mathbb{R}^k, W \in \mathbb{R}^{n \times k}$.
$n$ input dimension and $k$ output dimension
End of explanation
inputX = np.asarray([0.1, 0.2, 0.3], dtype='float32')
inputW = np.asarray([[0.1,-0.2],[-0.4,0.5],[0.6,-0.7]], dtype='float32')
inputB = np.asarray([0.1,0.2], dtype='float32')
print "inputX.shape",inputX.shape
print "inputW.shape",inputW.shape
f(inputX, inputW, inputB)
Explanation: Next we define some NumPy arrays with data and let Theano compute the result for $f(x,W,b)$
End of explanation
import theano
import theano.tensor as T
import numpy as np
#Define my internal state
init_value = 1
state = theano.shared(value=init_value, name='state')
#Define my operation f(x) = 2*x
x = T.lscalar('x')
z = 2*x
accumulator = theano.function(inputs=[], outputs=z, givens={x: state})
print accumulator()
print accumulator()
Explanation: Don't confuse x, W, b with inputX, inputW, inputB. x, W, b contain pointers to your symbols in the compute graph; inputX, inputW, inputB contain your data.
Shared Variables and Updates
See: http://deeplearning.net/software/theano/tutorial/examples.html#using-shared-variables
Using shared variables, we can create an internal state.
Creation of an accumulator:
At the beginning the state is initialized to a starting value (1 in the code above)
With each function call, update the state by a certain value
Later, in your neural networks, the weight matrices $W$ and the bias values $b$ will be stored as internal state / as shared variable.
Shared variables improve performance, as you need less transfer between your Python code and the execution of the compute graph (which is written & compiled from C code)
Shared variables can also be stored on your graphics card
End of explanation
#New accumulator function, now with an update
inc = T.lscalar('inc')
accumulator = theano.function(inputs=[inc], outputs=(state,z), givens={x: state}, updates=[(state,state+inc)])
print accumulator(1)
print accumulator(1)
print accumulator(1)
Explanation: Shared Variables
We use theano.shared() to share a variable (i.e. make it internally available for Theano)
Internal state variables are passed at compile time via the parameter givens. So to compute the output z, use the shared variable state for the input variable x
For information on the borrow=True parameter see: http://deeplearning.net/software/theano/tutorial/aliasing.html
In most cases we can set it to true and thereby increase performance.
Updating Shared Variables
Using the updates-parameter, we can specify how our shared variables should be updated
This is useful to create a train function for a neural network.
We create a function train(data) which computes the error and gradient
The computed gradient is then used in the same call to update the shared weights
Training just becomes: for mini_batch in mini_batches: train(mini_batch)
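A minimal sketch of that pattern for a single shared weight vector (hypothetical squared-error cost, reusing the imports from above):
W = theano.shared(np.zeros(3), name='W')      # shared weights, updated in place
x = T.dvector('x')
y = T.dscalar('y')
cost = T.sqr(T.dot(x, W) - y)                 # error for one training example
gW = T.grad(cost, W)                          # symbolic gradient w.r.t. W
train = theano.function(inputs=[x, y], outputs=cost,
                        updates=[(W, W - 0.1 * gW)])  # gradient step applied as an update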
End of explanation |
13,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: CAS
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
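For example, an OGCM-type ocean model would be recorded like this (illustrative only, not a statement about this particular model):
# DOC.set_value("OGCM")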
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
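As an illustration of the kind of equation meant here, the pre-TEOS-10 (EOS-80 / UNESCO 1983) fit for the freezing point of seawater as a function of practical salinity S and pressure p (dbar) is sketched below; a model selecting "TEOS 2010" would use the corresponding TEOS-10 routine instead.
def freezing_point_eos80(S, p=0.0):
    """Approximate seawater freezing point in deg C (EOS-80 fit); S in PSU, p in dbar."""
    return (-0.0575 * S + 1.710523e-3 * S**1.5
            - 2.154996e-4 * S**2 - 7.53e-4 * p)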
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
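For a rough sense of scale (illustrative only, not a statement about any particular model): a quarter-degree ORCA-type grid is usually quoted as 1442 x 1021 points, so the value entered here would be their product.
ni, nj = 1442, 1021   # illustrative ORCA025-like grid dimensions
print(ni * nj)        # roughly 1.47 million horizontal gridpoints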
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
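The split-explicit option above exists because external gravity waves are fast compared with the rest of the dynamics. A back-of-the-envelope CFL estimate (illustrative depth and grid spacing, not values from any specific model) shows why the barotropic sub-step must be much shorter than the baroclinic/tracer step:
import math
depth = 4000.0                        # assumed ocean depth in metres
dx = 25.0e3                           # assumed grid spacing in metres
c_external = math.sqrt(9.81 * depth)  # external gravity wave speed, ~198 m/s
print(dx / c_external)                # CFL-limited barotropic step, ~126 s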
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
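For the "Time + space varying (Smagorinsky)" choice, the viscosity is typically computed from the local horizontal deformation rate and the grid spacing. The sketch below is schematic only; the non-dimensional coefficient and the exact scaling differ between models, so every number here is a placeholder.
def smagorinsky_viscosity(du_dx, du_dy, dv_dx, dv_dy, dx, c_smag=0.1):
    """Illustrative horizontal Smagorinsky-type eddy viscosity (m2/s)."""
    tension = du_dx - dv_dy
    shear = du_dy + dv_dx
    deformation_rate = (tension**2 + shear**2) ** 0.5
    return (c_smag * dx) ** 2 * deformation_rate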
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
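To make the "Richardson number dependent - PP" option concrete, a Pacanowski-Philander-style profile is sketched below; the constants are commonly quoted ones but should be read as placeholders rather than what any specific model uses.
def pp_vertical_mixing(Ri, nu0=1.0e-2, nu_b=1.0e-4, kappa_b=1.0e-5, alpha=5.0):
    """Illustrative Richardson-number-dependent viscosity and diffusivity (m2/s)."""
    Ri = max(Ri, 0.0)                               # applied in stably stratified conditions
    nu = nu0 / (1.0 + alpha * Ri) ** 2 + nu_b       # eddy viscosity
    kappa = nu / (1.0 + alpha * Ri) + kappa_b       # eddy diffusivity for tracers
    return nu, kappa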
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
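For reference, the "Non-linear" choice typically corresponds to a quadratic drag law, sketched below with an assumed drag coefficient; the coefficient value and any background-velocity term vary between models.
def quadratic_bottom_stress(u_b, v_b, rho0=1026.0, c_drag=2.5e-3):
    """Illustrative quadratic bottom friction: stress components in N/m2."""
    speed = (u_b**2 + v_b**2) ** 0.5
    return rho0 * c_drag * speed * u_b, rho0 * c_drag * speed * v_b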
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
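As an illustration of a "2 extinction depth" scheme, the classic two-band profile of Paulson and Simpson (1977) splits the surface shortwave flux between a short and a long attenuation length; the Jerlov type I constants below are indicative only.
from math import exp

def shortwave_fraction(z, r=0.58, zeta1=0.35, zeta2=23.0):
    """Fraction of surface shortwave flux remaining at depth z (metres, positive down)."""
    return r * exp(-z / zeta1) + (1.0 - r) * exp(-z / zeta2)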
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
13,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Heads-up
The following code models an inverted pendulum, and uses a GP model to determine the safe region of attraction (ROA). The following is intended to illustrate the algorithm, not to be high-performance code. As such, the code will run very slowly, with the main bottleneck being the repeated GP predictions of the dynamics. There are several obvious points that could make the code run faster:
* A less conservative Lipschitz constant will allow coarser discretizations and therefore faster computations.
* Only evaluating states close to the boundary of the safe set, since those are the only states that are able to expand the ROA over time.
* Only partially update the GP predictions of the model where needed, rather than everywhere (lots of predictions are at states that are either unsafe and too far away from the current level set, or are already safe and evaluations there have no hope of expanding the ROA).
Dynamics model
We define the dynamics of an inverted pendulum
$$\ddot{\theta}(t) = \frac{mgl \sin(\theta(t)) + u(t)}{m l^2},$$
where $m$ is the mass, $g$ the gravitational constant, $l$ the length of the pendulum, $u$ the control input (torque), and $\theta$ the angle.
The prior model that we use considers no friction, as well as a mass that is $0.05\,$kg lighter.
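A minimal sketch of these dynamics is given below. It is illustrative only: it mirrors the parameter values used in the code further down, and it assumes a viscous friction term proportional to the angular velocity, which is one common modelling choice rather than the only one.
import numpy as np

def pendulum_dynamics(state, u, mass=0.15, length=0.5, friction=0.05, gravity=9.81):
    """Continuous-time dynamics: returns [dtheta/dt, domega/dt]."""
    theta, omega = state
    inertia = mass * length ** 2
    domega = (mass * gravity * length * np.sin(theta) + u - friction * omega) / inertia
    return np.array([omega, domega])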
Step5: Normalization
In order for the LQR to return meaningful results, as well as for the GP model to have simpler kernel parameters, we normalize the system dynamics (all dimensions have similar magnitudes).
$\theta$ is normalized within the maximum controllable angle, $\dot{\theta}$ is normalized with the eigenfrequency of the dynamics, and $u$ is normalized with the maximum allowed control input.
Step10: Dynamics functions
Here we define the physical dynamics, as well as the prior dynamics, which are a linearization of the true, nonlinear model with wrong parameters.
Step11: Discretization
We discretize the state into a grid world. Since we will use the conservative, theoretical Lipschitz constant of $\dot{V}(x)$ from Lemma 5, we have to discretize very finely. In practice, one may be tempted to pick larger values.
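A sketch of such a discretization over the normalized state space is shown below; the number of points and the limits are made up for illustration, whereas a real run would be driven by the Lipschitz constant and the domain of interest.
import numpy as np

num_points = 251                                   # per state dimension (illustrative)
axes = [np.linspace(-1.0, 1.0, num_points)] * 2    # normalized theta and theta_dot
mesh = np.meshgrid(*axes, indexing='ij')
grid = np.column_stack([m.ravel() for m in mesh])  # shape: (num_points**2, 2)
tau = axes[0][1] - axes[0][0]                      # discretization constant tau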
Step13: Gaussian process model of the error
We define the state vector $\mathbf{x} = [\mathbf{x}_1, \mathbf{x}_2] = [\theta, \dot{\theta}]$, so that the dynamics can be written as
$$
\dot{\mathbf{x}} =
\left[
\begin{matrix}
\mathbf{x}_2 \\
\frac{mgl \sin(\mathbf{x}_1) + \tau}{m l^2}
\end{matrix} \right]
$$
The first part of this equation says that the angle is equal to the integrated angular velocity. This is intuitively true, irrespective of model errors. As such, we only learn the model error of the second part of the dynamics. That is
$$\dot{\mathbf{x}} =
\left[
\begin{matrix}
\mathbf{x}_2 \\
\frac{mgl \sin(\mathbf{x}_1) + \tau}{m l^2} + g\pi(\mathbf{x})
\end{matrix} \right]
$$
As a kernel we choose $k(x,x') = k_{\mathrm{linear}}(x, x') * k_{\mathrm{Matern}}(x, x')$, the product of a linear and a Matern kernel. This encodes nonlinear functions with linearly increasing amplitude. For more details on what this kernel encodes, see the one-dimensional example.
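A sketch of such a kernel, assuming the GPy library; the hyperparameter values below are placeholders rather than the ones used later in the code.
import GPy

kernel = (GPy.kern.Linear(input_dim=2, variances=1.0) *
          GPy.kern.Matern32(input_dim=2, variance=1.0, lengthscale=0.2))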
Step14: Lipschitz constant
The Lipschitz constant is defined via the high-probability Lipschitz constant of the GP model, as well as the linear dynamics. Importantly, here we use the local Lipschitz constants. Since the kernel we have chosen implies increasing Lipschitz constants with distance from the origin, the worst-case Lipschitz constant would be too conservative.
Step15: True safe levelset
To get an intuition about the task at hand, we compute the maximum, safe level set (ROA) according to the true and prior dynamics. The learning algorithm only has access to the prior dynamics model, not the true model!
The plot shows the maximum level set (orange), and the region where $\dot{V}$ is sufficiently small (red). It can be seen that the prior model estimates a safe region that is too large, since it considers a lighter mass. Also, the third plot shows that we cannot recover the maximum level set with the learning method, since it considers $\dot{V}(x) < -L\tau$, rather than $\dot{V}(x) < 0$. For finer discretizations the two sets will get closer and closer to each other.
Step16: Online learning
Now let us see how the learning algorithm performs. We compute the maximum level set based on the GP estimate of the dynamics, and sample the most uncertain state within it for 100 iterations.
Step17: Warning
Step19: Plot results
We plot the resulting estimate. By restricting ourselves to the levelset $\dot{V} \leq -L \tau$, we cannot reach the true safe set. However, if we pick a less conservative Lipschitz constant and discretize at a finer rate, the two will approach each other. | Python Code:
n = 2
m = 1
# 'Wrong' model parameters
mass = 0.1
friction = 0.
length = 0.5
gravity = 9.81
inertia = mass * length ** 2
# True model parameters
true_mass = 0.15
true_friction = 0.05
true_length = length
true_inertia = true_mass * true_length ** 2
# Input saturation
x_max = np.deg2rad(30)
u_max = gravity * true_mass * true_length * np.sin(x_max)
# LQR cost matrices
Q = np.array([[1, 0], [0, 1]], dtype=np.float)
R = np.array([[0.1]], dtype=np.float)
Explanation: Heads-up
The following code models an inverted pendulum, and uses a GP model to determine the safe region of attraction (ROA). The following is intended to illustrate the algorithm, not to be high-performance code. As such, the code will run very slowly, with the main bottleneck being the repeated GP predictions of the dynamics. There are several obvious points that could make the code run faster:
* A less conservative Lipschitz constant will allow coarser discretizations and therefore faster computations.
* Only evaluating states close to the boundary of the safe set, since those are the only states that are able to expand the ROA over time.
* Only partially update the GP predictions of the model where needed, rather than everywhere (lots of predictions are at states that are either unsafe and too far away from the current level set, or are already safe and evaluations there have no hope of expanding the ROA).
Dynamics model
We define the dynamics of an inverted pendulum
$$\ddot{\theta}(t) = \frac{mgl \sin(\theta(t)) + u(t)}{m l^2},$$
where $m$ is the mass, $g$ the gravitational constant, $l$ the length of the pendulum, $u$ the control input (torque), and $\theta$ the angle.
The prior model that we use considers no friction, as well as a mass that is $0.05\,$kg lighter.
End of explanation
# Normalize the cost functions for the LQR computation
# x_normalized = inv(Tx) * x
Tx = np.diag([x_max, np.sqrt(gravity / length)])
Tu = np.array([[u_max]])
Tx_inv = np.diag(np.diag(Tx)**(-1))
Tu_inv = np.diag(np.diag(Tu)**(-1))
def normalize_x(x):
Normalize x vector
x = np.asarray(x)
return x.dot(Tx_inv)
def denormalize_x(x):
Denormalize x vector
x = np.asarray(x)
return x.dot(Tx)
def normalize_u(u):
Normalize u vector
u = np.asarray(u)
return u.dot(Tu_inv)
def denormalize_u(u):
Denormalize u vector
u = np.asarray(u)
return u.dot(Tu)
Explanation: Normalization
In order for the LQR to return meaningful results, as well as for the GP model to have simpler kernel parameters, we normalize the system dynamics (all dimensions have similar magnitudes).
$\theta$ is normalized within the maximum controllable angle, $\dot{\theta}$ is normalized with the eigenfrequency of the dynamics, and $u$ is normalized with the maximum allowed control input.
End of explanation
# Nonlinear dynamics
def ode(x, u):
True ode of the dynamics.
Parameters
----------
x: np.array
2D array with one, normalized state
at each column
u: np.array
2D array with one, normalized input
at each column
Returns
-------
x_dot: np.array
The normalized derivative of the dynamics
# Denormalize
x = denormalize_x(np.atleast_2d(x))
u = denormalize_u(np.asarray(u))
# Physical dynamics
x_dot = np.hstack([x[:, [1]],
(gravity / true_length * np.sin(x[:, [0]]) +
u / true_inertia
- true_friction / true_inertia * x[:, [1]])])
# Normalize
return normalize_x(x_dot)
# Linearized dynamics
A = np.array([[0, 1],
[gravity / length, -friction / inertia]])
B = np.array([[0],
[1 / inertia]])
# Normalize linear dynamics
An = Tx_inv.dot(A.dot(Tx))
Bn = Tx_inv.dot(B.dot(Tu))
# Obtain LQR controller gain and cost-to-go matrix
Kn, Pn = lqr(An, Bn, Q, R)
u_max_norm = normalize_u(u_max)
def control_law(x):
LQR controller with bounded (normalized) inputs.
Parameters
----------
x: np.array
2D array with one normalized state on each column
Returns
-------
u: np.array
2D array with normalized inputs on each column
x = np.asarray(x)
u = -x.dot(Kn.T)
np.clip(u, -u_max_norm, u_max_norm, out=u)
return u
def true_dynamics(x):
Return the true closed-loop, normalized dynamics.
Parameters
----------
x: np.array
2D array with one normalized state on each column
Returns
-------
x_dot: np.array
2D array with normalized derivative states on each column
x = np.asarray(x)
u = control_law(x)
return ode(x, u)
def prior_dynamics(x):
Return the linearized, closed-loop, prior, normalized dynamics.
Parameters
----------
x: np.array
2D array with one normalized state on each column
Returns
-------
x_dot: np.array
2D array with normalized derivative states on each column
x = np.asarray(x)
u = control_law(x)
return x.dot(An.T) + u.dot(Bn.T)
Explanation: Dynamics functions
Here we define the physical dynamics, as well as the prior dynamics, which are a linearization of the true, nonlinear model with wrong parameters.
End of explanation
# Discretization constant
tau = 0.002
# x_min, x_max, accuracy
grid_param = [(-0.5, 0.5, tau),
(-0.5, 0.5, tau)]
# Used to plot the safe set later
extent = np.array([grid_param[0][0], grid_param[0][1],
grid_param[1][0], grid_param[1][1]])
# Define a grid with combinations of states
grid = [np.arange(*x) for x in grid_param]
num_samples = [len(x) for x in grid]
grid = combinations(grid)
# Initial safe set
grid_true = denormalize_x(grid)
S0 = np.logical_and(np.abs(grid_true[:, 0]) < np.deg2rad(5),
np.abs(grid_true[:, 1]) < np.deg2rad(10))
if not np.any(S0):
print('No initial safe points!')
print('Grid size: {0} combinations in {1}x{2} discretized with tau={3}'
.format(len(grid), extent[:2], extent[2:], tau))
Explanation: Discretization
We discretize the state into a grid world. Since we will use the conservative, theoretical Lipschitz constant of $\dot{V}(x)$ from Lemma 5, we have to discretize very finely. In practice, one may be tempted to pick larger values.
End of explanation
# Mean function for the GP with the prior dynamics
mf = GPy.core.Mapping(2, 1)
mf.f = lambda x: prior_dynamics(x)[:, [1]]
mf.update_gradients = lambda a,b: None
# Matern kernel multiplied with linear kernel
kernel = (GPy.kern.Matern32(input_dim=2, lengthscale=.2, variance=5, name='radial') *
GPy.kern.Linear(input_dim=2, name='linear', variances=1))
# Measurement model
likelihood = GPy.likelihoods.Gaussian(variance=0.05**2)
# GP with initial measurement at (0, 0), 0
gp = GPy.core.GP(np.array([[0, 0]]), np.array([[0]]),
kernel, likelihood, mean_function=mf)
def predict_model(gp, x):
Predict the model using the gp dynamics
Given that the model error only affects the second derivative,
the first state has zero variance and is equal to the prior model.
Parameters
----------
gp: GPy.core.GP
The GP model of the dynamics (including prior)
x: np.array
2D array. Each column has one state at which
to predict the dynamics
Returns
-------
mean: np.array
The mean dynamics at x
var: np.array
Variance of the dynamics at x
gp_mean, gp_var = gp._raw_predict(x)
# Augment with deterministic model for first state
gp_mean = np.hstack([prior_dynamics(x)[:, [0]], gp_mean])
gp_var = np.hstack([np.zeros_like(gp_var), gp_var])
return gp_mean, gp_var
Explanation: Gaussian process model of the error
We define the state vector $\mathbf{x} = [\mathbf{x}_1, \mathbf{x}_2] = [\theta, \dot{\theta}]$, so that the dynamics can be written as
$$
\dot{\mathbf{x}} =
\left[
\begin{matrix}
\mathbf{x}_2 \\
\frac{mgl \sin(\mathbf{x}_1) + \tau}{m l^2}
\end{matrix} \right]
$$
The first part of this equation says that the angle is equal to the integrated angular velocity. This is intuitively true, irrespective of model errors. As such, we only learn the model error of the second part of the dynamics. That is,
$$\dot{\mathbf{x}} =
\left[
\begin{matrix}
\mathbf{x}_2 \\
\frac{mgl \sin(\mathbf{x}_1) + \tau}{m l^2} + g_\pi(\mathbf{x})
\end{matrix} \right]
$$
As a kernel we choose $k(x,x') = k_{\mathrm{linear}}(x, x') * k_{\mathrm{Matern}}(x, x')$, the product of a linear and a Matern kernel. This encodes nonlinear functions with linearly increasing amplitude. For more details on what this kernel encodes, see the one-dimensional example.
End of explanation
# Lyapunov function:
V, dV = quadratic_lyapunov_function(grid, Pn)
V_max = np.max(V)
accuracy = V_max / 1e10
# Lipschitz constants of Lyapunov function
B_dV = L_V = np.max(np.abs(dV), axis=1)
L_dV = np.max(Pn)
# Kernel parameters
kernel_lengthscale = np.min(gp.kern.radial.lengthscale).squeeze()
kernel_var = gp.kern.radial.variance.values.squeeze()
linear_var = gp.kern.linear.Kdiag(grid).squeeze()
# Dynamics Lipschitz constants
L_g = 2 * np.sqrt(kernel_var * linear_var) / kernel_lengthscale
L_f = np.max(np.abs(An - Bn.dot(Kn)))
# Function bounds
B_g = 2 * np.sqrt(kernel_var * linear_var)
B_f = prior_dynamics(grid)[:, 1]
L = (B_g + B_f) * L_dV + B_dV * (L_g + L_f)
Explanation: Lipschitz constant
The Lipschitz constant is defined via the high-probability Lipschitz constant of the GP model, as well as the linear dynamics. Importantly, here we use the local Lipschitz constants. Since the kernel we have chosen implies increasing Lipschitz constants with distance from the origin, the worst-case Lipschitz constant would be too conservative.
End of explanation
V_dot_true = compute_v_dot_upper_bound(dV, true_dynamics(grid), None)
V_dot_prior = compute_v_dot_upper_bound(dV, prior_dynamics(grid), None)
fig, axes = plt.subplots(1, 3, figsize=(10, 20))
S_true = get_safe_set(V_dot_true, 0, S0=None)
axes[0].imshow(np.reshape(S_true, num_samples).T, extent=extent, origin='lower')
c_true = find_max_levelset(S_true, V, accuracy)
axes[0].imshow(np.reshape(V <= c_true, num_samples).T, extent=extent, origin='lower', alpha=0.3, cmap='viridis')
axes[0].set_title('True safe set (V_dot < 0)')
S_prior = get_safe_set(V_dot_prior, 0, S0=S0)
c_prior = find_max_levelset(S_prior, V, accuracy)
axes[1].imshow(np.reshape(S_prior, num_samples).T, extent=extent, origin='lower')
axes[1].set_title('Prior safe set (V_dot < 0)')
axes[1].imshow(np.reshape(V < c_prior, num_samples).T, extent=extent, origin='lower', alpha=0.3, cmap='viridis')
S_true_L = get_safe_set(V_dot_true, -L*tau, S0=S0)
c_true_L = find_max_levelset(S_true_L, V, accuracy)
axes[2].imshow(np.reshape(S_true_L, num_samples).T, extent=extent, origin='lower')
axes[2].set_title('True safe set (V_dot < -L*tau)')
axes[2].imshow(np.reshape(V < c_true_L, num_samples).T, extent=extent, origin='lower', alpha=0.3, cmap='viridis')
plt.show()
print('Number of true safe points: {0}/{3}\n'
'Number of prior safe points: {1}/{3}\n'
'Number of finite safe points: {2}/{3}\n'.format(np.count_nonzero(V < c_true),
np.count_nonzero(V < c_prior),
np.count_nonzero(V < c_true_L),
grid.shape[0]))
Explanation: True safe levelset
To get an intuition about the task at hand, we compute the maximum, safe level set (ROA) according to the true and prior dynamics. The learning algorithm only has access to the prior dynamics model, not the true model!
The plot shows the maximum level set (orange), and the region where $\dot{V}$ is sufficiently small (red). It can be seen that the prior model estimates a safe region that is too large, since it considers a lighter mass. Also, the third plot shows that we cannot recover the maximum level set with the learning method, since it considers $\dot{V}(x) < -L\tau$, rather than $\dot{V}(x) < 0$. For finer discretizations the two sets will get closer and closer to each other.
End of explanation
V, dV = quadratic_lyapunov_function(grid, Pn)
def update_gp():
dynamics_mean, dynamics_var = predict_model(gp, grid)
V_dot = compute_v_dot_upper_bound(dV, dynamics_mean, dynamics_var, beta=2.)
S = get_safe_set(V_dot, -L*tau, S0=S0)
c = find_max_levelset(S, V, accuracy)
S[:] = V <= c
max_id = np.argmax(dynamics_var[S, 1])
max_state = grid[S][[max_id], :].copy()
gp.set_XY(np.vstack([gp.X, max_state]),
np.vstack([gp.Y, true_dynamics(max_state)[:, [1]]]))
return S
Explanation: Online learning
Now let us see how the learning algorithm performs. We compute the maximum level set based on the GP estimate of the dynamics, and sample the most uncertain state within for 100 iterations.
End of explanation
# Try to import a nice progress bar
try:
from tqdm import tqdm
except:
tqdm = lambda x: x
# Update the GP model 100 times
for i in tqdm(range(100)):
S = update_gp()
print('Number of estimated safe points: {0}% relative to true dynamics with V_dot < 0'
.format(np.count_nonzero(S) / np.count_nonzero(V < c_true)))
Explanation: Warning: This is non-optimized, academic code. Executing the following cell may take roughly a minute on a decent laptop.
End of explanation
def denorm_ellipse(P, level):
Return the ellipse_bounds output, but denormalized.
x0, x1_u, x1_l = ellipse_bounds(P, level)
return Tx[0,0] * x0, Tx[1,1] * x1_u, Tx[1,1] * x1_l
c_est = find_max_levelset(S, V, accuracy)
colors = ['b', 'm', 'r']
plt.fill_between(*denorm_ellipse(Pn, c_prior), color=colors[0], alpha=0.5)
plt.fill_between(*denorm_ellipse(Pn, c_true), color=colors[1], alpha=0.5)
plt.fill_between(*denorm_ellipse(Pn, c_est), color=colors[2], alpha=0.5)
patch0 = patches.Patch(color=colors[0], alpha=0.5, label='Prior safe set')
patch1 = patches.Patch(color=colors[1], alpha=0.5, label='True safe set')
patch2 = patches.Patch(color=colors[2], alpha=0.5, label='Estimated safe set')
legs = [patch0, patch1, patch2]
labels = [x.get_label() for x in legs]
leg = plt.legend(legs, labels, loc=3, borderaxespad=0)
data = denormalize_x(gp.X[1:, :])
plt.plot(data[:, 0], data[:, 1], 'x')
plt.xlabel(r'Angle $\theta$')
plt.ylabel(r'Angular velocity $\dot{\theta}$')
plt.show()
Explanation: Plot results
We plot the resulting estimate. By restricting ourselves to the levelset $\dot{V} \leq -L \tau$, we cannot reach the true safe set. However, if we pick a less conservative Lipschitz constant and discretize at a finer rate, the two will approach each other.
End of explanation |
13,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MPI-M
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example models that parameterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
13,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
More SQL
Let's grab a fairly large dataset, load it into a database, and work with it.
Getting your data
Capital Bikeshare trip data is a fun source of transactional data. We can work with one quarter's data to show a few key concepts.
The following few cells should be feeling like old hat to you by now.
Step1: It's in a zip format, so unzip it
Step2: How big is it?
Step3: What are its columns?
Step4: Okay, let's have a look.
Step5: Ah, that's kinda wordy. Let's cut out that first column, which we can compute for ourselves later.
Step6: That's a little bit cleaner, and the rest of the data should be useful. Let's clean up the data by removing that column and renaming the headers so they're a little easier to query.
Step7: Make sure you haven't lost anything!
Step8: Prepping and loading data into the database
Alright, then, let's get loading.
Step9: NOTE
Step10: Here's how we connect the notebook up to the mysql database using a username and password. Remember that this shorthand version is possible thanks to the excellent ipython-sql Jupyter extension that we're using, otherwise you'd have to establish the connection, get a cursor, etc., like you've done explicitly in python in your other class.
Not that there's anything wrong with that.
Step11: Very easy, no?
First, clean up if we're not running this for the first time.
Step12: Next, create a table schema using DDL.
Step13: Just to verify it worked
Step14: It worked! We just don't have any data in there yet.
Now we load the data using LOAD DATA INFILE. You can do pretty much the same thing from the bash shell using mysqlimport and a bunch of options. It'll read better here in the notebook with the options spelled out.
Docs for LOAD DATA INFILE are available at https://dev.mysql.com/doc/refman/5.1/en/load-data.html.
Step15: Note
Step16: Looks good! Let's look at the data a little.
Step17: How does MySQL construct this query, or more specifically, what's its execution plan? We can find out with EXPLAIN.
For more about how to read MySQL 5.5's query plan, see https://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html.
Step18: This says "using no keys, we're going to just scan roughly 395,390 rows, sans indexes, to answer this query."
Step19: Pretty much the same thing. You can't get the max without looking at all of the values if there is no index.
Step20: Now we see "using where" under "extra", so we know there's a filter operation, but that's about the only change. What if we add more things to filter on?
Step21: Ah, some more info - it looks like it's using a temporary relation to store intermediate results, perhaps for the GROUP BY, then a sort to handle ORDER BY.
Still no indexes, though. Let's change that.
Step22: I changed the query a little bit to use the index, do you see the difference? It found search keys in the index, and the row count went down by an order of magnitude. That's the power of indexes.
It helps even on simple queries like this.
Step23: What's that 201 value for rows? Maybe the actual count of distinct values. We can test that
Step24: There you go, that's exactly the answer.
How about that MAX() query we tried a little while back?
Step25: Let's create another index on start_date to see what the effect on the query plan will be.
Step26: Same result, but...
Step27: That's new! In this case it doesn't have to look at any rows, it can just look at one end of the index. We've optimized away the need to even look at the table.
Let's go back to COUNT() and try a few more things before we move on.
Step28: Do you see what happened there?
Normalizing attributes
Let's look at a few tasks you might need to perform if you were normalizing this dataset. Remember that in normalization, we reduce redundancy with the goal of consistency.
What's redundant? Well, the station names for one.
Step29: Hmm, they're different. Let's put them together.
Step30: We'll create a table to hold the names of stations. Each station name should be represented once, and we'll assign a primary key to each in the form of a unique integer.
Step31: Looks good. Now we can load the data with an INSERT that draws from our previous query. We can skip specifying the id because MySQL will do that for us.
Note
Step32: It worked. Now we can update the bikeshare table to add columns for station identifiers.
Step33: Looks good. But what exactly just happened?
Step34: What just happened? Why are all the start_station_id values None?
Let's fill in those values with our new identifiers from the station table.
Step35: Great, now we can drop start_station from bikeshare and save a lot of space.
Step36: Worked!
And we can repeat the process for end_station.
Step37: A lot leaner, right?
JOINs and indexes
Now let's look at queries that return station names, thus requiring a JOIN across the two tables. Keep in mind our two table schema.
Step38: Let's try a basic query that looks for the most busy station pairs.
Step39: Now let's liven it up by joining to station and including station names. We'll need to join twice, using two aliases.
Worked just fine. Let's look under the hood, though.
Step40: Looks good, and it's in my neighborhood.
Step41: Not bad, but it's doing a full table scan on bikeshare. Let's see if some indexes would help with the two joins.
Step42: Well, it's hard to say how much better this will perform without a lot more data. A COUNT operation simply needs to be able to count everything, if the level of granularity it's counting doesn't already have an easy lookup like we saw before. Sometimes you just don't feel the pain of scale until you hit a scaling threshold that varies with the shape of your data.
But - see the possible_keys in the first row? That means the optimizer sees the indexes present and will attempt to use those to at least organize the query a little better than it would be able to do without them.
Let's try one more thing - we can create an index on multiple columns that matches our query more precisely. It's inefficient to look up one column and then another when we're really looking for combinations of both; a multiple-column index can precompute that.
!wget https://www.capitalbikeshare.com/assets/files/trip-history-data/2013-Q1-Trips-History-Data.zip
Explanation: More SQL
Let's grab a fairly large dataset, load it into a database, and work with it.
Getting your data
Capital Bikeshare trip data is a fun source of transactional data. We can work with one quarter's data to show a few key concepts.
The following few cells should be feeling like old hat to you by now.
End of explanation
!unzip 2013-Q1-Trips-History-Data.zip
Explanation: It's in a zip format, so unzip it:
End of explanation
!wc 2013-Q1-Trips-History-Data.csv
Explanation: How big is it?
End of explanation
!csvcut -n 2013-Q1-Trips-History-Data.csv
Explanation: What are its columns?
End of explanation
!head -5 2013-Q1-Trips-History-Data.csv | csvlook
Explanation: Okay, let's have a look.
End of explanation
!head 2013-Q1-Trips-History-Data.csv | csvcut -C1 | csvlook
Explanation: Ah, that's kinda wordy. Let's cut out that first column, which we can compute for ourselves later.
End of explanation
!csvcut -C1 2013-Q1-Trips-History-Data.csv | \
header -r "start_date,end_date,start_station,end_station,bike_id,sub_type" \
> bikeshare.csv
Explanation: That's a little bit cleaner, and the rest of the data should be useful. Let's clean up the data by removing that column and renaming the headers so they're a little easier to query.
End of explanation
!wc bikeshare.csv
Explanation: Make sure you haven't lost anything!
End of explanation
%load_ext sql
Explanation: Prepping and loading data into the database
Alright, then, let's get loading.
End of explanation
!echo "CREATE DATABASE bikedb" | mysql --user=mysqluser --password=mysqlpass
Explanation: NOTE: See a bunch of ShimWarnings with a pink background? That's normal. It's just a heads-up about ongoing changes to IPython/Jupyter code. You can keep going.
First, we create a database in mysql. Note: you can do the same thing on the command line by issuing the CREATE DATABASE command part before the pipe within the mysql shell, which you get to with the second part after the pipe. Here we'll pipe the one into the other so it reads well in the notebook.
End of explanation
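If you'd rather do it interactively as described above, the same thing looks roughly like this at the shell (same credentials as the piped version):
$ mysql --user=mysqluser --password=mysqlpass
mysql> CREATE DATABASE bikedb;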
%sql mysql://mysqluser:mysqlpass@localhost/bikedb
Explanation: Here's how we connect the notebook up to the mysql database using a username and password. Remember that this shorthand version is possible thanks to the excellent ipython-sql Jupyter extension that we're using, otherwise you'd have to establish the connection, get a cursor, etc., like you've done explicitly in python in your other class.
Not that there's anything wrong with that.
End of explanation
%%sql
DROP TABLE IF EXISTS bikeshare;
Explanation: Very easy, no?
First, clean up if we're not running this for the first time.
End of explanation
%%sql
CREATE TABLE bikeshare (
start_date DATETIME,
end_date DATETIME,
start_station VARCHAR(100),
end_station VARCHAR(100),
bike_id CHAR(7),
sub_type CHAR(10)
)
Explanation: Next, create a table schema using DDL.
End of explanation
%%sql
SELECT COUNT(*)
FROM bikeshare
Explanation: Just to verify it worked:
End of explanation
%%sql
LOAD DATA INFILE '/vagrant/bikeshare.csv'
REPLACE
INTO TABLE bikeshare
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
(@start_date, @end_date, start_station, end_station, bike_id, sub_type)
SET start_date = STR_TO_DATE(@start_date, '%c/%e/%Y %k:%i'),
end_date = STR_TO_DATE(@end_date, '%c/%e/%Y %k:%i')
Explanation: It worked! We just don't have any data in there yet.
Now we load the data using LOAD DATA INFILE. You can do pretty much the same thing from the bash shell using mysqlimport and a bunch of options. It'll read better here in the notebook with the options spelled out.
Docs for LOAD DATA INFILE are available at https://dev.mysql.com/doc/refman/5.1/en/load-data.html.
Note: this assumes you've placed your bikeshare file in the directory /vagrant.
Note also: I had to look up the mysql date formatting docs to get this date format conversion correct. It took me a few trials and errors before I got it right. This is an extremely common thing to have to do if you ever spend time wrangling data - every system handles dates in its own way.
End of explanation
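If you want to sanity-check a format string like the one above before running a full import, you can evaluate STR_TO_DATE on a single made-up literal first (the date below is just an example value, not from the dataset):
%%sql
SELECT STR_TO_DATE('3/14/2013 9:05', '%c/%e/%Y %k:%i') AS parsed_example
-- should come back as 2013-03-14 09:05:00 if the format string is right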
%%sql
SELECT COUNT(*)
FROM bikeshare
Explanation: Note: if the above command fails for you with a "file not found" error, please read these notes about apparmor. Follow that advice, and add a line like it shows, e.g.:
/vagrant/* r
...to the file, or whatever path you have your data on, reload apparmor, and try again. I had to do this, and it worked perfectly after I made that change.
Exploring your data
Now that we've loaded our data, or we think we have, let's just verify it. Should be the same row count as what csvkit and wc gave us.
End of explanation
%%sql
SELECT *
FROM bikeshare
LIMIT 5
Explanation: Looks good! Let's look at the data a little.
End of explanation
%%sql
EXPLAIN SELECT COUNT(*)
FROM bikeshare
LIMIT 5
Explanation: How does MySQL construct this query, or more specifically, what's its execution plan? We can find out with EXPLAIN.
For more about how to read MySQL 5.5's query plan, see https://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html.
End of explanation
%%sql
SELECT MAX(start_date)
FROM bikeshare
%%sql
EXPLAIN SELECT MAX(start_date)
FROM bikeshare
Explanation: This says "using no keys, we're going to just scan roughly 395,390 rows, sans indexes, to answer this query."
End of explanation
%%sql
SELECT COUNT(*)
FROM bikeshare
WHERE start_station LIKE "%dupont%"
%%sql
EXPLAIN SELECT COUNT(*)
FROM bikeshare
WHERE start_station LIKE "%dupont%"
Explanation: Pretty much the same thing. You can't get the max without looking at all of the values if there is no index.
End of explanation
%%sql
EXPLAIN SELECT start_station, end_station, COUNT(*)
FROM bikeshare
WHERE start_station LIKE "%dupont%"
AND end_station LIKE "%21st%"
AND start_date LIKE "2013-02-14%"
GROUP BY start_station, end_station
ORDER BY start_station, end_station
Explanation: Now we see "using where" under "extra", so we know there's a filter operation, but that's about the only change. What if we add more things to filter on?
End of explanation
%%sql
CREATE INDEX idx_start_station ON bikeshare (start_station)
%%sql
EXPLAIN SELECT start_station, end_station, COUNT(*)
FROM bikeshare
WHERE start_station LIKE "21st%"
AND start_date LIKE "2013-02-14%"
GROUP BY start_station, end_station
ORDER BY start_station, end_station
Explanation: Ah, some more info - it looks like it's using a temporary relation to store intermediate results, perhaps for the GROUP BY, then a sort to handle ORDER BY.
Still no indexes, though. Let's change that.
End of explanation
%%sql
EXPLAIN SELECT DISTINCT start_station
FROM bikeshare
ORDER BY start_station
Explanation: I changed the query a little bit to use the index, do you see the difference? It found search keys in the index, and the row count went down by an order of magnitude. That's the power of indexes.
It helps even on simple queries like this.
End of explanation
%%sql
SELECT COUNT(*)
FROM (
SELECT DISTINCT start_station
FROM bikeshare
) made_up_subquery_alias_name
Explanation: What's that 201 value for rows? Maybe the actual count of distinct values. We can test that:
End of explanation
%%sql
SELECT MAX(start_date)
FROM bikeshare
%%sql
EXPLAIN SELECT MAX(start_date)
FROM bikeshare
Explanation: There you go, that's exactly the answer.
How about that MAX() query we tried a little while back?
End of explanation
%%sql
CREATE INDEX idx_start_date ON bikeshare (start_date)
%%sql
SELECT MAX(start_date)
FROM bikeshare
Explanation: Let's create another index on start_date to see what the effect on the query plan will be.
End of explanation
%%sql
EXPLAIN SELECT MAX(start_date)
FROM bikeshare
Explanation: Same result, but...
End of explanation
%%sql
EXPLAIN SELECT COUNT(*)
FROM bikeshare
%%sql
EXPLAIN SELECT COUNT(start_date)
FROM bikeshare
%%sql
EXPLAIN SELECT COUNT(end_date)
FROM bikeshare
Explanation: That's new! In this case it doesn't have to look at any rows, it can just look at one end of the index. We've optimized away the need to even look at the table.
Let's go back to COUNT() and try a few more things before we move on.
End of explanation
%%sql
SELECT COUNT(DISTINCT start_station)
FROM bikeshare
%%sql
SELECT COUNT(DISTINCT end_station)
FROM bikeshare
Explanation: Do you see what happened there?
Normalizing attributes
Let's look at a few tasks you might need to perform if you were normalizing this dataset. Remember that in normalization, we reduce redundancy with the goal of consistency.
What's redundant? Well, the station names for one.
End of explanation
%%sql
SELECT COUNT(DISTINCT station) FROM
(
SELECT start_station AS station FROM bikeshare
UNION
SELECT end_station AS station FROM bikeshare
) a
Explanation: Hmm, they're different. Let's put them together.
End of explanation
%%sql
CREATE TABLE station (
id SMALLINT NOT NULL AUTO_INCREMENT,
name VARCHAR(100),
PRIMARY KEY (id)
)
%%sql
SELECT COUNT(*)
FROM station
Explanation: We'll create a table to hold the names of stations. Each station name should be represented once, and we'll assign a primary key to each in the form of a unique integer.
End of explanation
%%sql
INSERT INTO station (name)
SELECT DISTINCT station AS name
FROM
(
SELECT start_station AS station FROM bikeshare
UNION
SELECT end_station AS station FROM bikeshare
) a
%%sql
SELECT *
FROM station
LIMIT 10
Explanation: Looks good. Now we can load the data with an INSERT that draws from our previous query. We can skip specifying the id because MySQL will do that for us.
Note: every database handles this issue its own way. This is a nice convenience in MySQL; other database backends require more work.
End of explanation
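As an aside on that note about other databases: in PostgreSQL, for example, the usual way to get the same auto-assigned key is a SERIAL (or identity) column instead of AUTO_INCREMENT. A rough sketch of the equivalent DDL, just for comparison (not runnable against our MySQL instance):
CREATE TABLE station (
    id   SERIAL PRIMARY KEY,
    name VARCHAR(100)
);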
%%sql
ALTER TABLE bikeshare
ADD COLUMN start_station_id SMALLINT
AFTER start_station
Explanation: It worked. Now we can update the bikeshare table to add columns for station identifiers.
End of explanation
%%sql
DESCRIBE bikeshare
%%sql
SELECT *
FROM bikeshare
LIMIT 5
Explanation: Looks good. But what exactly just happened?
End of explanation
%%sql
UPDATE bikeshare
INNER JOIN station
ON bikeshare.start_station = station.name
SET bikeshare.start_station_id = station.id
%%sql
SELECT * FROM bikeshare LIMIT 5
%%sql
SELECT * FROM station WHERE id = 161
Explanation: What just happened? Why are all the start_station_id values None?
Let's fill in those values with our new identifiers from the station table.
End of explanation
%%sql
ALTER TABLE bikeshare
DROP COLUMN start_station
%%sql
DESCRIBE bikeshare
%%sql
SELECT * FROM bikeshare LIMIT 5
Explanation: Great, now we can drop start_station from bikeshare and save a lot of space.
End of explanation
%%sql
ALTER TABLE bikeshare
ADD COLUMN end_station_id SMALLINT
AFTER end_station
%%sql
UPDATE bikeshare
INNER JOIN station
ON bikeshare.end_station = station.name
SET bikeshare.end_station_id = station.id
%%sql
ALTER TABLE bikeshare
DROP COLUMN end_station
%%sql
SELECT * FROM bikeshare LIMIT 5
Explanation: Worked!
And we can repeat the process for end_station.
End of explanation
%%sql
DESCRIBE station
%%sql
DESCRIBE bikeshare
Explanation: A lot leaner, right?
JOINs and indexes
Now let's look at queries that return station names, thus requiring a JOIN across the two tables. Keep in mind our two table schema.
End of explanation
%%sql
SELECT COUNT(*) AS c, start_station_id, end_station_id
FROM bikeshare
GROUP BY start_station_id, end_station_id
ORDER BY c DESC
LIMIT 5
Explanation: Let's try a basic query that looks for the most busy station pairs.
End of explanation
%%sql
SELECT COUNT(*) AS c, station_1.name AS start_station, station_2.name AS end_station
FROM bikeshare, station AS station_1, station AS station_2
WHERE station_1.id = bikeshare.start_station_id
AND station_2.id = bikeshare.end_station_id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5
Explanation: Now let's liven it up by joining to station and including station names. We'll need to join twice, using two aliases.
Worked just fine. Let's look under the hood, though.
End of explanation
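Before digging into the plan, one quick aside: the comma-style join above can be written equivalently with explicit JOIN ... ON syntax, which many people find easier to read. It returns the same rows; use whichever style you prefer.
%%sql
SELECT COUNT(*) AS c, station_1.name AS start_station, station_2.name AS end_station
FROM bikeshare
JOIN station AS station_1 ON station_1.id = bikeshare.start_station_id
JOIN station AS station_2 ON station_2.id = bikeshare.end_station_id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5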
%%sql
EXPLAIN SELECT COUNT(*) AS c, station_1.name AS start_station, station_2.name AS end_station
FROM station AS station_1, station AS station_2, bikeshare
WHERE bikeshare.start_station_id = station_1.id
AND bikeshare.end_station_id = station_2.id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5
Explanation: Looks good, and it's in my neighborhood. :)
Let's look at the query plan for all this:
End of explanation
%%sql
CREATE INDEX idx_start_station_id ON bikeshare (start_station_id)
%%sql
CREATE INDEX idx_end_station_id ON bikeshare (end_station_id)
%%sql
EXPLAIN SELECT COUNT(*) AS c, station_1.name AS s1_name, station_2.name AS s2_name
FROM bikeshare, station AS station_1, station AS station_2
WHERE station_1.id = bikeshare.start_station_id
AND station_2.id = bikeshare.end_station_id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5
Explanation: Not bad, but it's doing a full table scan on bikeshare. Let's see if some indexes would help with the two joins.
End of explanation
%%sql
CREATE INDEX idx_stations ON bikeshare (start_station_id, end_station_id)
%%sql
EXPLAIN SELECT COUNT(*) AS c, station_1.name AS s1_name, station_2.name AS s2_name
FROM bikeshare, station AS station_1, station AS station_2
WHERE station_1.id = bikeshare.start_station_id
AND station_2.id = bikeshare.end_station_id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5
Explanation: Well, it's hard to say how much better this will perform without a lot more data. A COUNT operation simply needs to be able to count everything, if the level of granularity it's counting doesn't already have an easy lookup like we saw before. Sometimes you just don't feel the pain of scale until you hit a scaling threshold that varies with the shape of your data.
But - see the possible_keys in the first row? That means the optimizer sees the indexes present and will attempt to use those to at least organize the query a little better than it would be able to do without them.
Let's try one more thing - we can create an index on multiple columns that matches our query more precisely. It's inefficient to look up one column and then another when we're really looking for combinations of both; a multiple-column index can precompute that.
End of explanation |
13,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing functional near-infrared spectroscopy (fNIRS) data
This tutorial covers how to convert functional near-infrared spectroscopy
(fNIRS) data from raw measurements to relative oxyhaemoglobin (HbO) and
deoxyhaemoglobin (HbR) concentration.
Step1: View location of sensors over brain surface
Here we validate that the location of sources-detector pairs and channels
are in the expected locations. Source-detector pairs are shown as lines
between the optodes, channels (the mid point of source-detector pairs) are
optionally shown as orange dots. Sources are optionally shown as red dots and
detectors as black.
Step2: Selecting channels appropriate for detecting neural responses
First we remove channels that are too close together (short channels) to
detect a neural response (less than 1 cm distance between optodes).
These short channels can be seen in the figure above.
To achieve this we pick all the channels that are not considered to be short.
Step3: Converting from raw intensity to optical density
The raw intensity values are then converted to optical density.
Step4: Evaluating the quality of the data
At this stage we can quantify the quality of the coupling
between the scalp and the optodes using the scalp coupling index. This
method looks for the presence of a prominent synchronous signal in the
frequency range of cardiac signals across both photodetected signals.
In this example the data is clean and the coupling is good for all
channels, so we will not mark any channels as bad based on the scalp
coupling index.
Step5: In this example we will mark all channels with a SCI less than 0.5 as bad
(this dataset is quite clean, so no channels are marked as bad).
Step6: At this stage it is appropriate to inspect your data
(for instructions on how to use the interactive data visualisation tool
see tut-visualize-raw)
to ensure that channels with poor scalp coupling have been removed.
If your data contains lots of artifacts you may decide to apply
artifact reduction techniques as described in ex-fnirs-artifacts.
Converting from optical density to haemoglobin
Next we convert the optical density data to haemoglobin concentration using
the modified Beer-Lambert law.
Step7: Removing heart rate from signal
The haemodynamic response has frequency content predominantly below 0.5 Hz.
An increase in activity around 1 Hz can be seen in the data that is due to
the person's heart beat and is unwanted. So we use a low pass filter to
remove this. A high pass filter is also included to remove slow drifts
in the data.
Step8: Extract epochs
Now that the signal has been converted to relative haemoglobin concentration,
and the unwanted heart rate component has been removed, we can extract epochs
related to each of the experimental conditions.
First we extract the events of interest and visualise them to ensure they are
correct.
Step9: Next we define the range of our epochs, the rejection criteria,
baseline correction, and extract the epochs. We visualise the log of which
epochs were dropped.
Step10: View consistency of responses across trials
Now we can view the haemodynamic response for our tapping condition.
We visualise the response for both the oxy- and deoxyhaemoglobin, and
observe the expected peak in HbO at around 6 seconds consistently across
trials, and the consistent dip in HbR that is slightly delayed relative to
the HbO peak.
Step11: We can also view the epoched data for the control condition and observe
that it does not show the expected morphology.
Step12: View consistency of responses across channels
Similarly we can view how consistent the response is across the optode
pairs that we selected. All the channels in this data are located over the
motor cortex, and all channels show a similar pattern in the data.
Step13: Plot standard fNIRS response image
Next we generate the most common visualisation of fNIRS data
Step14: View topographic representation of activity
Next we view how the topographic activity changes throughout the response.
Step15: Compare tapping of left and right hands
Finally we generate topo maps for the left and right conditions to view
the location of activity. First we visualise the HbO activity.
Step16: And we also view the HbR activity for the two conditions.
Step17: And we can plot the comparison at a single time point for two conditions.
Step18: Lastly, we can also look at the individual waveforms to see what is
driving the topographic plot above. | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from itertools import compress
import mne
fnirs_data_folder = mne.datasets.fnirs_motor.data_path()
fnirs_cw_amplitude_dir = os.path.join(fnirs_data_folder, 'Participant-1')
raw_intensity = mne.io.read_raw_nirx(fnirs_cw_amplitude_dir, verbose=True)
raw_intensity.load_data()
Explanation: Preprocessing functional near-infrared spectroscopy (fNIRS) data
This tutorial covers how to convert functional near-infrared spectroscopy
(fNIRS) data from raw measurements to relative oxyhaemoglobin (HbO) and
deoxyhaemoglobin (HbR) concentration.
:depth: 2
Here we will work with the fNIRS motor data <fnirs-motor-dataset>.
End of explanation
subjects_dir = mne.datasets.sample.data_path() + '/subjects'
fig = mne.viz.create_3d_figure(size=(800, 600), bgcolor='white')
fig = mne.viz.plot_alignment(raw_intensity.info, show_axes=True,
subject='fsaverage', coord_frame='mri',
trans='fsaverage', surfaces=['brain'],
fnirs=['channels', 'pairs',
'sources', 'detectors'],
subjects_dir=subjects_dir, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=20, elevation=60, distance=0.4,
focalpoint=(0., -0.01, 0.02))
Explanation: View location of sensors over brain surface
Here we validate that the location of sources-detector pairs and channels
are in the expected locations. Source-detector pairs are shown as lines
between the optodes, channels (the mid point of source-detector pairs) are
optionally shown as orange dots. Sources are optionally shown as red dots and
detectors as black.
End of explanation
picks = mne.pick_types(raw_intensity.info, meg=False, fnirs=True)
dists = mne.preprocessing.nirs.source_detector_distances(
raw_intensity.info, picks=picks)
raw_intensity.pick(picks[dists > 0.01])
raw_intensity.plot(n_channels=len(raw_intensity.ch_names),
duration=500, show_scrollbars=False)
Explanation: Selecting channels appropriate for detecting neural responses
First we remove channels that are too close together (short channels) to
detect a neural response (less than 1 cm distance between optodes).
These short channels can be seen in the figure above.
To achieve this we pick all the channels that are not considered to be short.
End of explanation
raw_od = mne.preprocessing.nirs.optical_density(raw_intensity)
raw_od.plot(n_channels=len(raw_od.ch_names),
duration=500, show_scrollbars=False)
Explanation: Converting from raw intensity to optical density
The raw intensity values are then converted to optical density.
End of explanation
sci = mne.preprocessing.nirs.scalp_coupling_index(raw_od)
fig, ax = plt.subplots()
ax.hist(sci)
ax.set(xlabel='Scalp Coupling Index', ylabel='Count', xlim=[0, 1])
Explanation: Evaluating the quality of the data
At this stage we can quantify the quality of the coupling
between the scalp and the optodes using the scalp coupling index. This
method looks for the presence of a prominent synchronous signal in the
frequency range of cardiac signals across both photodetected signals.
In this example the data is clean and the coupling is good for all
channels, so we will not mark any channels as bad based on the scalp
coupling index.
End of explanation
raw_od.info['bads'] = list(compress(raw_od.ch_names, sci < 0.5))
Explanation: In this example we will mark all channels with a SCI less than 0.5 as bad
(this dataset is quite clean, so no channels are marked as bad).
End of explanation
raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od)
raw_haemo.plot(n_channels=len(raw_haemo.ch_names),
duration=500, show_scrollbars=False)
Explanation: At this stage it is appropriate to inspect your data
(for instructions on how to use the interactive data visualisation tool
see tut-visualize-raw)
to ensure that channels with poor scalp coupling have been removed.
If your data contains lots of artifacts you may decide to apply
artifact reduction techniques as described in ex-fnirs-artifacts.
Converting from optical density to haemoglobin
Next we convert the optical density data to haemoglobin concentration using
the modified Beer-Lambert law.
End of explanation
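For orientation, the modified Beer-Lambert law used here relates the optical density change at each wavelength to the concentration changes roughly as ΔOD(λ) ≈ (ε_HbO(λ)·Δ[HbO] + ε_HbR(λ)·Δ[HbR]) · d · DPF(λ), where ε are the extinction coefficients, d is the source-detector separation and DPF is the differential pathlength factor. Measuring at two wavelengths gives two such equations, which are solved for the two concentration changes. (This is an informal statement for context, not quoted from the MNE documentation.)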
fig = raw_haemo.plot_psd(average=True)
fig.suptitle('Before filtering', weight='bold', size='x-large')
fig.subplots_adjust(top=0.88)
raw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2,
l_trans_bandwidth=0.02)
fig = raw_haemo.plot_psd(average=True)
fig.suptitle('After filtering', weight='bold', size='x-large')
fig.subplots_adjust(top=0.88)
Explanation: Removing heart rate from signal
The haemodynamic response has frequency content predominantly below 0.5 Hz.
An increase in activity around 1 Hz can be seen in the data that is due to
the person's heart beat and is unwanted. So we use a low pass filter to
remove this. A high pass filter is also included to remove slow drifts
in the data.
End of explanation
events, _ = mne.events_from_annotations(raw_haemo, event_id={'1.0': 1,
'2.0': 2,
'3.0': 3})
event_dict = {'Control': 1, 'Tapping/Left': 2, 'Tapping/Right': 3}
fig = mne.viz.plot_events(events, event_id=event_dict,
sfreq=raw_haemo.info['sfreq'])
fig.subplots_adjust(right=0.7) # make room for the legend
Explanation: Extract epochs
Now that the signal has been converted to relative haemoglobin concentration,
and the unwanted heart rate component has been removed, we can extract epochs
related to each of the experimental conditions.
First we extract the events of interest and visualise them to ensure they are
correct.
End of explanation
reject_criteria = dict(hbo=80e-6)
tmin, tmax = -5, 15
epochs = mne.Epochs(raw_haemo, events, event_id=event_dict,
tmin=tmin, tmax=tmax,
reject=reject_criteria, reject_by_annotation=True,
proj=True, baseline=(None, 0), preload=True,
detrend=None, verbose=True)
epochs.plot_drop_log()
Explanation: Next we define the range of our epochs, the rejection criteria,
baseline correction, and extract the epochs. We visualise the log of which
epochs were dropped.
End of explanation
epochs['Tapping'].plot_image(combine='mean', vmin=-30, vmax=30,
ts_args=dict(ylim=dict(hbo=[-15, 15],
hbr=[-15, 15])))
Explanation: View consistency of responses across trials
Now we can view the haemodynamic response for our tapping condition.
We visualise the response for both the oxy- and deoxyhaemoglobin, and
observe the expected peak in HbO at around 6 seconds consistently across
trials, and the consistent dip in HbR that is slightly delayed relative to
the HbO peak.
End of explanation
epochs['Control'].plot_image(combine='mean', vmin=-30, vmax=30,
ts_args=dict(ylim=dict(hbo=[-15, 15],
hbr=[-15, 15])))
Explanation: We can also view the epoched data for the control condition and observe
that it does not show the expected morphology.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15, 6))
clims = dict(hbo=[-20, 20], hbr=[-20, 20])
epochs['Control'].average().plot_image(axes=axes[:, 0], clim=clims)
epochs['Tapping'].average().plot_image(axes=axes[:, 1], clim=clims)
for column, condition in enumerate(['Control', 'Tapping']):
for ax in axes[:, column]:
ax.set_title('{}: {}'.format(condition, ax.get_title()))
Explanation: View consistency of responses across channels
Similarly we can view how consistent the response is across the optode
pairs that we selected. All the channels in this data are located over the
motor cortex, and all channels show a similar pattern in the data.
End of explanation
evoked_dict = {'Tapping/HbO': epochs['Tapping'].average(picks='hbo'),
'Tapping/HbR': epochs['Tapping'].average(picks='hbr'),
'Control/HbO': epochs['Control'].average(picks='hbo'),
'Control/HbR': epochs['Control'].average(picks='hbr')}
# Rename channels until the encoding of frequency in ch_name is fixed
for condition in evoked_dict:
evoked_dict[condition].rename_channels(lambda x: x[:-4])
color_dict = dict(HbO='#AA3377', HbR='b')
styles_dict = dict(Control=dict(linestyle='dashed'))
mne.viz.plot_compare_evokeds(evoked_dict, combine="mean", ci=0.95,
colors=color_dict, styles=styles_dict)
Explanation: Plot standard fNIRS response image
Next we generate the most common visualisation of fNIRS data: plotting
both the HbO and HbR on the same figure to illustrate the relation between
the two signals.
End of explanation
times = np.arange(-3.5, 13.2, 3.0)
topomap_args = dict(extrapolate='local')
epochs['Tapping'].average(picks='hbo').plot_joint(
times=times, topomap_args=topomap_args)
Explanation: View topographic representation of activity
Next we view how the topographic activity changes throughout the response.
End of explanation
times = np.arange(4.0, 11.0, 1.0)
epochs['Tapping/Left'].average(picks='hbo').plot_topomap(
times=times, **topomap_args)
epochs['Tapping/Right'].average(picks='hbo').plot_topomap(
times=times, **topomap_args)
Explanation: Compare tapping of left and right hands
Finally we generate topo maps for the left and right conditions to view
the location of activity. First we visualise the HbO activity.
End of explanation
epochs['Tapping/Left'].average(picks='hbr').plot_topomap(
times=times, **topomap_args)
epochs['Tapping/Right'].average(picks='hbr').plot_topomap(
times=times, **topomap_args)
Explanation: And we also view the HbR activity for the two conditions.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(9, 5),
gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1]))
vmin, vmax, ts = -8, 8, 9.0
evoked_left = epochs['Tapping/Left'].average()
evoked_right = epochs['Tapping/Right'].average()
evoked_left.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 0],
vmin=vmin, vmax=vmax, colorbar=False,
**topomap_args)
evoked_left.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 0],
vmin=vmin, vmax=vmax, colorbar=False,
**topomap_args)
evoked_right.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 1],
vmin=vmin, vmax=vmax, colorbar=False,
**topomap_args)
evoked_right.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 1],
vmin=vmin, vmax=vmax, colorbar=False,
**topomap_args)
evoked_diff = mne.combine_evoked([evoked_left, evoked_right], weights=[1, -1])
evoked_diff.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 2:],
vmin=vmin, vmax=vmax, colorbar=True,
**topomap_args)
evoked_diff.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 2:],
vmin=vmin, vmax=vmax, colorbar=True,
**topomap_args)
for column, condition in enumerate(
['Tapping Left', 'Tapping Right', 'Left-Right']):
for row, chroma in enumerate(['HbO', 'HbR']):
axes[row, column].set_title('{}: {}'.format(chroma, condition))
fig.tight_layout()
Explanation: And we can plot the comparison at a single time point for two conditions.
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 4))
mne.viz.plot_evoked_topo(epochs['Left'].average(picks='hbo'), color='b',
axes=axes, legend=False)
mne.viz.plot_evoked_topo(epochs['Right'].average(picks='hbo'), color='r',
axes=axes, legend=False)
# Tidy the legend
leg_lines = [line for line in axes.lines if line.get_c() == 'b'][:1]
leg_lines.append([line for line in axes.lines if line.get_c() == 'r'][0])
fig.legend(leg_lines, ['Left', 'Right'], loc='lower right')
Explanation: Lastly, we can also look at the individual waveforms to see what is
driving the topographic plot above.
End of explanation |
13,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automated Clustering of Similar Amendments
The Italian Senate is clogged by computer-generated amendments. This notebook aims to cluster similar amendments in an automated fashion, so that the appropriate Senate procedures can be used to get rid of them in one sweep.
We begin as usual with some imports, some Jupyter magic and some useful constants.
Step1: The problem we want to solve is an unsupervised clustering in an unknown number of clusters. The usual algorithm used to solve it is some variation of hierarchical clustering combined with some heuristics to "cut" the resulting dendrogram at a certain height to produce the predicted clusters.
All variations of hierarchical clustering require us to define some distance metric between elements. In our case, elements are free texts, so we use a distance related to Jaccard Similarity on the tokens of the text, where a token is a contiguous string of alphanumeric characters.
Step2: Using the XML data downloaded by the Scrapy spider, we build an array called amendments.
Each element of the array is a dictionary whose structure is exemplified by the following
Step3: To check if the algorithm is working correctly, we restrict ourselves to the first hundred amendments.
Step4: We now compute an hierarchical clustering on these first hundred elements, and we visualize the results as a dendrogram.
Step5: It appears that the algorithm found several clusters, highlighted by different colors. Let's inspect the last one
Step6: We see that, in fact, all amendments of this cluster are variations of a single one.
Let's now try with the second to last cluster
Step7: Again, all amendments in this cluster are variations of a single one. Moreover, they differ from the previous cluster for the addition of the last sentence, which is why the hierarchical clustering algorithm will eventually merge the two clusters.
To double check, let's try with amendments 6 and 97, which are not part of the same cluster
Step8: It appears that, in fact, the text of these two amendments is significantly different.
Finally, let's run the algorithm on all amendments at once. | Python Code:
import os
import re
from itertools import combinations
import xml.etree.ElementTree as ET
from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
%matplotlib inline
DATA_FOLDER = 'data/cirinna'
NAMESPACE = {'an': 'http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03'}
ALPHANUM_REGEX = re.compile(r'\W+', re.UNICODE)  # runs of non-alphanumeric characters, replaced by a space below
Explanation: Automated Clustering of Similar Amendments
The Italian Senate is clogged by computer-generated amendments. This notebook aims to cluster similar amendments in an automated fashion, so that the appropriate Senate procedures can be used to get rid of them in one sweep.
We begin as usual with some imports, some Jupyter magic and some useful constants.
End of explanation
def to_tokens(s):
return set(ALPHANUM_REGEX.sub(' ', s).lower().split())
def jaccard_distance(x, y):
return 1 - (len(x['tokens'] & y['tokens']) / len(x['tokens'] | y['tokens']))
Explanation: The problem we want to solve is an unsupervised clustering in an unknown number of clusters. The usual algorithm used to solve it is some variation of hierarchical clustering combined with some heuristics to "cut" the resulting dendrogram at a certain height to produce the predicted clusters.
All variations of hierarchical clustering require us to define some distance metric between elements. In our case, elements are free texts, so we use a distance related to Jaccard Similarity on the tokens of the text, where a token is a contiguous string of alphanumeric characters.
End of explanation
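As a quick sanity check of the metric, here is what it looks like on a few toy titles (made-up strings, reusing the helpers defined above); near-duplicates land close to 0, unrelated texts close to 1:
a = {'tokens': to_tokens('Sopprimere gli articoli da 1 a 10.')}
b = {'tokens': to_tokens('Sopprimere gli articoli da 1 a 9.')}
c = {'tokens': to_tokens('Completely unrelated amendment text')}
print(jaccard_distance(a, b))  # small: the two titles share almost all tokens
print(jaccard_distance(a, c))  # 1.0: no tokens in common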
amendments = []
for filename in sorted(os.listdir(DATA_FOLDER)):
if filename.startswith('.'):
continue
tree = ET.parse(os.path.join(DATA_FOLDER, filename))
_id = tree.find('.//an:FRBRnumber', NAMESPACE).get('value')
authors = [el.text for el in tree.findall('.//an:docProponent', NAMESPACE)]
raw = ' '.join(tree.find('.//an:amendmentContent', NAMESPACE).itertext())
tokens = to_tokens(raw)
amendments.append({'_id': _id, 'authors': authors, 'raw': raw, 'tokens': tokens})
Explanation: Using the XML data downloaded by the Scrapy spider, we build an array called amendments.
Each element of the array is a dictionary whose structure is exemplified by the following:
python
{
'_id': '1.100',
'authors': ['SACCONI', "D'ASCOLA", 'AIELLO', 'ALBERTINI', ..., 'DI BIAGIO'],
'raw': 'Sopprimere gli articoli da 1 a 10.',
'tokens': set(['1', '10', 'a', 'articoli', 'da', 'gli', 'sopprimere'])
}
End of explanation
first_amendments = amendments[:100]
first_distances = [jaccard_distance(x, y) for x, y in combinations(first_amendments, 2)]
Explanation: To check if the algorithm is working correctly, we restrict ourselves to the first hundred amendments.
End of explanation
Z_first = linkage(first_distances, method='complete')
plt.figure(figsize=(25, 50))
plt.title('Z_first')
dendrogram(
Z_first,
orientation='right',
leaf_font_size=12.,
)
plt.show()
Explanation: We now compute an hierarchical clustering on these first hundred elements, and we visualize the results as a dendrogram.
End of explanation
for i in [77, 72, 68, 64, 60, 56, 52, 48, 92, 89, 84, 80, 96]:
print('{i}: {snippet}'.format(i=i, snippet=first_amendments[i]['raw'][:76]))
Explanation: It appears that the algorithm found several clusters, highlighted by different colors. Let's inspect the last one:
End of explanation
for i in [78, 73, 69, 65, 61, 57, 53, 49, 93, 90, 85, 81]:
print('{i}: {snippet}'.format(i=i, snippet=first_amendments[i]['raw'][:76]))
Explanation: We see that, in fact, all amendments of this cluster are variations of a single one.
Let's now try with the second to last cluster:
End of explanation
for i in [6, 97]:
print('{i}: {snippet}'.format(i=i, snippet=first_amendments[i]['raw'][:76]))
Explanation: Again, all amendments in this cluster are variations of a single one. Moreover, they differ from the previous cluster by the addition of the last sentence, which is why the hierarchical clustering algorithm will eventually merge the two clusters.
To double check, let's try with amendments 6 and 97, which are not part of the same cluster:
End of explanation
distances = [jaccard_distance(x, y) for x, y in combinations(amendments, 2)]
Z_all = linkage(distances, method='complete')
plt.figure(figsize=(25, 10))
plt.title('Z_all')
dendrogram(
Z_all,
no_labels=True,
)
plt.show()
Explanation: It appears that, in fact, the text of these two amendments is significantly different.
Finally, let's run the algorithm on all amendments at once.
End of explanation |
13,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A recursive neural network that decides how many times to run itself
Produces variable-length outputs for static-length inputs.
Step1: The neural network accepts an input vector of length 2. It has 2 output nodes. One node is used to control whether or not to recursively run itself, the other is the real data output. We simply threshold > 0.5 to trigger a recursive call to itself.
Step2: Cost Function
Arbitrarily assign a high cost to mismatches in the length of the output, then also assess MSE
Step3: Genetic Algorithm to Solve Weights | Python Code:
import numpy as np
X = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([[0],[0,0],[0,0,0],[0,0,0,0]], dtype=object)  # ragged targets of different lengths
def sigmoid(x):
return np.matrix(1.0 / (1.0 + np.exp(-x)))
def relu(x):
alpha = 0.01
return np.maximum(x, (alpha * x))
#initialize random weights
numIn, numHid, numOut = 2, 3, 2
theta1 = np.array( 0.5 * np.sqrt ( 6 / ( numIn + numHid) ) * np.random.randn( numIn + 1, numHid ), dtype="float32" )
theta2 = np.array( 0.5 * np.sqrt ( 6 / ( numHid + numOut ) ) * np.random.randn( numHid + 1, numOut ), dtype="float32" )
theta = np.append(theta1.flatten(), theta2.flatten()) #unroll the weight matrices into one long vector
def nn(x, theta):
i = 0
theta1 = np.array(theta[:9]).reshape(3,3)
theta2 = np.array(theta[9:]).reshape(4,2)
#print(theta1.shape)
#print(theta2.shape)
outputs = []
def comp(x):
#print(x)
a1 = np.array(np.concatenate((x.reshape(1,2), np.ones((1,1))), axis=1))
z2 = a1 @ theta1
a2 = np.concatenate((relu(z2), np.ones((1,1))), axis=1)
z3 = a2 @ theta2
a3 = sigmoid(z3)
return a3
a3 = comp(x)
outputs.append(a3[0,1])
while a3[0,0] > 0.5 and i < 3: #prevent an infinite loop; constrain output length
i += 1
input = np.array([[a3[0,1],0]])
a3 = comp(input)
outputs.append(a3[0,1])
return np.array(outputs)
Explanation: A recursive neural network that decides how many times to run itself
Produces variable-length outputs for static-length inputs.
End of explanation
###example output with random initial weights
print( nn(X[0], theta) )
print( nn(X[1], theta) )
print( nn(X[2], theta) )
print( nn(X[3], theta) )
Explanation: The neural network accepts an input vector of length 2. It has 2 output nodes. One node is used to control whether or not to recursively run itself, the other is the real data output. We simply threshold > 0.5 to trigger a recursive call to itself.
End of explanation
def costFunction(X, Y, theta):
cost = 0
for i in range(len(X)):
y = Y[i]
m = float(len(X[i]))
hThetaX = nn(X[i], theta)
if len(y) != len(hThetaX):
cost += 3
else:
cost += (1/m) * np.sum(np.abs(y - hThetaX)**2)
return cost
Explanation: Cost Function
Arbitrarily assign a high cost to mismatches in the length of the output, then also assess MSE
End of explanation
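As a quick check before training, we can evaluate the cost of the random initial weights defined earlier. The exact number varies from run to run, but it should be high, since an untrained network rarely even matches the target lengths:
print(costFunction(X, y, theta))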
import random as rn, numpy as np
# [Initial population size, mutation rate (=1%), num generations (500), solution length (17), # winners per gen (20)]
initPop, mutRate, numGen, solLen, numWin = 100, 0.01, 500, 17, 20
#initialize current population to random values within range
curPop = np.random.choice(np.arange(-15,15,step=0.01),size=(initPop, solLen),replace=False)
nextPop = np.zeros((curPop.shape[0], curPop.shape[1]))
fitVec = np.zeros((initPop, 2)) #1st col is indices, 2nd col is cost
for i in range(numGen): #iterate through num generations
#Create vector of all errors from cost function for each solution
fitVec = np.array([np.array([x, np.sum(costFunction(X, y, curPop[x].T))]) for x in range(initPop)])
#plt.pyplot.scatter(i,np.sum(fitVec[:,1]))
winners = np.zeros((numWin, solLen))
    for n in range(len(winners)): #fill each of the numWin winner slots via tournament selection
        selected = np.random.choice(range(len(fitVec)), numWin // 2, replace=False) #integer sample size
wnr = np.argmin(fitVec[selected,1])
winners[n] = curPop[int(fitVec[selected[wnr]][0])]
nextPop[:len(winners)] = winners #populate new gen with winners
duplicWin = np.zeros((((initPop - len(winners))),winners.shape[1]))
    for x in range(winners.shape[1]): #for each col in winners (a 20 x solLen matrix)
        #Duplicate winners (20 x 17 matrix) 4 times to create an 80 x 17 matrix, then shuffle columns
        numDups = ((initPop - len(winners)) // len(winners)) #integer count of copies needed to fill the rest of nextPop
        duplicWin[:, x] = np.repeat(winners[:, x], numDups, axis=0) #duplicate each col
duplicWin[:, x] = np.random.permutation(duplicWin[:, x]) #shuffle each col ("crossover")
#Populate the rest of the generation with offspring of mating pairs
nextPop[len(winners):] = np.matrix(duplicWin)
#Create a mutation matrix, mostly 1s, but some elements are random numbers from a normal distribution
    mutMatrix = [float(np.random.normal(0, 2)) if rn.random() < mutRate else 1 for x in range(nextPop.size)]
#randomly mutate part of the population by multiplying nextPop by our mutation matrix
nextPop = np.multiply(nextPop, np.matrix(mutMatrix).reshape(nextPop.shape))
curPop = nextPop
best_soln = curPop[np.argmin(fitVec[:,1])]
print("Best Sol'n:\n%s\nCost:%s" % (best_soln,np.sum(costFunction(X, y, best_soln.T))))
#Demonstrate variable output after training
print( np.round(nn(X[0], best_soln.reshape(17,1)), 2) )
print( np.round(nn(X[1], best_soln.reshape(17,1)), 2) )
print( np.round(nn(X[2], best_soln.reshape(17,1)), 2) )
print( np.round(nn(X[3], best_soln.reshape(17,1)), 2) )
Explanation: Genetic Algorithm to Solve Weights:
End of explanation |
13,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding recurrent neural networks
This notebook contains the code samples found in Chapter 6, Section 2 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures
Step1: There is just one minor difference
Step2: It is sometimes useful to stack several recurrent layers one after the other in order to increase the representational power of a network.
In such a setup, you have to get all intermediate layers to return full sequences
Step3: Now let's try to use such a model on the IMDB movie review classification problem. First, let's preprocess the data
Step4: Let's train a simple recurrent network using an Embedding layer and a SimpleRNN layer
Step5: Let's display the training and validation loss and accuracy
Step6: As a reminder, in chapter 3, our very first naive approach to this very dataset got us to 88% test accuracy. Unfortunately, our small
recurrent network doesn't perform very well at all compared to this baseline (only up to 85% validation accuracy). Part of the problem is
that our inputs only consider the first 500 words rather than the full sequences --
hence our RNN has access to less information than our earlier baseline model. The remainder of the problem is simply that SimpleRNN isn't very good at processing long sequences, like text. Other types of recurrent layers perform much better. Let's take a look at some
more advanced layers.
[...]
A concrete LSTM example in Keras
Now let's switch to more practical concerns | Python Code:
from keras.layers import SimpleRNN
Explanation: Understanding recurrent neural networks
This notebook contains the code samples found in Chapter 6, Section 2 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
[...]
A first recurrent layer in Keras
The process we just naively implemented in Numpy corresponds to an actual Keras layer: the SimpleRNN layer:
End of explanation
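For context, the "naive Numpy implementation" referred to above is essentially a loop that feeds each timestep's input together with the previous output (the state) through a tanh; a minimal sketch with arbitrary sizes (not the book's exact listing) looks like this:
import numpy as np

timesteps, input_features, output_features = 100, 32, 64
inputs = np.random.random((timesteps, input_features))  # dummy input data
state_t = np.zeros((output_features,))                  # initial state: all zeros
W = np.random.random((output_features, input_features))
U = np.random.random((output_features, output_features))
b = np.random.random((output_features,))

successive_outputs = []
for input_t in inputs:
    # combine the current input with the previous state, as SimpleRNN does internally
    output_t = np.tanh(np.dot(W, input_t) + np.dot(U, state_t) + b)
    successive_outputs.append(output_t)
    state_t = output_t  # the output becomes the state for the next step

final_output_sequence = np.stack(successive_outputs, axis=0)  # (timesteps, output_features)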
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32))
model.summary()
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.summary()
Explanation: There is just one minor difference: SimpleRNN processes batches of sequences, like all other Keras layers, not just a single sequence like
in our Numpy example. This means that it takes inputs of shape (batch_size, timesteps, input_features), rather than (timesteps,
input_features).
Like all recurrent layers in Keras, SimpleRNN can be run in two different modes: it can return either the full sequences of successive
outputs for each timestep (a 3D tensor of shape (batch_size, timesteps, output_features)), or it can return only the last output for each
input sequence (a 2D tensor of shape (batch_size, output_features)). These two modes are controlled by the return_sequences constructor
argument. Let's take a look at an example:
End of explanation
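One quick way to see the two shapes described above, without reading the summaries, is to compare output_shape for the two variants (a small sketch reusing the layers already imported):
seq_model = Sequential()
seq_model.add(Embedding(10000, 32))
seq_model.add(SimpleRNN(32, return_sequences=True))

last_model = Sequential()
last_model.add(Embedding(10000, 32))
last_model.add(SimpleRNN(32))

print(seq_model.output_shape)   # (None, None, 32): one 32-d output per timestep
print(last_model.output_shape)  # (None, 32): only the last output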
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32)) # This last layer only returns the last outputs.
model.summary()
Explanation: It is sometimes useful to stack several recurrent layers one after the other in order to increase the representational power of a network.
In such a setup, you have to get all intermediate layers to return full sequences:
End of explanation
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
maxlen = 500 # cut texts after this number of words (among top max_features most common words)
batch_size = 32
print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')
print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)
Explanation: Now let's try to use such a model on the IMDB movie review classification problem. First, let's preprocess the data:
End of explanation
from keras.layers import Dense
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(input_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
Explanation: Let's train a simple recurrent network using an Embedding layer and a SimpleRNN layer:
End of explanation
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Explanation: Let's display the training and validation loss and accuracy:
End of explanation
from keras.layers import LSTM
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(input_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Explanation: As a reminder, in chapter 3, our very first naive approach to this very dataset got us to 88% test accuracy. Unfortunately, our small
recurrent network doesn't perform very well at all compared to this baseline (only up to 85% validation accuracy). Part of the problem is
that our inputs only consider the first 500 words rather than the full sequences --
hence our RNN has access to less information than our earlier baseline model. The remainder of the problem is simply that SimpleRNN isn't very good at processing long sequences, like text. Other types of recurrent layers perform much better. Let's take a look at some
more advanced layers.
[...]
A concrete LSTM example in Keras
Now let's switch to more practical concerns: we will set up a model using a LSTM layer and train it on the IMDB data. Here's the network,
similar to the one with SimpleRNN that we just presented. We only specify the output dimensionality of the LSTM layer, and leave every
other argument (there are lots) to the Keras defaults. Keras has good defaults, and things will almost always "just work" without you
having to spend time tuning parameters by hand.
End of explanation |
13,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Classification using TensorFlow and Google Cloud - Part 1
This bigquery-public-data
Step1: Importing libraries
Step2: 1. Source Query
Step3: 2. Raw metadata
Step4: 3. Preprocessing functions
Step5: 4. Beam Pipeline
Step6: 5. Run Pipeline | Python Code:
import os
class Params:
pass
# Set to run on GCP
Params.GCP_PROJECT_ID = 'ksalama-gcp-playground'
Params.REGION = 'europe-west1'
Params.BUCKET = 'ksalama-gcs-cloudml'
Params.PLATFORM = 'local' # local | GCP
Params.DATA_DIR = 'data/news' if Params.PLATFORM == 'local' else 'gs://{}/data/news'.format(Params.BUCKET)
Params.TRANSFORMED_DATA_DIR = os.path.join(Params.DATA_DIR, 'transformed')
Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX = os.path.join(Params.TRANSFORMED_DATA_DIR, 'train')
Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX = os.path.join(Params.TRANSFORMED_DATA_DIR, 'eval')
Params.TEMP_DIR = os.path.join(Params.DATA_DIR, 'tmp')
Params.MODELS_DIR = 'models/news' if Params.PLATFORM == 'local' else 'gs://{}/models/news'.format(Params.BUCKET)
Params.TRANSFORM_ARTEFACTS_DIR = os.path.join(Params.MODELS_DIR,'transform')
Params.TRANSFORM = True
Explanation: Text Classification using TensorFlow and Google Cloud - Part 1
This bigquery-public-data:hacker_news dataset contains all stories and comments from Hacker News since its launch in 2006. Each story contains a story id, url, the title of the story, the author that made the post, when it was written, and the number of points the story received.
The objective is, given the title of a story, to build an ML model that can predict the source of that story.
Data preparation with tf.Transform and DataFlow
This notebook illustrates how to build a Beam pipeline using tf.transform to prepare ML 'train' and 'eval' datasets.
The pipeline includes the following steps:
1. Read data from BigQuery
2. Extract and clean features from BQ rows
3. Use tf.transform to process the text and produce the following features for each entry
* title: Raw text - string
* bow: Bag-of-words indices - sparse vector of integers
* weight: TF.IDF values - sparse vector of floats
* source: target feature - string
4. Save the data as .tfrecord files
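As a side note on step 4, each saved example carries the raw title, the sparse bag-of-words indices with their tf-idf weights, and the source label, so downstream code could parse it back with a feature spec roughly like the sketch below. This is illustrative only; the authoritative schema is the transformed metadata the pipeline writes alongside the data.
import tensorflow as tf
# illustrative feature spec for the transformed examples (assumed types)
feature_spec = {
    'title':  tf.FixedLenFeature([], tf.string),
    'source': tf.FixedLenFeature([], tf.string),
    'bow':    tf.VarLenFeature(tf.int64),
    'weight': tf.VarLenFeature(tf.float32),
}
def parse_transformed_example(serialized_example):
    # serialized_example would come from a reader over the .tfrecords written below
    return tf.parse_single_example(serialized_example, feature_spec)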
Setting Global Parameters
End of explanation
import apache_beam as beam
import tensorflow as tf
import tensorflow_transform as tft
import tensorflow_transform.coders as tft_coders
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow_transform.beam import impl
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.tf_metadata import metadata_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.saved import saved_transform_io
Explanation: Importing libraries
End of explanation
bq_query = '''
SELECT
key,
REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ') AS title,
source
FROM
(
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
title,
ABS(FARM_FINGERPRINT(title)) AS Key
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
)
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
'''
def get_source_query(step):
if step == 'train':
source_query = 'SELECT * FROM ({}) WHERE MOD(key,100) <= 75'.format(bq_query)
else:
source_query = 'SELECT * FROM ({}) WHERE MOD(key,100) > 75'.format(bq_query)
return source_query
Explanation: 1. Source Query
End of explanation
RAW_HEADER = 'key,title,source'.split(',')
RAW_DEFAULTS = [['NA'],['NA'],['NA']]
TARGET_FEATURE_NAME = 'source'
TARGET_LABELS = ['github', 'nytimes', 'techcrunch']
TEXT_FEATURE_NAME = 'title'
KEY_COLUMN = 'key'
VOCAB_SIZE = 20000
TRAIN_SIZE = 73124
EVAL_SIZE = 23079
DELIMITERS = '.,!?() '
raw_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema({
KEY_COLUMN: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
TEXT_FEATURE_NAME: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
TARGET_FEATURE_NAME: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()),
}))
Explanation: 2. Raw metadata
End of explanation
def get_features(bq_row):
CSV_HEADER = 'key,title,source'.split(',')
input_features = {}
for feature_name in CSV_HEADER:
input_features[feature_name] = str(bq_row[feature_name]).lower()
return input_features
def preprocessing_fn(input_features):
text = input_features[TEXT_FEATURE_NAME]
text_tokens = tf.string_split(text, DELIMITERS)
text_tokens_indcies = tft.string_to_int(text_tokens, top_k=VOCAB_SIZE)
bag_of_words_indices, text_weight = tft.tfidf(text_tokens_indcies, VOCAB_SIZE + 1)
output_features = {}
output_features[TEXT_FEATURE_NAME] = input_features[TEXT_FEATURE_NAME]
output_features['bow'] = bag_of_words_indices
output_features['weight'] = text_weight
output_features[TARGET_FEATURE_NAME] = input_features[TARGET_FEATURE_NAME]
return output_features
Explanation: 3. Preprocessing functions
End of explanation
import apache_beam as beam
def run_pipeline(runner, opts):
print("Sink train data files: {}".format(Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX))
print("Sink data files: {}".format(Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX))
print("Temporary directory: {}".format(Params.TEMP_DIR))
print("")
with beam.Pipeline(runner, options=opts) as pipeline:
with impl.Context(Params.TEMP_DIR):
###### analyze & transform train #########################################################
if(runner=='DirectRunner'):
print("")
print("Transform training data....")
print("")
step = 'train'
source_query = get_source_query(step)
# Read raw train data from BQ and cleanup
raw_train_data = (
pipeline
| '{} - Read Data from BigQuery'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=source_query, use_standard_sql=True))
| '{} - Extract Features'.format(step) >> beam.Map(get_features)
)
# create a train dataset from the data and schema
raw_train_dataset = (raw_train_data, raw_metadata)
# analyze and transform raw_train_dataset to produced transformed_train_dataset and transform_fn
transformed_train_dataset, transform_fn = (
raw_train_dataset
| '{} - Analyze & Transform'.format(step) >> impl.AnalyzeAndTransformDataset(preprocessing_fn)
)
# get data and schema separately from the transformed_train_dataset
transformed_train_data, transformed_metadata = transformed_train_dataset
# write transformed train data to sink
_ = (
transformed_train_data
| '{} - Write Transformed Data as tfrecords'.format(step) >> beam.io.tfrecordio.WriteToTFRecord(
file_path_prefix=Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX,
file_name_suffix=".tfrecords",
num_shards=25,
coder=tft_coders.example_proto_coder.ExampleProtoCoder(transformed_metadata.schema))
)
# #### TEST write transformed AS TEXT train data to sink
# _ = (
# transformed_train_data
# | '{} - Write Transformed Data as Text'.format(step) >> beam.io.textio.WriteToText(
# file_path_prefix=Params.TRANSFORMED_TRAIN_DATA_FILE_PREFIX,
# file_name_suffix=".csv")
# )
# ##################################################
###### transform eval ##################################################################
if(runner=='DirectRunner'):
print("")
print("Transform eval data....")
print("")
step = 'eval'
source_query = get_source_query(step)
# Read raw eval data from BQ and cleanup
raw_eval_data = (
pipeline
| '{} - Read Data from BigQuery'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=source_query, use_standard_sql=True))
| '{} - Extract Features'.format(step) >> beam.Map(get_features)
)
# create a eval dataset from the data and schema
raw_eval_dataset = (raw_eval_data, raw_metadata)
# transform eval data based on produced transform_fn (from analyzing train_data)
transformed_eval_dataset = (
(raw_eval_dataset, transform_fn)
| '{} - Transform'.format(step) >> impl.TransformDataset()
)
# get data from the transformed_eval_dataset
transformed_eval_data, _ = transformed_eval_dataset
# write transformed eval data to sink
_ = (
transformed_eval_data
| '{} - Write Transformed Data'.format(step) >> beam.io.tfrecordio.WriteToTFRecord(
file_path_prefix=Params.TRANSFORMED_EVAL_DATA_FILE_PREFIX,
file_name_suffix=".tfrecords",
num_shards=10,
coder=tft_coders.example_proto_coder.ExampleProtoCoder(transformed_metadata.schema))
)
###### write transformation metadata #######################################################
if(runner=='DirectRunner'):
print("")
print("Saving transformation artefacts ....")
print("")
# write transform_fn as tf.graph
_ = (
transform_fn
| 'Write Transform Artefacts' >> transform_fn_io.WriteTransformFn(Params.TRANSFORM_ARTEFACTS_DIR)
)
if runner=='DataflowRunner':
pipeline.run()
Explanation: 4. Beam Pipeline
End of explanation
from datetime import datetime
import shutil
job_name = 'preprocess-hackernews-data' + '-' + datetime.utcnow().strftime('%y%m%d-%H%M%S')
options = {
'region': Params.REGION,
'staging_location': os.path.join(Params.TEMP_DIR, 'staging'),
'temp_location': Params.TEMP_DIR,
'job_name': job_name,
'project': Params.GCP_PROJECT_ID
}
tf.logging.set_verbosity(tf.logging.ERROR)
opts = beam.pipeline.PipelineOptions(flags=[], **options)
runner = 'DirectRunner' if Params.PLATFORM == 'local' else 'DataflowRunner'
if Params.TRANSFORM:
if Params.PLATFORM == 'local':
shutil.rmtree(Params.TRANSFORMED_DATA_DIR, ignore_errors=True)
shutil.rmtree(Params.TRANSFORM_ARTEFACTS_DIR, ignore_errors=True)
shutil.rmtree(Params.TEMP_DIR, ignore_errors=True)
    print('Launching {} job {} ... hang on'.format(runner, job_name))
    run_pipeline(runner, opts)
    print("Pipeline completed.")
else:
    print("Transformation skipped!")
%%bash
echo "** transformed data:"
ls data/news/transformed
echo ""
echo "** transform artefacts:"
ls models/news/transform
echo ""
echo "** transform assets:"
ls models/news/transform/transform_fn/assets
echo ""
head models/news/transform/transform_fn/assets/vocab_string_to_int_uniques
Explanation: 5. Run Pipeline
End of explanation |
13,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unterricht zur Kammerprüfung
Step1: Sommer_2014
Step2: Frage 1
Erstellen Sie eine SQL-Abfrage, die alle Artikel auflistet, deren Artikelbezeichnungen die Zeichenketten "Schmerzmittel" oder "schmerzmittel" enthalten. Zu jedem Artikel sollen jeweils alle Attribute ausgeben werden.
Lösung
Step3: Frage 2
Erstellen Sie eine Abfrage, die alle Kunden und deren Umsätze auflistet. Zu jedem Kunden aollen alle Attribute ausgegeben werden. Die Liste soll nach Umsatz absteigend sortiert werden.
Lösung
Step4: Frage 3
Erstellen Sie eine SQL-Abfrage, die für jeden Artikel Folgendes ermittelt
Step5: Frage 4
Deutschland ist in 10 Postleitzahlregionen (0-9, 1. Stelle der PLZ) eingeteilt.
Erstellen Sie eine SQl-Abfrage für eine Liste, die für jede PLZ-Region (0-9) den Gesamtumsatz aufweist.
Die Liste soll nach Gesamtumsatz absteigend sortiert werden.
Lösung
Step6: Heiko Mader
O-Ton
Step7: Aufgabe 3
Step8: Aufgabe 4
Original von H.M ergibt fehler
Step9: Leichte Änderungen führen zu einem "fast richtigen" Ergebnis
er multipliziert dabei aber nur den jeweils ersten Datensatz aus der Rechnungsposition-Tabelle (siehe 2527,2) für PLZ 9
das wird auch bei der Aufgabe 3 ein möglicher fehler sein, der fällt aber da nicht evtl. auf ??? | Python Code:
%load_ext sql
Explanation: Lessons for the chamber examination (Kammerprüfung)
End of explanation
%sql mysql://steinam:steinam@localhost/sommer_2014
Explanation: Sommer_2014
End of explanation
%%sql
select * from artikel
where Art_Bezeichnung like '%Schmerzmittel%' or
Art_Bezeichnung like '%schmerzmittel%';
Explanation: Question 1
Create an SQL query that lists all articles whose article descriptions contain the strings "Schmerzmittel" or "schmerzmittel". For each article, all attributes should be output.
Solution
End of explanation
%%sql
select k.Kd_firma, sum(rp.RgPos_Menge * rp.RgPos_Preis) as Umsatz
from Kunde k left join Rechnung r
on k.Kd_Id = r.Rg_Kd_ID
inner join Rechnungsposition rp
on r.Rg_ID = rp.RgPos_RgID
group by k.`Kd_Firma`
order by Umsatz desc;
%%sql
-- the original solution gives the same result
select k.`Kd_Firma`,
(select sum(RgPos_menge * RgPos_Preis)
from `rechnungsposition` rp, rechnung r
where r.`Rg_ID` = `rp`.`RgPos_RgID` and r.`Rg_Kd_ID` = k.`Kd_ID`) as Umsatz
from kunde k order by Umsatz desc
Explanation: Question 2
Create a query that lists all customers and their revenues. For each customer, all attributes should be output. The list should be sorted by revenue in descending order.
Solution
End of explanation
%%sql
-- my solution
select artikel.*, sum(RgPos_Menge) as Menge, count(RgPos_ID) as Anzahl
from artikel inner join `rechnungsposition`
where `rechnungsposition`.`RgPos_ArtID` = `artikel`.`Art_ID`
group by artikel.`Art_ID`
%%sql
-- instructor's solution
select artikel.* ,
(select sum(RgPOS_Menge) from Rechnungsposition rp
where rp.RgPos_ArtID = artikel.Art_ID) as Menge,
(select count(RgPOS_menge) from Rechnungsposition rp
where rp.RgPos_ArtID = artikel.Art_ID) as Anzahl
from Artikel
Explanation: Question 3
Create an SQL query that determines the following for each article:
- The total quantity sold
- The number of invoice line items
Solution
End of explanation
%%sql
-- Original
select left(kunde.`Kd_PLZ`,1) as Region,
sum(`rechnungsposition`.`RgPos_Menge` * `rechnungsposition`.`RgPos_Preis`) as Summe
from kunde left join rechnung
on kunde.`Kd_ID` = rechnung.`Rg_Kd_ID`
left join rechnungsposition
on `rechnung`.`Rg_ID` = `rechnungsposition`.`RgPos_RgID`
group by Region
order by Summe;
%%sql
-- Inner join changes nothing
select left(kunde.`Kd_PLZ`,1) as Region,
sum(`rechnungsposition`.`RgPos_Menge` * `rechnungsposition`.`RgPos_Preis`) as Summe
from kunde inner join rechnung
on kunde.`Kd_ID` = rechnung.`Rg_Kd_ID`
inner join rechnungsposition
on `rechnung`.`Rg_ID` = `rechnungsposition`.`RgPos_RgID`
group by Region
order by Summe;
Explanation: Question 4
Germany is divided into 10 postal code regions (0-9, the first digit of the postal code).
Create an SQL query for a list that shows the total revenue for each postal code region (0-9).
The list should be sorted by total revenue in descending order.
Solution
End of explanation
%%sql
select kunde.*, umsatz from kunde
inner join (
select (RgPos_menge * RgPos_Preis) as Umsatz, kd_id
from `rechnungsposition`
inner join rechnung on `rechnungsposition`.`RgPos_ID` = `rechnung`.`Rg_ID`
inner join kunde on `rechnung`.`Rg_Kd_ID` = Kunde.`Kd_ID`
group by `Kd_ID`
) a
on Kunde.`Kd_ID` = a.Kd_ID
order by umsatz desc;
Explanation: Heiko Mader
Verbatim: "I think it is correct :-)"
Task 2
The syntax runs, but the result is wrong
End of explanation
%%sql
select a.*, mengeGesamt,anzahlRechPos
from artikel a
Inner join (
select SUM(RgPos_menge) as mengeGesamt, art_id
from `rechnungsposition` inner join artikel
on `rechnungsposition`.`RgPos_ArtID` = artikel.`Art_ID`
group by art_id
) b on a.`Art_ID` = b.art_id
Inner join
(select count(*) as anzahlRechPos, art_id
from `rechnungsposition` inner join artikel
on `rechnungsposition`.`RgPos_ArtID` = artikel.`Art_ID`
group by art_id
) c on a.`Art_ID` = c.art_id
Explanation: Task 3
End of explanation
%%sql
select gebiet, umsatz from `kunde`
inner join (
select kd_plz as gebiet, kd_id from `kunde`
where kd_plz in
(0%,1%,2%,3%,4%,5%,6%,7%,8%,9%)
group by kd_id
) a on kunde.`Kd_ID` = b.kd_id
inner join (
select rgPos_Menge * rgPos_Preis as Umsatz2, kd_id
from `rechnungsposition` inner join
rechnung on `rechnungsposition`.`RgPos_RgID` = rechnung.`Rg_ID`
inner join kunde on `rechnung`.`Rg_Kd_ID` = kunde.`Kd_ID`
group by kd_id
) b on `kunde`.`Kd_ID` = b.kd_id
order by umsatz desc;
Explanation: Task 4
H.M.'s original version produces an error
End of explanation
%%sql
select gebiet, umsatz from `kunde`
inner join (
select kd_plz as gebiet, kd_id from `kunde`
where left(kd_plz,1) in
(0,1,2,3,4,5,6,7,8,9)
group by kd_id
) a on kunde.`Kd_ID` = a.kd_id
inner join (
select sum(rgPos_Menge * rgPos_Preis) as Umsatz, kd_id
from `rechnungsposition` inner join
rechnung on `rechnungsposition`.`RgPos_RgID` = rechnung.`Rg_ID`
inner join kunde on `rechnung`.`Rg_Kd_ID` = kunde.`Kd_ID`
group by kd_id
) b on `kunde`.`Kd_ID` = b.kd_id
order by umsatz desc;
Explanation: Small changes lead to an "almost correct" result.
However, it only multiplies the first record per group from the invoice line item (Rechnungsposition) table (see 2527.2 for postal region 9).
The same mistake is probably lurking in Task 3 as well; it just may not be noticed there.
End of explanation |
13,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Turning transaction data into dimensional data
Let's grab a fairly large dataset, load it into a database, and create a simple dimensional model with one fact table and one dimension from it.
Getting your data
Let's return to the Capital Bikeshare trip data. This time, though, we mean it - we'll get a whole year's worth of data.
Step1: These are in the zip format, so unzip
Step2: How big are they?
Step3: Yum, 2.5M records, this should be fun.
Do they all have the same columns?
Step4: Nope! Let's cut out the durations and rearrange each (especially Q4) to match columns.
Step5: And verify those counts
Step6: Looks right, so now let's stack them up into one file.
Step7: If that worked correctly, our count should go down by three header lines.
Step8: That also looks right! Now let's just check to make sure things like station names and subscriber status look consistent.
Step9: Not bad... though there's at least one issue in the station names that indicates further cleanup is needed. Do you see it?
Let's ignore it for now, but with real projects, you can't just skip details like this if you want reliable results. This is why "80% of the work is in data wrangling."
Prepping and loading data into the database
Alright, then, let's get loading.
Step10: NOTE
Step11: And connect to it
Step12: First, clean up if we're not running this for the first time.
Step13: Next, create a table schema using DDL. You can do a little sampling on your data to get domain and range information for use here, like here, where we take roughly a 10% sample to get a max length for the bike_id column.
We take a sample because it's faster. The risk of sampling is that if there's one bad record, there's only a 1 in 10 chance that a 10% sample will spot it.
If you need to know for sure, don't sample. But it'll cost you time.
Step14: Just to verify it worked
Step15: It worked! We just don't have any data in there yet.
Now we load the data using LOAD DATA INFILE. You can do pretty much the same thing from the bash shell using mysqlimport and a bunch of options. It'll read better here in the notebook with the options spelled out.
Docs for LOAD DATA INFILE are available at https
Step16: Note
Step17: Okay, a good start. Let's define the dimension table.
Step18: Note
Step19: Wait, 367 days, how's that?
Step20: Oh! So there must be rides that ended one and two days after the new year.
Looks like our dimension is all together, let's have a look at a sampling of values.
Step21: Building out the Ride fact table
Now we follow a similar process to generate the fact table. In this case, though, we'll be careful to use the newly generated day_key values.
Step22: Let's start by punting on the date lookups (though you could fit it in if you wanted!), as there's a lot of other stuff to pull out at first. We'll just set the date_key values to 1 and 2 for now and go back to update them later.
Step23: Okay then, let's go back and fix those dates.
Step24: Oof, that was sloooow. Maybe an index would help before the next one.
Step25: So far so good. Did the updated day_key references come out right?
Step26: Looks right! Weird, but right. If you think about it, it makes sense that a handful of rides start on one day and finish on the next, so most rides that start on day 1 should end on day 2 (presuming the days are in key order). But look at day_key 6
Step27: Exploring the data dimensionally
Explore the new tables, taking advantage of the structure you've set up. What queries are now very easy that were more complicated before?
Here's an example of looking at average ride length by day. You could do this query on our original table, but it would require a bunch of date functions mixed in with the query logic. Now we've already done all that, so the query is cleaner and simpler to write.
Step28: How do rides vary by length on weekends?
Step29: And how does that compare to weekdays? | Python Code:
!wget https://www.capitalbikeshare.com/assets/files/trip-history-data/2013-Q1-Trips-History-Data.zip
!wget https://www.capitalbikeshare.com/assets/files/trip-history-data/2013-Q2-Trips-History-Data.zip
!wget https://www.capitalbikeshare.com/assets/files/trip-history-data/2013-Q3-Trips-History-Data.zip
!wget https://www.capitalbikeshare.com/assets/files/trip-history-data/2013-4th-quarter.zip
Explanation: Turning transaction data into dimensional data
Let's grab a fairly large dataset, load it into a database, and create a simple dimensional model with one fact table and one dimension from it.
Getting your data
Let's return to the Capital Bikeshare trip data. This time, though, we mean it - we'll get a whole year's worth of data.
End of explanation
!for f in 2013-*.zip; do unzip $f; done
Explanation: These are in the zip format, so unzip:
End of explanation
!wc 2013-Q*.csv
Explanation: How big are they?
End of explanation
!for f in 2013-Q*.csv; do echo $f; csvcut -n $f; done
Explanation: Yum, 2.5M records, this should be fun.
Do they all have the same columns?
End of explanation
!csvcut -C1 2013-Q1-Trips-History-Data.csv | \
header -r "start_date,end_date,start_station,end_station,bike_id,sub_type" \
> bikeshare-q1.csv
!csvcut -C1 2013-Q2-Trips-History-Data.csv | \
header -r "start_date,end_date,start_station,end_station,bike_id,sub_type" \
> bikeshare-q2.csv
!csvcut -C1 2013-Q3-Trips-History-Data.csv | \
header -r "start_date,end_date,start_station,end_station,bike_id,sub_type" \
> bikeshare-q3.csv
!csvcut -c2,4,3,5,6,7 2013-Q4-Trips-History-Data2.csv | \
header -r "start_date,end_date,start_station,end_station,bike_id,sub_type" \
> bikeshare-q4.csv
Explanation: Nope! Let's cut out the durations and rearrange each (especially Q4) to match columns.
End of explanation
!wc bikeshare-q*.csv
Explanation: And verify those counts:
End of explanation
!csvstack bikeshare-q*.csv > bikeshare-2013.csv
Explanation: Looks right, so now let's stack them up into one file.
End of explanation
!wc bikeshare-2013.csv
Explanation: If that worked correctly, our count should go down by three header lines.
End of explanation
!shuf -n 1000 bikeshare-2013.csv | csvcut -c3 | sort | uniq | head -25
!shuf -n 1000 bikeshare-2013.csv | csvcut -c6 | sort | uniq -c
Explanation: That also looks right! Now let's just check to make sure things like station names and subscriber status look consistent.
End of explanation
%load_ext sql
Explanation: Not bad... though there's at least one issue in the station names that indicates further cleanup is needed. Do you see it?
Let's ignore it for now, but with real projects, you can't just skip details like this if you want reliable results. This is why "80% of the work is in data wrangling."
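If you wanted to chase it down, one quick way is to normalize the names and look for spellings that collapse to the same value; a rough sketch (assuming the combined bikeshare-2013.csv built above):
import csv
from collections import defaultdict

variants = defaultdict(set)
with open('bikeshare-2013.csv') as f:
    for row in csv.DictReader(f):
        for name in (row['start_station'], row['end_station']):
            variants[' '.join(name.lower().split())].add(name)

# any normalized name with more than one raw spelling is a cleanup candidate
for key, names in variants.items():
    if len(names) > 1:
        print(key, names)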
Prepping and loading data into the database
Alright, then, let's get loading.
End of explanation
!echo "DROP DATABASE bikedb; CREATE DATABASE bikedb" | mysql --user=mysqluser --password=mysqlpass
Explanation: NOTE: See a bunch of ShimWarnings with a pink background? That's normal. It's just a heads-up about ongoing changes to IPython/Jupyter code. You can keep going.
First, we create a database in mysql. Note: you can do the same thing on the command line by issuing the CREATE DATABASE command part before the pipe within the mysql shell, which you get to with the second part after the pipe. Here we'll pipe the one into the other so it reads well in the notebook.
End of explanation
%sql mysql://mysqluser:mysqlpass@localhost/bikedb
Explanation: And connect to it:
End of explanation
%%sql
DROP TABLE IF EXISTS bikeshare;
Explanation: First, clean up if we're not running this for the first time.
End of explanation
!shuf -n 250000 bikeshare-2013.csv | csvcut -c5 | csvstat
%%sql
CREATE TABLE bikeshare (
start_date DATETIME,
end_date DATETIME,
start_station VARCHAR(100),
end_station VARCHAR(100),
bike_id CHAR(7),
sub_type CHAR(10)
)
Explanation: Next, create a table schema using DDL. You can do a little sampling on your data to get domain and range information for use here, like here, where we take roughly a 10% sample to get a max length for the bike_id column.
We take a sample because it's faster. The risk of sampling is that if there's one bad record, there's only a 1 in 10 chance that a 10% sample will spot it.
If you need to know for sure, don't sample. But it'll cost you time.
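To put a rough number on that risk: with a 10% sample, the chance of missing every one of k bad records is about 0.9**k (ignoring without-replacement effects), so a handful of bad rows can easily slip through:
for k in (1, 5, 10, 50):
    print(k, "bad records -> probability the sample misses all of them ~", round(0.9 ** k, 3))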
End of explanation
%%sql
SELECT COUNT(*)
FROM bikeshare
Explanation: Just to verify it worked:
End of explanation
!cp bikeshare-2013.csv /vagrant/bikeshare.csv
%%sql
LOAD DATA INFILE '/vagrant/bikeshare.csv'
REPLACE
INTO TABLE bikeshare
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
(@start_date, @end_date, start_station, end_station, bike_id, sub_type)
SET start_date = STR_TO_DATE(@start_date, '%c/%e/%Y %k:%i'),
end_date = STR_TO_DATE(@end_date, '%c/%e/%Y %k:%i')
Explanation: It worked! We just don't have any data in there yet.
Now we load the data using LOAD DATA INFILE. You can do pretty much the same thing from the bash shell using mysqlimport and a bunch of options. It'll read better here in the notebook with the options spelled out.
Docs for LOAD DATA INFILE are available at https://dev.mysql.com/doc/refman/5.1/en/load-data.html.
Note: this assumes you've placed your bikeshare file in the directory /vagrant.
Note also: I had to look up the mysql date formatting docs to get this date format conversion correct. It took me a few trials and errors before I got it right. This is an extremely common thing to have to do if you ever spend time wrangling data - every system handles dates in its own way.
End of explanation
%%sql
SELECT DISTINCT date
FROM
(SELECT DATE(start_date) AS date FROM bikeshare
UNION
SELECT DATE(end_date) AS date FROM bikeshare) b
ORDER BY date
LIMIT 5
Explanation: Note: if the above command fails for you with a "file not found" error, please read these notes about apparmor. Follow that advice, and add a line like it shows, e.g.:
/vagrant/* r
...to the file, or whatever path you have your data on, reload apparmor, and try again. I had to do this, and it worked perfectly after I made that change.
Remember what we said before about sampling value ranges? It looks like there are a few bad values in the bike_id column; again, we'll ignore them for now, but for a real project you'd want to be sure the issue was either just a few bad records or something you could correct for in your table design.
Facts and dimensions
Think through what we might want to measure here, and what context we might want to measure it within. Design two new tables: a fact table and a dimension, and migrate the data from this base table into them both.
There is a good example we can follow on page 44 of Star Schema, with an Orders fact table and a Day dimension. Let's do something similar, using a Rides fact table and a Day dimension. We could normalize out a Station dimension as we did before, and add details like lat/lon, zip code, census block/group, and neighborhood, but we'll hold off on that. If we had more customer information, we'd probably have a Customer dimension as well, and we can assume the bike_id might be a key to a Bike dimension. (Wouldn't it be fun if the bikes all had names, and colors, or some such, and the bikes themselves might be interesting to study? Oh well, we only have the ids, so let's do no more for that for now.)
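For the record, a Station dimension of that sort might look something like the sketch below; the lat/lon, zip, and neighborhood columns are hypothetical enrichments, since none of that is in this file, and we won't actually create the table here.
%%sql
-- illustrative only; not created in this notebook
CREATE TABLE station_dim (
    station_key INT NOT NULL AUTO_INCREMENT,
    station_name VARCHAR(100),
    latitude DECIMAL(9,6),
    longitude DECIMAL(9,6),
    zip_code CHAR(5),
    neighborhood VARCHAR(100),
    PRIMARY KEY (station_key)
)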
Looking at the Rides fact table, we'll want to capture several surrogate keys and additional values, such as:
day_key_start
time_start
hour_start
day_key_stop
time_stop
hour_stop
duration_minutes
station_start (we'd use a key if we made Station a dimension)
station_stop (same here)
sub_type
bike_id (a degenerate dimension here, but could also be a key to a Bike dimension)
And this implies a Day dimension that should include attributes like these (taken largely from p. 44):
day_key
full_date
day_of_week_number
day_of_week_name
day_of_week_abbr
day_of_month
holiday_flag
weekday_flag
weekend_flag
month_number
month_name
month_abbr
quarter
quarter_month
year
year_month
year_quarter
Note that the Rides fact table will need to reference the Day dimension by foreign (surrogate) key, so we'll create Day first.
Building out the Day dimension
Let's start by building up a full_date column, then define and add the key values query bits for the full table.
(Pop quiz: why take a union of the two columns?)
End of explanation
%%sql
DROP TABLE IF EXISTS day_dim;
CREATE TABLE day_dim (
day_key INT NOT NULL AUTO_INCREMENT,
full_date DATE,
day_of_week_number SMALLINT(1),
day_of_week_name CHAR(9),
day_of_week_abbr CHAR(3),
day_of_month SMALLINT(1),
holiday_flag BOOLEAN,
weekday_flag BOOLEAN,
weekend_flag BOOLEAN,
month_number SMALLINT(2),
month_name CHAR(9),
month_abbr CHAR(3),
quarter SMALLINT(1),
year YEAR,
PRIMARY KEY (day_key)
)
Explanation: Okay, a good start. Let's define the dimension table.
End of explanation
%%sql
DELETE FROM day_dim;
INSERT INTO day_dim (full_date,
day_of_week_number, day_of_week_name, day_of_week_abbr,
day_of_month, holiday_flag, weekday_flag, weekend_flag,
month_number, month_name, month_abbr,
quarter, year)
SELECT DISTINCT date,
DAYOFWEEK(date), DAYNAME(date), DATE_FORMAT(date, "%a"),
DAYOFMONTH(date), 0, WEEKDAY(date) <= 4, WEEKDAY(date) > 4,
MONTH(date), MONTHNAME(date), DATE_FORMAT(date, "%b"),
QUARTER(date), YEAR(DATE)
FROM
(SELECT DATE(start_date) AS date FROM bikeshare
UNION
SELECT DATE(end_date) AS date FROM bikeshare) b
ORDER BY date
Explanation: Note: for some reason, year_month CHAR(6) caused an error I can't figure out.
Okay, let's start loading that up with a query on our source table. We'll have to reach into the MySQL manual for some details here.
End of explanation
%%sql
SELECT COUNT(*) FROM day_dim
%%sql
SELECT MIN(full_date), MAX(full_date) FROM day_dim
Explanation: Wait, 367 days, how's that?
End of explanation
%%sql
SELECT *
FROM day_dim
ORDER BY RAND()
LIMIT 20
Explanation: Oh! So there must be rides that ended one and two days after the new year.
Looks like our dimension is all together, let's have a look at a sampling of values.
End of explanation
%%sql
DROP TABLE IF EXISTS ride_fact;
CREATE TABLE ride_fact (
id INT NOT NULL AUTO_INCREMENT,
full_date_start DATE,
day_key_start INT,
time_start TIME,
hour_start SMALLINT(2),
full_date_stop DATE,
day_key_stop INT,
time_stop TIME,
hour_stop SMALLINT(2),
duration_minutes INT,
station_start VARCHAR(100),
station_stop VARCHAR(100),
sub_type CHAR(10),
bike_id CHAR(7),
PRIMARY KEY (id)
)
Explanation: Building out the Ride fact table
Now we follow a similar process to generate the fact table. In this case, though, we'll be careful to use the newly generated day_key values.
End of explanation
%%sql
DELETE FROM ride_fact;
INSERT INTO ride_fact (full_date_start, day_key_start, time_start, hour_start,
full_date_stop, day_key_stop, time_stop, hour_stop,
duration_minutes,
station_start, station_stop,
sub_type, bike_id)
SELECT DATE(start_date), 1, TIME(start_date), HOUR(start_date),
DATE(end_date), 2, TIME(end_date), HOUR(end_date),
TIMESTAMPDIFF(MINUTE, start_date, end_date),
start_station, end_station,
sub_type, bike_id
FROM bikeshare
Explanation: Let's start by punting on the date lookups (though you could fit it in if you wanted!), as there's a lot of other stuff to pull out at first. We'll just set the date_key values to 1 and 2 for now and go back to update them later.
End of explanation
%%sql
UPDATE ride_fact
INNER JOIN day_dim
ON ride_fact.full_date_start = day_dim.full_date
SET ride_fact.day_key_start = day_dim.day_key
Explanation: Okay then, let's go back and fix those dates.
End of explanation
%%sql
CREATE INDEX idx_full_date_stop
ON ride_fact (full_date_stop)
%%sql
UPDATE ride_fact
INNER JOIN day_dim
ON ride_fact.full_date_stop = day_dim.full_date
SET ride_fact.day_key_stop = day_dim.day_key
Explanation: Oof, that was sloooow. Maybe an index would help before the next one.
End of explanation
%%sql
SELECT day_key_start, day_key_stop, COUNT(*)
FROM ride_fact
GROUP BY day_key_start, day_key_stop
ORDER BY day_key_start, day_key_stop
LIMIT 20
Explanation: So far so good. Did the updated day_key references come out right?
End of explanation
%%sql
CREATE INDEX idx_full_date_start
ON ride_fact (full_date_start)
Explanation: Looks right! Weird, but right. If you think about it, it makes sense that a handful of rides start on one day and finish on the next, so most rides that start on day 1 should end on day 2 (presuming the days are in key order). But look at day_key 6: one ride was returned two days later, and another was returned 23 days later! Maybe it was a zombie, they probably ride slowly.
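Those stragglers are easy to pull straight out of the fact table; a quick check like this (illustrative) lists the longest rides:
%%sql
SELECT start_station, end_station, duration_minutes
FROM ride_fact
ORDER BY duration_minutes DESC
LIMIT 10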
Let's go back and create that other index before we forget.
End of explanation
%%sql
SELECT AVG(duration_minutes), day_of_week_name
FROM ride_fact, day_dim
WHERE day_dim.day_key = ride_fact.day_key_start
GROUP BY day_of_week_name
ORDER BY day_of_week_number
%matplotlib inline
result = _
result.bar()
Explanation: Exploring the data dimensionally
Explore the new tables, taking advantage of the structure you've set up. What queries are now very easy that were more complicated before?
Here's an example of looking at average ride length by day. You could do this query on our original table, but it would require a bunch of date functions mixed in with the query logic. Now we've already done all that, so the query is cleaner and simpler to write.
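For comparison, the same question against the original flat table would look roughly like this, with the date logic tangled into the query (a sketch):
%%sql
SELECT DAYNAME(start_date), AVG(TIMESTAMPDIFF(MINUTE, start_date, end_date))
FROM bikeshare
GROUP BY DAYNAME(start_date), DAYOFWEEK(start_date)
ORDER BY DAYOFWEEK(start_date)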
End of explanation
%%sql
SELECT hour_start, AVG(duration_minutes)
FROM ride_fact, day_dim
WHERE weekend_flag = 1
AND day_dim.day_key = ride_fact.day_key_start
GROUP BY hour_start
ORDER BY hour_start
_.bar()
Explanation: How do rides vary by length on weekends?
End of explanation
%%sql
SELECT hour_start, AVG(duration_minutes)
FROM ride_fact, day_dim
WHERE weekday_flag = 1
AND day_dim.day_key = ride_fact.day_key_start
GROUP BY hour_start
ORDER BY hour_start
_.bar()
Explanation: And how does that compare to weekdays?
End of explanation |
13,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: 3. Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Step 2
Step4: 5. Show a sample of the augmented dataset
Step6: 6. Pre-process functions
Step7: 7. Show a sample of the preprocess functions outputs
Step8: 8. Preprocess the Dataset
Step9: 9. Model Architecture
| Layer | Description | Input | Output |
|
Step10: 10. Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Step11: 11. Features and Labels
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
Step12: 12. Training Pipeline
Create a training pipeline that uses the model to classify German Traffic Sign Benchmarks data.
Step13: 13. Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
Step14: 14. Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
Step15: 15. Evaluate accuracy of the different data sets
Step16: Step 3
Step17: 17. Predict the Sign Type for Each Image
Step18: 18. Analyze Performance
Step19: 19. Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability
Step20: Step 4 | Python Code:
# Load pickled data
import pickle
from keras.datasets import cifar10
from sklearn.model_selection import train_test_split
(X_train_temp, y_train_temp), (X_test, y_test) = cifar10.load_data()
# y_train.shape is 2d, (50000, 1). While Keras is smart enough to handle this
# it's a good idea to flatten the array.
y_train_temp = y_train_temp.reshape(-1)
y_test = y_test.reshape(-1)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_temp, y_train_temp, test_size=0.33, random_state=0)
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))
print("Loading done!")
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
1. Load The CIFAR10 Data
End of explanation
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# Number of training examples
n_train = len(X_train)
# Number of testing examples.
n_test = len(X_test)
# Number of validation examples
n_valid = len(X_valid)
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels are there in the dataset?
n_classes = np.unique(y_train).size
print("Number of training examples =", n_train)
print("Number of validation examples =", n_valid)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
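For reference, loading one of those pickled files and pulling out these keys would look roughly like this (a sketch; the file name is an assumption, and this notebook actually loads CIFAR-10 via Keras instead):
import pickle

with open('train.p', 'rb') as f:   # hypothetical path to a pickled split
    data = pickle.load(f)
features, labels = data['features'], data['labels']
sizes, coords = data['sizes'], data['coords']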
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
2. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
End of explanation
import matplotlib.pyplot as plt
import random
import numpy as np
import csv
import pandas as pd
# Visualizations will be shown in the notebook.
%matplotlib inline
def show_sample(features, labels, histogram = 1, sample_num = 1, sample_index = -1, color_map ='brg'):
if histogram == 1 :
col_num = 2
#Create training sample + histogram plot
f, axarr = plt.subplots(sample_num+1, col_num, figsize=(col_num*4,(sample_num+1)*3))
else:
if sample_num <= 4:
col_num = sample_num
else:
col_num = 4
if sample_num%col_num == 0:
row_num = int(sample_num/col_num)
else:
row_num = int(sample_num/col_num)+1
if sample_num == 1:
#Create training sample plot
f, ax = plt.subplots(row_num, col_num)
else:
#Create training sample plot
f, axarr = plt.subplots(row_num, col_num, figsize=(col_num*4,(row_num+1)*2))
signnames = pd.read_csv('signnames.csv')
index = sample_index - 1
for i in range(0, sample_num, 1):
if sample_index < -1:
index = random.randint(0, len(features))
else:
index = index + 1
if histogram == 1 :
image = features[index].squeeze()
axarr[i,0].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[i,0].imshow(image,color_map)
hist,bins = np.histogram(image.flatten(),256, normed =1 )
cdf = hist.cumsum()
cdf_normalized = cdf * hist.max()/ cdf.max()
axarr[i,1].plot(cdf_normalized, color = 'b')
axarr[i,1].hist(image.flatten(),256, normed =1, color = 'r')
axarr[i,1].legend(('cdf','histogram'), loc = 'upper left')
axarr[i,0].axis('off')
axarr[sample_num,0].axis('off')
axarr[sample_num,1].axis('off')
else:
image = features[index].squeeze()
if row_num > 1:
axarr[int(i/col_num),i%col_num].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[int(i/col_num),i%col_num].imshow(image,color_map)
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
elif sample_num == 1:
ax.set_title('%s' % signnames.iloc[labels[index], 1])
ax.imshow(image,color_map)
ax.axis('off')
ax.axis('off')
ax.axis('off')
else:
axarr[i%col_num].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[i%col_num].imshow(image,color_map)
axarr[i%col_num].axis('off')
axarr[i%col_num].axis('off')
axarr[i%col_num].axis('off')
# Tweak spacing to prevent clipping of title labels
f.tight_layout()
plt.show()
def show_training_dataset_histogram(labels_train,labels_valid,labels_test):
fig, ax = plt.subplots(figsize=(15,5))
temp = [labels_train,labels_valid,labels_test]
n_classes = np.unique(y_train).size
# the histogram of the training data
n, bins, patches = ax.hist(temp, n_classes, label=["Train","Valid","Test"])
ax.set_xlabel('Classes')
ax.set_ylabel('Number of occurence')
ax.set_title(r'Histogram of the data sets')
ax.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
plt.show()
show_training_dataset_histogram(y_train,y_valid,y_test)
show_sample(X_train, y_train, sample_num = 6)
Explanation: 3. Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
End of explanation
import cv2
from tqdm import tqdm
from sklearn.utils import shuffle
def random_transform_image(dataset, index):
# Hyperparameters
# Values inspired from Pierre Sermanet and Yann LeCun Paper : Traffic Sign Recognition with Multi-Scale Convolutional Networks
Scale_change_max = 0.1
Translation_max = 2 #pixels
Rotation_max = 15 #degrees
Brightness_max = 0.1
# Generate random transformation values
trans_x = np.random.uniform(-Translation_max,Translation_max)
trans_y = np.random.uniform(-Translation_max,Translation_max)
angle = np.random.uniform(-Rotation_max,Rotation_max)
scale = np.random.uniform(1-Scale_change_max,1+Scale_change_max)
bright = np.random.uniform(-Brightness_max,Brightness_max)
#Brightness
#create white image
white_img = 255*np.ones((32,32,3), np.uint8)
black_img = np.zeros((32,32,3), np.uint8)
if bright >= 0:
img = cv2.addWeighted(dataset[index].squeeze(),1-bright,white_img,bright,0)
else:
img = cv2.addWeighted(dataset[index].squeeze(),bright+1,black_img,bright*-1,0)
# Scale
img = cv2.resize(img,None,fx=scale, fy=scale, interpolation = cv2.INTER_CUBIC)
# Get image shape afeter scaling
rows,cols,chan = img.shape
# Pad with zeroes before rotation if image shape is less than 32*32*3
if rows < 32:
offset = int((32-img.shape[0])/2)
# If shape is an even number
if img.shape[0] %2 == 0:
img = cv2.copyMakeBorder(img,offset,offset,offset,offset,cv2.BORDER_CONSTANT,value=[0,0,0])
else:
img = cv2.copyMakeBorder(img,offset,offset+1,offset+1,offset,cv2.BORDER_CONSTANT,value=[0,0,0])
# Update image shape after padding
rows,cols,chan = img.shape
# Rotate
M = cv2.getRotationMatrix2D((cols/2,rows/2),angle,1)
img = cv2.warpAffine(img,M,(cols,rows))
# Translation
M = np.float32([[1,0,trans_x],[0,1,trans_y]])
img = cv2.warpAffine(img,M,(cols,rows))
# Crop centered if image shape is greater than 32*32*3
if rows > 32:
offset = int((img.shape[0]-32)/2)
img = img[offset: 32 + offset, offset: 32 + offset]
return img
# Parameters
# Max example number per class
num_example_per_class = np.bincount(y_train)
min_example_num = max(num_example_per_class)
for i in range(len(num_example_per_class)):
# Update number of examples by class
num_example_per_class = np.bincount(y_train)
# If the class lacks examples...
if num_example_per_class[i] < min_example_num:
# Locate where pictures of this class are located in the training set..
pictures = np.array(np.where(y_train == i)).T
# Compute the number of pictures to be generated
num_example_to_generate = min_example_num - num_example_per_class[i]
# Compute the number of iteration necessary on the real data
num_iter = int( num_example_to_generate/len(pictures) ) + 1
# Compute the pool of real data necessary to fill the classes
if num_iter == 1 :
num_pictures = num_example_to_generate
else:
num_pictures = len(pictures)
# # Limit the number of iteration to 10
# num_iter = min(num_iter, 10)
# Create empty list
more_X = []
more_y = []
for k in range(num_iter):
# if we are in the last iteration, num_pictures is adjusted to fit the min_example_num
if (k == num_iter - 1) and (num_iter > 1):
num_pictures = min_example_num - num_iter * len(pictures)
# For each pictures of this class, generate 1 more synthetic image
pbar = tqdm(range(num_pictures), desc='Iter {:>2}/{}'.format(i+1, len(num_example_per_class)), unit='examples')
for j in pbar:
# Append the transformed picture
more_X.append(random_transform_image(X_train,pictures[j]))
# Append the class number
more_y.append(i)
# Append the synthetic images to the training set
X_train = np.append(X_train, np.array(more_X), axis=0)
y_train = np.append(y_train, np.array(more_y), axis=0)
print("New training feature shape",X_train.shape)
print("New training label shape",y_train.shape)
print("Data augmentation done!")
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
Neural network architecture (is the network over or underfitting?)
Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
4. Augment the Data Set
End of explanation
# Visualization
show_training_dataset_histogram(y_train,y_valid,y_test)
show_sample(X_train, y_train, histogram = 0, sample_num = 8, sample_index = 35000)
Explanation: 5. Show a sample of the augmented dataset
End of explanation
import cv2
from numpy import newaxis
def equalize_Y_histogram(features):
images = []
for image in features:
# Convert RGB to YUV
temp = cv2.cvtColor(image, cv2.COLOR_BGR2YUV);
# Equalize Y histogram in order to get better contrast accross the dataset
temp[:,:,0] = cv2.equalizeHist(temp[:,:,0])
# Convert back YUV to RGB
temp = cv2.cvtColor(temp, cv2.COLOR_YUV2BGR)
images.append(temp)
return np.array(images)
def CLAHE_contrast_normalization(features):
images = []
for image in features:
# create a CLAHE object
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(4,4))
temp = clahe.apply(image)
images.append(temp)
return np.array(images)
def convert_to_grayscale(features):
gray_images = []
for image in features:
# Convert RGB to grayscale
temp = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray_images.append(temp)
return np.array(gray_images)
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
a = 0.1
b = 0.9
image_data_norm = a + ((image_data - np.amin(image_data))*(b-a))/(np.amax(image_data) - np.amin(image_data))
return image_data_norm
Explanation: 6. Pre-process functions
End of explanation
index = 255
X_temp1 = convert_to_grayscale(X_train)
X_temp2 = CLAHE_contrast_normalization(X_temp1)
X_temp3 = normalize_grayscale(X_temp2)
show_sample(X_train, y_train, histogram = 1, sample_num = 1, sample_index = index)
show_sample(X_temp1, y_train, histogram = 1, sample_num = 1, sample_index = index, color_map ='gray')
show_sample(X_temp2, y_train, histogram = 1, sample_num = 1, sample_index = index, color_map ='gray')
print(X_temp2[index])
print(X_temp3[index])
Explanation: 7. Show a sample of the preprocess functions outputs
End of explanation
#Preprocessing pipeline
print('Preprocessing training features...')
X_train = convert_to_grayscale(X_train)
X_train = CLAHE_contrast_normalization(X_train)
X_train = normalize_grayscale(X_train)
X_train = X_train[..., newaxis]
print("Processed shape =", X_train.shape)
print('Preprocessing validation features...')
X_valid = convert_to_grayscale(X_valid)
X_valid = CLAHE_contrast_normalization(X_valid)
X_valid = normalize_grayscale(X_valid)
X_valid = X_valid[..., newaxis]
print("Processed shape =", X_valid.shape)
print('Preprocessing test features...')
X_test = convert_to_grayscale(X_test)
X_test = CLAHE_contrast_normalization(X_test)
X_test = normalize_grayscale(X_test)
X_test = X_test[..., newaxis]
print("Processed shape =", X_test.shape)
# Shuffle the training dataset
X_train, y_train = shuffle(X_train, y_train)
print("Pre-processing done!")
Explanation: 8. Preprocess the Dataset
End of explanation
import tensorflow as tf
from tensorflow.contrib.layers import flatten
def model(x, keep_prob):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Network Parameters
    n_classes = 10 # CIFAR-10 total classes (10 categories)
filter_size = 5
# Store layers weight & bias
weights = {
'wc1' : tf.Variable(tf.truncated_normal([filter_size, filter_size, 1, 100], mean = mu, stddev = sigma)),
'wc2' : tf.Variable(tf.truncated_normal([filter_size, filter_size, 100, 200], mean = mu, stddev = sigma)),
'wfc1': tf.Variable(tf.truncated_normal([9900, 100], mean = mu, stddev = sigma)),
'out' : tf.Variable(tf.truncated_normal([100, n_classes], mean = mu, stddev = sigma))}
biases = {
'bc1' : tf.Variable(tf.zeros([100])),
'bc2' : tf.Variable(tf.zeros([200])),
'bfc1': tf.Variable(tf.zeros([100])),
'out' : tf.Variable(tf.zeros([n_classes]))}
def conv2d(x, W, b, strides=1., padding='SAME'):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding=padding)
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2, padding='SAME'):
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding=padding)
# Layer 1: Convolution 1 - 32*32*1 to 28*28*100
conv1 = conv2d(x, weights['wc1'], biases['bc1'], padding='VALID')
# Max Pool - 28*28*100 to 14*14*100
conv1 = maxpool2d(conv1, k=2)
# Layer 2: Convolution 2 - 14*14*100 to 10*10*200
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'], padding='VALID')
# Max Pool - 10*10*200 to 5*5*200
conv2 = maxpool2d(conv2, k=2)
#Fork second max pool - 14*14*100 to 7*7*100
conv1 = maxpool2d(conv1, k=2)
#Flatten conv1. Input = 7*7*100, Output = 4900
conv1 = tf.contrib.layers.flatten(conv1)
# Flatten conv2. Input = 5x5x200. Output = 5000.
conv2 = tf.contrib.layers.flatten(conv2)
# Concatenate
flat = tf.concat(1,[conv1,conv2])
# Layer 3 : Fully Connected. Input = 9900. Output = 100.
fc1 = tf.add(tf.matmul(flat, weights['wfc1']), biases['bfc1'])
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, keep_prob)
# Layer 4: Fully Connected. Input = 100. Output = n_classes.
logits = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return logits
Explanation: 9. Model Architecture
| Layer | Description | Input | Output |
|:-------------:|:---------------------------------------------:|:-----------------:|:---------------------------:|
| Input | 32x32x1 Grayscale image | Image | Convolution 1 |
| Convolution 1 | 1x1 stride, valid padding, outputs 28x28x100 | Input | RELU |
| RELU 1 | | Convolution 1 | Max Pooling 1 |
| Max pooling 1 | 2x2 stride, outputs 14x14x100 | RELU 1 | Convolution 2, Max Pooling 3|
| Convolution 2 | 1x1 stride, valid padding, outputs 10x10x200 | Max pooling 1 | RELU 2 |
| RELU 2 | | Convolution 2 | Max pooling 2 |
| Max pooling 2 | 2x2 stride, outputs 5x5x200 | RELU 2 | Flatten 2 |
| Max pooling 3 | 2x2 stride, outputs 7x7x100 | Max pooling 1 | Flatten 1 |
| Flatten 1 | Input = 7x7x100, Output = 4900 | Max pooling 3 | Concatenate 1 |
| Flatten 2 | Input = 5x5x200, Output = 5000 | Max pooling 2 | Concatenate 1 |
| Concatenate 1 | Input1 = 4900, Input1 = 5000, Output = 9900 | Max pooling 2 and 3 |Fully connected |
| Fully connected | Fully Connected. Input = 9900, Output = 100 | Concatenate 1 | Dropout |
| Dropout | Keep prob = 0.75 | Fully connected | Softmax |
| Softmax | Fully Connected. Input = 100, Output = 43 | Dropout | Probabilities |
End of explanation
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
#Hyperparameters
EPOCHS = 100 #Max EPOCH number, if ever early stopping doesn't kick in
BATCH_SIZE = 256 #Max batch size
rate = 0.001 #Base learning rate
keep_probability = 0.75 #Keep probability for dropout..
max_iter_wo_improvmnt = 3000 #For early stopping
Explanation: 10. Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
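A rough way to turn this into an automatic check is sketched below; the 0.9 and 0.05 thresholds are arbitrary, and train_accuracy / valid_accuracy are the values computed with evaluate() later in this notebook.
```
if train_accuracy < 0.9 and valid_accuracy < 0.9:
    print("Both accuracies low -> possible underfitting")
elif train_accuracy - valid_accuracy > 0.05:
    print("Large train/validation gap -> possible overfitting")
```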
End of explanation
#Declare placeholder tensors
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32)
one_hot_y = tf.one_hot(y, n_classes)
Explanation: 11. Features and Labels
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
End of explanation
logits = model(x, keep_prob)
probabilities = tf.nn.softmax(logits)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: 12. Training Pipeline
Create a training pipeline that uses the model to classify German Traffic Sign Benchmarks data.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: 13. Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
End of explanation
from sklearn.utils import shuffle
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
# Early stopping threshold: max_iter_wo_improvmnt (3000 iterations), defined with the hyperparameters above
print("Training...")
iteration = 0
best_valid_accuracy = 0
best_accuracy_iter = 0
stop = 0
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
iteration = iteration + 1
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: keep_probability})
# After 10 Epochs, for every 200 iterations validation accuracy is checked
if (iteration % 200 == 0 and i > 10):
validation_accuracy = evaluate(X_valid, y_valid)
if validation_accuracy > best_valid_accuracy:
best_valid_accuracy = validation_accuracy
best_accuracy_iter = iteration
saver = tf.train.Saver()
saver.save(sess, './best_model')
print("Improvement found, model saved!")
stop = 0
# Stopping criteria: stop training if there has been no improvement for max_iter_wo_improvmnt (3000) iterations
if (iteration - best_accuracy_iter) > max_iter_wo_improvmnt:
print("Stopping criteria met..")
stop = 1
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
if stop == 1:
break
# saver.save(sess, './lenet')
# print("Model saved")
Explanation: 14. Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
End of explanation
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
print("Evaluating..")
train_accuracy = evaluate(X_train, y_train)
print("Train Accuracy = {:.3f}".format(train_accuracy))
valid_accuracy = evaluate(X_valid, y_valid)
print("Valid Accuracy = {:.3f}".format(valid_accuracy))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: 15. Evaluate accuracy of the different data sets
End of explanation
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
test_images = os.listdir('traffic-signs-data/web_found_signs/')
X_web = []
for file in test_images:
image = mpimg.imread('traffic-signs-data/web_found_signs/' + file)
plt.imshow(image)
plt.show()
print("Loaded ", file)
X_web.append(image)
X_web = np.array(X_web)
# Preprocess images
print('Preprocessing features...')
X_web = equalize_Y_histogram(X_web)
X_web = convert_to_grayscale(X_web)
X_web = normalize_grayscale(X_web)
X_web = X_web[..., newaxis]
print("Processed shape =", X_web.shape)
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
16. Load and Show the Images
End of explanation
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
import tensorflow as tf
# hardcoded..
y_web = [9,22,2,18,1,17,4,10,38,4,4,23]
#We have to set the keep probability to 1.0 in the model..
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
logits_web = sess.run(tf.argmax(logits,1), feed_dict={x: X_web, keep_prob: 1.0})
print("Prediction =", logits_web)
# show_sample(X_web, logits_web, histogram = 0, sample_num = len(test_images), sample_index = 0, color_map = 'gray')
#Number of columns to show
sample_num = len(test_images)
col_num = 4
if sample_num%col_num == 0:
row_num = int(sample_num/col_num)
else:
row_num = int(sample_num/col_num)+1
#Create training sample plot
f, axarr = plt.subplots(row_num, col_num, figsize=(col_num*4,(row_num+1)*2))
signnames = pd.read_csv('signnames.csv')
for i in range(0, sample_num, 1):
image = X_web[i].squeeze()
if logits_web[i] != y_web[i]:
color_str = 'red'
else:
color_str = 'green'
title_str = 'Predicted : %s \n Real: %s' % (signnames.iloc[logits_web[i], 1],signnames.iloc[y_web[i], 1])
axarr[int(i/col_num),i%col_num].set_title(title_str, color = color_str)
axarr[int(i/col_num),i%col_num].imshow(image,'gray')
axarr[int(i/col_num),i%col_num].axis('off')
f.tight_layout()
plt.show()
Explanation: 17. Predict the Sign Type for Each Image
End of explanation
### Calculate the accuracy for these 5 new images.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_web, y_web)
print("Web images Accuracy = {:.3f}".format(test_accuracy))
Explanation: 18. Analyze Performance
End of explanation
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
import matplotlib.gridspec as gridspec
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
softmax_prob = sess.run(tf.nn.top_k(probabilities,k = 5), feed_dict={x: X_web, keep_prob: 1.0})
signnames = pd.read_csv('signnames.csv')
for i in range(len(test_images)):
plt.figure(figsize = (6,2))
gs = gridspec.GridSpec(1, 2,width_ratios=[2,3])
plt.subplot(gs[0])
plt.imshow(X_web[i].squeeze(),cmap="gray")
plt.axis('off')
plt.subplot(gs[1])
plt.barh(6-np.arange(5),softmax_prob[0][i], align='center')
if logits_web[i] != y_web[i]:
color_str = 'red'
else:
color_str = 'green'
for i_label in range(5):
temp_string = "%.1f %% : %s" % (softmax_prob[0][i][i_label]*100, str(signnames.iloc[softmax_prob[1][i][i_label], 1]))
plt.text(softmax_prob[0][i][0]*1.1,6-i_label-.15, temp_string, color = color_str)
plt.show()
Explanation: 19. Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
```
(5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
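As a quick cross-check, added here for reference (it assumes a reasonably recent NumPy for np.take_along_axis), the same top-3 selection can be reproduced without TensorFlow:
```
idx = np.argsort(a, axis=1)[:, ::-1][:, :3]     # indices of the 3 largest entries per row
vals = np.take_along_axis(a, idx, axis=1)       # the matching probabilities
print(idx[0])    # [3 0 5]
print(vals[0])   # [0.34763842 0.24879643 0.12789202]
```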
End of explanation
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
Explanation: Step 4: Visualize the Neural Network's State with Test Images
This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
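A minimal usage sketch is shown below. It is added for illustration only and assumes you expose the first convolution's activation tensor under a hypothetical name such as conv1_activation when building the graph; the model above builds everything inside model(), so that tensor would first need to be returned or added to a collection.
```
# hypothetical usage: conv1_activation is assumed to be the first conv layer's output tensor
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    outputFeatureMap(X_web[0:1], conv1_activation, plt_num=1)
```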
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
End of explanation |
13,533 | Given the following text description, write Python code to implement the functionality described.
Description:
Find sum of digits in factorial of a number
Function to multiply x with large number stored in vector v . Result is stored in v . ; Calculate res + prev carry ; updation at ith position ; Returns sum of digits in n ! ; One by one multiply i to current vector and update the vector . ; Find sum of digits in vector v [ ] ; Driver code
| Python Code:
def multiply(v , x ) :
carry = 0
size = len(v )
for i in range(size ) :
res = carry + v[i ] * x
v[i ] = res % 10
carry = res // 10
while(carry != 0 ) :
v . append(carry % 10 )
carry //= 10
def findSumOfDigits(n ) :
v = [ 1 ] # running factorial stored as a list of digits, least-significant first
for i in range(1 , n + 1 ) :
multiply(v , i )
sum = 0
size = len(v )
for i in range(size ) :
sum += v[i ]
return sum
if __name__ == "__main__" :
n = 1000
print(findSumOfDigits(n ) )
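# Added sanity check: the digit sum of 10! = 3628800 is 27, so both values printed below should match
import math
print(findSumOfDigits(10), sum(int(d) for d in str(math.factorial(10))))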
|
13,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to LAPM
LAPM is a python package for the analysis of linear autonomous pool (compartmental) models. It can be used to obtain a large set of different system-level diagnostics of compartmental models. This can be done either in symbolic or numeric form. In this notebook, we will introduce the basics of the package.
We assume that you have already installed the package following the instructions provided in the download page. After the package is installed, you can import it following the commands
Step1: In the second line above, we imported also the linear_autonomous_pool_model module which contains most of the functions required for the examples in this notebook.
We will create now a simple two-compartment model with the following syntax
Step2: Notice that we created a symbolic version of the model with no assignment of values to the parameters. This is useful because we can make computations in symbolic form only, or we can assign parameter values later.
With the compartmental matrix and the input vector created above, we can now create a compartmental model as
Step3: Symbolic computations
We can use now the set of functions available in LAPM with this model. For example, we can compute the mean system age as
Step4: For an output easier to read on the screen
Step5: Latex output can be obtained as
Step6: You can copy and paste this LaTeX output to a Markdown or LaTeX document
$$
\frac{\frac{\alpha u_{1}}{\lambda_{2}} + \frac{u_{2}}{\lambda_{2}}}{\lambda_{2} \left(\frac{\alpha u_{1}}{\lambda_{2}} + \frac{u_{2}}{\lambda_{2}} + \frac{u_{1}}{\lambda_{1}}\right)} + \frac{u_{1} \left(\frac{\alpha}{\lambda_{2}} + \frac{1}{\lambda_{1}}\right)}{\lambda_{1} \left(\frac{\alpha u_{1}}{\lambda_{2}} + \frac{u_{2}}{\lambda_{2}} + \frac{u_{1}}{\lambda_{1}}\right)}
$$
Numerical calculations
In most cases, we actually want to perform numerical computations of system-level diagnostics. In this case, we need first to assign values to the elements of the compartmental system. For instance, we can use the subs function to assign numerical values to existing matrices and vectors
Step7: We take now these numerical arguments and create a new compartmental system and compute the mean system age as above
Step8: Other useful system diagnostics are | Python Code:
from sympy import *
from LAPM import *
from LAPM.linear_autonomous_pool_model import LinearAutonomousPoolModel
Explanation: Introduction to LAPM
LAPM is a python package for the analysis of linear autonomous pool (compartmental) models. It can be used to obtain a large set of different system-level diagnostics of compartmental models. This can be done either in symbolic or numeric form. In this notebook, we will introduce the basics of the package.
We assume that you have already installed the package following the instructions provided in the download page. After the package is installed, you can import it following the commands:
End of explanation
lambda_1, lambda_2, alpha, u_1, u_2 = symbols('lambda_1 lambda_2 alpha u_1 u_2', positive=True)
A = Matrix([[ -lambda_1, 0],
[alpha*lambda_1, -lambda_2]])
u = Matrix(2, 1, [u_1, u_2])
Explanation: In the second line above, we imported also the linear_autonomous_pool_model module which contains most of the functions required for the examples in this notebook.
We will create now a simple two-compartment model with the following syntax
End of explanation
M=LinearAutonomousPoolModel(u, A)
Explanation: Notice that we created a symblic version of the model with no assignment of values to the parameters. This is useful because we can make computations in symbolic form only, or we can assign parameter values later.
With the compartmental matrix and the input vector created above, we can now create a compartmental model as
End of explanation
M.A_expected_value
Explanation: Symbolic computations
We can use now the set of functions available in LAPM with this model. For example, we can compute the mean system age as
End of explanation
pprint(M.A_expected_value)
Explanation: For an output easier to read on the screen
End of explanation
print(latex(M.A_expected_value))
Explanation: Latex output can be obtained as
End of explanation
u1=u.subs({u_1: 2, u_2: 4})
A1=A.subs({lambda_1: 0.8, lambda_2: 0.01, alpha: 0.13})
Explanation: You can copy and paste this Latex ouput to a markdown or latex document
$$
\frac{\frac{\alpha u_{1}}{\lambda_{2}} + \frac{u_{2}}{\lambda_{2}}}{\lambda_{2} \left(\frac{\alpha u_{1}}{\lambda_{2}} + \frac{u_{2}}{\lambda_{2}} + \frac{u_{1}}{\lambda_{1}}\right)} + \frac{u_{1} \left(\frac{\alpha}{\lambda_{2}} + \frac{1}{\lambda_{1}}\right)}{\lambda_{1} \left(\frac{\alpha u_{1}}{\lambda_{2}} + \frac{u_{2}}{\lambda_{2}} + \frac{u_{1}}{\lambda_{1}}\right)}
$$
Numerical calculations
In most cases, we actually want to perform numerical computations of system-level diagnostics. In this case, we need first to assign values to the elements of the compartmental system. For instance, we can use the subs function to assign numerical values to existing matrices and vectors:
End of explanation
M1=LinearAutonomousPoolModel(u1, A1)
M1.A_expected_value
Explanation: We take now these numerical arguments and create a new compartmental system and compute the mean system age as above
End of explanation
M1.A_standard_deviation # standard deviation of the system age distribution
M1.A_quantile(0.5) # Median (50% quantile) of the system age distribution
M1.T_expected_value #Mean transit time
M1.T_standard_deviation # standard deviation of the transit time distribution
M1.T_quantile(0.5) # Median (50% quantile) of the transit time distribution
M1.a_expected_value # Mean age vector of individual pools
M1.a_quantile(0.5) # Median age of individual pools
M1.T_laplace
M1.A_laplace
M1.r_compartments # release flux of individual compartments
M1.r_total # Total release flux
M1.entropy_per_jump
M1.entropy_per_cycle
M1.entropy_rate
Explanation: Other useful system diagnostics are:
End of explanation |
13,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
Step2: Convolution
Step4: Aside
Step5: Convolution
Step6: Max pooling
Step7: Max pooling
Step8: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step9: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
Step13: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug
Step14: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note
Step15: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
Step16: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting
Step17: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set
Step18: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following
Step19: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization
Step20: Spatial batch normalization | Python Code:
# As usual, a bit of setup
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around 2e-8
print('Testing conv_forward_naive')
print('difference: ', rel_error(out, correct_out))
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
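For orientation, a minimal (and deliberately slow) sketch of the naive forward pass is shown below. It is added for reference only; the conv_forward_sketch name is illustrative, and the version you write in cs231n/layers.py can be organized differently and must also return the cache needed by the backward pass.
```
def conv_forward_sketch(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                      # every image
        for f in range(F):                  # every filter
            for i in range(H_out):          # every output row
                for j in range(W_out):      # every output column
                    window = x_pad[n, :, i * stride:i * stride + HH, j * stride:j * stride + WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out
```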
You can test your implementation by running the following:
End of explanation
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
Tiny helper to show images as uint8 and remove axis labels
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
np.random.seed(231)
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-8'
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
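The numeric check itself is just a centered difference applied coordinate by coordinate. A sketch for a scalar-valued function is shown below (the numeric_gradient_sketch name is illustrative); eval_numerical_gradient_array, imported above, additionally contracts the array-valued output with the upstream dout.
```
def numeric_gradient_sketch(f, x, h=1e-5):
    # centered difference (f(x+h) - f(x-h)) / (2h) for a scalar-valued f
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fxph = f(x)
        x[ix] = old - h
        fxmh = f(x)
        x[ix] = old
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad
```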
End of explanation
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
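Each output element is simply the maximum over its pooling window. A compact sketch is shown below for reference; your version in cs231n/layers.py also needs to return a cache for the backward pass.
```
def max_pool_forward_sketch(x, pool_param):
    N, C, H, W = x.shape
    ph, pw, s = pool_param['pool_height'], pool_param['pool_width'], pool_param['stride']
    H_out = 1 + (H - ph) // s
    W_out = 1 + (W - pw) // s
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i * s:i * s + ph, j * s:j * s + pw]
            out[:, :, i, j] = window.max(axis=(2, 3))   # max over each pooling window
    return out
```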
Check your implementation by running the following:
End of explanation
np.random.seed(231)
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
End of explanation
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting conv_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
print('dw difference: ', rel_error(dw_naive, dw_fast))
print('db difference: ', rel_error(db_naive, db_fast))
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
np.random.seed(231)
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('speedup: %fx' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
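These helpers are nothing more than compositions that thread the caches through. A sketch of the conv-relu pair is shown below; the actual file may differ in small details, and the *_sketch names are illustrative.
```
def conv_relu_forward_sketch(x, w, b, conv_param):
    a, conv_cache = conv_forward_fast(x, w, b, conv_param)   # convolution
    out, relu_cache = relu_forward(a)                         # nonlinearity
    return out, (conv_cache, relu_cache)

def conv_relu_backward_sketch(dout, cache):
    conv_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return conv_backward_fast(da, conv_cache)
```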
End of explanation
class ThreeLayerConvNet(object):
A three-layer convolutional network with the following architecture:
conv - relu - 2x2 max pool - affine - relu - affine - softmax
The network operates on minibatches of data that have shape (N, C, H, W)
consisting of N images, each with height H and width W and with C input
channels.
def __init__(self, input_dim=(3, 32, 32), num_filters=32, filter_size=7,
hidden_dim=100, num_classes=10, weight_scale=1e-3, reg=0.0,
dtype=np.float32):
Initialize a new network.
Inputs:
- input_dim: Tuple (C, H, W) giving size of input data
- num_filters: Number of filters to use in the convolutional layer
- filter_size: Size of filters to use in the convolutional layer
- hidden_dim: Number of units to use in the fully-connected hidden layer
- num_classes: Number of scores to produce from the final affine layer.
- weight_scale: Scalar giving standard deviation for random initialization
of weights.
- reg: Scalar giving L2 regularization strength
- dtype: numpy datatype to use for computation.
self.params = {}
self.reg = reg
self.dtype = dtype
############################################################################
# TODO: Initialize weights and biases for the three-layer convolutional #
# network. Weights should be initialized from a Gaussian with standard #
# deviation equal to weight_scale; biases should be initialized to zero. #
# All weights and biases should be stored in the dictionary self.params. #
# Store weights and biases for the convolutional layer using the keys 'W1' #
# and 'b1'; use keys 'W2' and 'b2' for the weights and biases of the #
# hidden affine layer, and keys 'W3' and 'b3' for the weights and biases #
# of the output affine layer. #
############################################################################
# def _init_model(self, D, C, H):
# D, H, C = input_dim, hidden_dim, num_classes
# The output size of convolution
# 32 - 7 + 1 with pad=0 and stride=1
# (32 + 2*p - 7)/ stride + 1
# 32-7+1 * 32-7+1 * 32
# 32-6 * 32-6 * 32
# 26 * 26 * 32
# CxHxW
in_C, in_H, in_W = input_dim
stride, pad = 1, (filter_size - 1) // 2
H = ((in_H + (2*pad) - filter_size) / stride) + 1
W = ((in_W + (2*pad) - filter_size) / stride) + 1
pool_H, pool_W, pool_stride, pool_pad = 2, 2, 2, 0
H = ((H + (2*pool_pad) - pool_H) / pool_stride) + 1
W = ((W + (2*pool_pad) - pool_W) / pool_stride) + 1
# print('H, W', H, W)
# print(int(H * W * num_filters))
# print(16 * 16 * 32)
self.params = dict(
W1=np.random.randn(num_filters, 3, filter_size, filter_size) * weight_scale,
W2=np.random.randn(int(H * W * num_filters), hidden_dim) * weight_scale,
W3=np.random.randn(hidden_dim, num_classes) * weight_scale,
b1=np.zeros((num_filters, 1)),
b2=np.zeros((1, hidden_dim)),
b3=np.zeros((1, num_classes))
)
pass
############################################################################
# END OF YOUR CODE #
############################################################################
for k, v in self.params.items():
self.params[k] = v.astype(dtype)
def loss(self, X, y=None):
Evaluate loss and gradient for the three-layer convolutional network.
Input / output: Same API as TwoLayerNet in fc_net.py.
W1, b1 = self.params['W1'], self.params['b1']
W2, b2 = self.params['W2'], self.params['b2']
W3, b3 = self.params['W3'], self.params['b3']
# pass conv_param to the forward pass for the convolutional layer
filter_size = W1.shape[2]
conv_param = {'stride': 1, 'pad': (filter_size - 1) // 2}
# # resulting output size
# # conv stride = 1
# # conv pad = (7-1)//2== 6//2==6/2==3
# # (32 + 2*3 - 7)/ 1 + 1 =
# # 32 + 6 -7 +1 =
# # 32 -1 +1 = 32
# stride, pad = 1, (filter_size - 1) // 2
# # CxHxW
# in_C, in_H, in_W = input_dim
# print('input_dim', input_dim)
# H = ((in_H + (2*pad) - filter_size) / stride) + 1
# W = ((in_W + (2*pad) - filter_size) / stride) + 1
# print('H, W', H, W)
# pass pool_param to the forward pass for the max-pooling layer
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
# # output size
# # pad=0, stride=2
# # (32 + 2*0 - 2)/2 +1
# # (32 -2)/2 +1
# # 30/2 +1
# # 15 +1
# # 16
# pool_H, pool_W, pool_stride, pool_pad = 2, 2, 2, 0
# H = ((H + (2*pool_pad) - pool_H) / pool_stride) + 1
# W = ((W + (2*pool_pad) - pool_W) / pool_stride) + 1
# print('H, W', H, W)
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer convolutional net, #
# computing the class scores for X and storing them in the scores #
# variable. #
############################################################################
# Input layer
h1, conv_cache = conv_forward_naive(b=b1, conv_param=conv_param, w=W1, x=X)
# print('h1.shape', h1.shape)
h1, nl_cache1 = relu_forward(x=h1)
h1, pool_cache = max_pool_forward_naive(pool_param=pool_param, x=h1)
# print('h1.shape', h1.shape)
# Hidden layer
# h1 = h1.reshape(X.shape[0], -1) # WHY not this one?
h2 = h1.ravel().reshape(X.shape[0], -1)
# print('h1.shape', h1.shape)
h2, affine_cache = affine_forward(b=b2, w=W2, x=h2)
h2, nl_cache2 = relu_forward(x=h2)
# print('h2.shape', h2.shape)
# Output layer
scores, scores_cache = affine_forward(b=b3, w=W3, x=h2)
# print('scores.shape', scores.shape)
pass
############################################################################
# END OF YOUR CODE #
############################################################################
if y is None:
return scores
loss, grads = 0, {}
############################################################################
# TODO: Implement the backward pass for the three-layer convolutional net, #
# storing the loss and gradients in the loss and grads variables. Compute #
# data loss using softmax, and make sure that grads[k] holds the gradients #
# for self.params[k]. Don't forget to add L2 regularization! #
############################################################################
reg_loss = regularization(lam=self.reg, model=self.params, reg_type='l2')
loss, dy = softmax_loss(x=scores, y=y)
loss += reg_loss
# Output layer
dh2, dW3, db3 = affine_backward(cache=scores_cache, dout=dy)
# print('dh2.shape', dh2.shape)
# Hidden layer
dh2 = relu_backward(cache=nl_cache2, dout=dh2)
dh2, dW2, db2 = affine_backward(cache=affine_cache, dout=dh2)
# print('dh1.shape', dh1.shape)
dh1 = dh2.reshape(h1.shape)
# print('dh1.shape', dh1.shape)
# Input layer
dh1 = max_pool_backward_naive(cache=pool_cache, dout=dh1)
dh1 = relu_backward(cache=nl_cache1, dout=dh1)
_, dW1, db1 = conv_backward_naive(cache=conv_cache, dout=dh1)
# Gradients
grads = dict(W1 = dW1, b1 = db1,
W2 = dW2, b2 = db2,
W3 = dW3, b3 = db3)
pass
############################################################################
# END OF YOUR CODE #
############################################################################
return loss, grads
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
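For reference, the expected value is easy to compute directly:
```
print('expected initial softmax loss for 10 classes:', np.log(10))   # ~2.3026
```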
End of explanation
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2.
End of explanation
np.random.seed(231)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=15, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print('Before spatial batch normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x.mean(axis=(0, 2, 3)))
print(' Stds: ', x.std(axis=(0, 2, 3)))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
np.random.seed(231)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=(0, 2, 3)))
print(' stds: ', a_norm.std(axis=(0, 2, 3)))
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
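One convenient way to implement it is to fold N, H and W into a single sample axis and reuse the vanilla batch normalization. The sketch below assumes the batchnorm_forward function from the fully-connected part of the assignment is available; the *_sketch name is illustrative.
```
def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    # move channels last and collapse N, H, W into one sample axis
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    # restore the original (N, C, H, W) layout
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache
```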
End of explanation
np.random.seed(231)
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation |
13,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview of artifact detection
This tutorial covers the basics of artifact detection, and introduces the
artifact detection tools available in MNE-Python.
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>
Step1: What are artifacts?
Artifacts are parts of the recorded signal that arise from sources other than
the source of interest (i.e., neuronal activity in the brain). As such,
artifacts are a form of interference or noise relative to the signal of
interest. There are many possible causes of such interference, for example
Step2: Low-frequency drifts
Low-frequency drifts are most readily detected by visual inspection using the
basic
Step3: Low-frequency drifts are readily removed by high-pass filtering at a fairly
low cutoff frequency (the wavelength of the drifts seen above is probably
around 20 seconds, so in this case a cutoff of 0.1 Hz would probably suppress
most of the drift).
Power line noise
Power line artifacts are easiest to see on plots of the spectrum, so we'll
use
Step4: Here we see narrow frequency peaks at 60, 120, 180, and 240 Hz — the power
line frequency of the USA (where the sample data was recorded) and its 2nd,
3rd, and 4th harmonics. Other peaks (around 25 to 30 Hz, and the second
harmonic of those) are probably related to the heartbeat, which is more
easily seen in the time domain using a dedicated heartbeat detection function
as described in the next section.
Heartbeat artifacts (ECG)
MNE-Python includes a dedicated function
Step5: The horizontal streaks in the magnetometer image plot reflect the fact that
the heartbeat artifacts are superimposed on low-frequency drifts like the one
we saw in an earlier section; to avoid this you could pass
baseline=(-0.5, -0.2) in the call to
Step6: Here again we can visualize the spatial pattern of the associated field at
various times relative to the peak of the EOG response
Step7: Or, we can get an ERP/F plot with
Step8: Ocular artifacts (EOG)
Similar to the ECG detection and epoching methods described above, MNE-Python
also includes functions for detecting and extracting ocular artifacts | Python Code:
import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(0, 60).load_data() # just use a fraction of data for speed here
Explanation: Overview of artifact detection
This tutorial covers the basics of artifact detection, and introduces the
artifact detection tools available in MNE-Python.
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>:
End of explanation
ssp_projectors = raw.info['projs']
raw.del_proj()
Explanation: What are artifacts?
Artifacts are parts of the recorded signal that arise from sources other than
the source of interest (i.e., neuronal activity in the brain). As such,
artifacts are a form of interference or noise relative to the signal of
interest. There are many possible causes of such interference, for example:
Environmental artifacts
Persistent oscillations centered around the AC power line frequency_
(typically 50 or 60 Hz)
Brief signal jumps due to building vibration (such as a door slamming)
Electromagnetic field noise from nearby elevators, cell phones, the
geomagnetic field, etc.
Instrumentation artifacts
Electromagnetic interference from stimulus presentation (such as EEG
sensors picking up the field generated by unshielded headphones)
Continuous oscillations at specific frequencies used by head position
indicator (HPI) coils
Random high-amplitude fluctuations (or alternatively, constant zero
signal) in a single channel due to sensor malfunction (e.g., in surface
electrodes, poor scalp contact)
Biological artifacts
Periodic QRS_-like signal patterns (especially in magnetometer
channels) due to electrical activity of the heart
Short step-like deflections (especially in frontal EEG channels) due to
eye movements
Large transient deflections (especially in frontal EEG channels) due to
blinking
Brief bursts of high frequency fluctuations across several channels due
to the muscular activity during swallowing
There are also some cases where signals from within the brain can be
considered artifactual. For example, if a researcher is primarily interested
in the sensory response to a stimulus, but the experimental paradigm involves
a behavioral response (such as button press), the neural activity associated
with planning and executing the button press could be considered an
artifact relative to signal of interest (i.e., the evoked sensory response).
<div class="alert alert-info"><h4>Note</h4><p>Artifacts of the same genesis may appear different in recordings made by
different EEG or MEG systems, due to differences in sensor design (e.g.,
passive vs. active EEG electrodes; axial vs. planar gradiometers, etc).</p></div>
What to do about artifacts
There are 3 basic options when faced with artifacts in your recordings:
Ignore the artifact and carry on with analysis
Exclude the corrupted portion of the data and analyze the remaining data
Repair the artifact by suppressing artifactual part of the recording
while (hopefully) leaving the signal of interest intact
There are many different approaches to repairing artifacts, and MNE-Python
includes a variety of tools for artifact repair, including digital filtering,
independent components analysis (ICA), Maxwell filtering / signal-space
separation (SSS), and signal-space projection (SSP). Separate tutorials
demonstrate each of these techniques for artifact repair. Many of the
artifact repair techniques work on both continuous (raw) data and on data
that has already been epoched (though not necessarily equally well); some can
be applied to memory-mapped_ data while others require the data to be
copied into RAM. Of course, before you can choose any of these strategies you
must first detect the artifacts, which is the topic of the next section.
Artifact detection
MNE-Python includes a few tools for automated detection of certain artifacts
(such as heartbeats and blinks), but of course you can always visually
inspect your data to identify and annotate artifacts as well.
We saw in the introductory tutorial <tut-overview> that the example
data includes :term:SSP projectors <projector>, so before we look at
artifacts let's set aside the projectors in a separate variable and then
remove them from the :class:~mne.io.Raw object using the
:meth:~mne.io.Raw.del_proj method, so that we can inspect our data in its
original, raw state:
End of explanation
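The projectors we set aside are still available in ssp_projectors; a quick sketch of how to check them, and they could be re-attached later with :meth:~mne.io.Raw.add_proj once inspection is done (the variable names are the ones defined above):
print(ssp_projectors)
# restore later, if desired, with: raw.add_proj(ssp_projectors)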
mag_channels = mne.pick_types(raw.info, meg='mag')
raw.plot(duration=60, order=mag_channels, n_channels=len(mag_channels),
remove_dc=False)
Explanation: Low-frequency drifts
Low-frequency drifts are most readily detected by visual inspection using the
basic :meth:~mne.io.Raw.plot method, though it is helpful to plot a
relatively long time span and to disable channel-wise DC shift correction.
Here we plot 60 seconds and show all the magnetometer channels:
End of explanation
fig = raw.plot_psd(tmax=np.inf, fmax=250, average=True)
# add some arrows at 60 Hz and its harmonics:
for ax in fig.axes[1:]:
freqs = ax.lines[-1].get_xdata()
psds = ax.lines[-1].get_ydata()
for freq in (60, 120, 180, 240):
idx = np.searchsorted(freqs, freq)
ax.arrow(x=freqs[idx], y=psds[idx] + 18, dx=0, dy=-12, color='red',
width=0.1, head_width=3, length_includes_head=True)
Explanation: Low-frequency drifts are readily removed by high-pass filtering at a fairly
low cutoff frequency (the wavelength of the drifts seen above is probably
around 20 seconds, so in this case a cutoff of 0.1 Hz would probably suppress
most of the drift).
Power line noise
Power line artifacts are easiest to see on plots of the spectrum, so we'll
use :meth:~mne.io.Raw.plot_psd to illustrate.
End of explanation
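A minimal sketch of how both kinds of interference could be suppressed; the 0.1 Hz high-pass cutoff and the 60 Hz harmonics simply follow the values discussed above, and we filter copies so the raw object used in the rest of this tutorial stays untouched:
raw_highpass = raw.copy().filter(l_freq=0.1, h_freq=None)  # suppress slow drifts
raw_notched = raw.copy().notch_filter(freqs=[60, 120, 180, 240])  # suppress power line peaks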
ecg_epochs = mne.preprocessing.create_ecg_epochs(raw)
ecg_epochs.plot_image(combine='mean')
Explanation: Here we see narrow frequency peaks at 60, 120, 180, and 240 Hz — the power
line frequency of the USA (where the sample data was recorded) and its 2nd,
3rd, and 4th harmonics. Other peaks (around 25 to 30 Hz, and the second
harmonic of those) are probably related to the heartbeat, which is more
easily seen in the time domain using a dedicated heartbeat detection function
as described in the next section.
Heartbeat artifacts (ECG)
MNE-Python includes a dedicated function
:func:~mne.preprocessing.find_ecg_events in the :mod:mne.preprocessing
submodule, for detecting heartbeat artifacts from either dedicated ECG
channels or from magnetometers (if no ECG channel is present). Additionally,
the function :func:~mne.preprocessing.create_ecg_epochs will call
:func:~mne.preprocessing.find_ecg_events under the hood, and use the
resulting events array to extract epochs centered around the detected
heartbeat artifacts. Here we create those epochs, then show an image plot of
the detected ECG artifacts along with the average ERF across artifacts. We'll
show all three channel types, even though EEG channels are less strongly
affected by heartbeat artifacts:
End of explanation
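The lower-level detection function can also be called on its own; a small sketch (the variable names are ours):
ecg_events, ecg_ch, average_pulse = mne.preprocessing.find_ecg_events(raw)
print(f'{len(ecg_events)} heartbeats detected, average pulse {average_pulse:.1f} bpm')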
avg_ecg_epochs = ecg_epochs.average().apply_baseline((-0.5, -0.2))
Explanation: The horizontal streaks in the magnetometer image plot reflect the fact that
the heartbeat artifacts are superimposed on low-frequency drifts like the one
we saw in an earlier section; to avoid this you could pass
baseline=(-0.5, -0.2) in the call to
:func:~mne.preprocessing.create_ecg_epochs.
You can also get a quick look at the
ECG-related field pattern across sensors by averaging the ECG epochs together
via the :meth:~mne.Epochs.average method, and then using the
:meth:mne.Evoked.plot_topomap method:
End of explanation
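A sketch of the alternative suggested above, applying the baseline correction already when the epochs are created (the variable name is ours):
ecg_epochs_bl = mne.preprocessing.create_ecg_epochs(raw, baseline=(-0.5, -0.2))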
avg_ecg_epochs.plot_topomap(times=np.linspace(-0.05, 0.05, 11))
Explanation: Here again we can visualize the spatial pattern of the associated field at
various times relative to the peak of the ECG response:
End of explanation
avg_ecg_epochs.plot_joint(times=[-0.25, -0.025, 0, 0.025, 0.25])
Explanation: Or, we can get an ERP/F plot with :meth:~mne.Evoked.plot or a combined
scalp field maps and ERP/F plot with :meth:~mne.Evoked.plot_joint. Here
we've specified the times for scalp field maps manually, but if not provided
they will be chosen automatically based on peaks in the signal:
End of explanation
eog_epochs = mne.preprocessing.create_eog_epochs(raw, baseline=(-0.5, -0.2))
eog_epochs.plot_image(combine='mean')
eog_epochs.average().plot_joint()
Explanation: Ocular artifacts (EOG)
Similar to the ECG detection and epoching methods described above, MNE-Python
also includes functions for detecting and extracting ocular artifacts:
:func:~mne.preprocessing.find_eog_events and
:func:~mne.preprocessing.create_eog_epochs. Once again we'll use the
higher-level convenience function that automatically finds the artifacts and
extracts them in to an :class:~mne.Epochs object in one step. Unlike the
heartbeat artifacts seen above, ocular artifacts are usually most prominent
in the EEG channels, but we'll still show all three channel types. We'll use
the baseline parameter this time too; note that there are many fewer
blinks than heartbeats, which makes the image plots appear somewhat blocky:
End of explanation |
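As with the heartbeats, the lower-level detection function can be used directly when only the event times are needed; a sketch:
eog_events = mne.preprocessing.find_eog_events(raw)
print(f'{len(eog_events)} blink events found')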
13,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An object-oriented, containerized model of music notation
Abjad extends the Python programming language with an object-oriented, containerized model of common practice music notation. Let's explore the notes, rests and chords that make up the simplest part of Abjad's object model.
Notes
First we'll make a note. It will be a D5 with the duration of a dotted half
Step1: Now we can inspect different attributes of our note.
Notice how a dot chains together our note and its attributes
Step2: We can change our note's pitch like this
Step3: We can do the same with our note's duration
Step4: Abjad 3.2 implements 311 classes, all documented in the Abjad API.
To look up abjad.Note in the API, open the API.
Then use your browser's search function to look up "Note."
This will bring you to the reference page for abjad.Note.
Rest and chords
We can make rests and chords the same way we made our note
Step5: We get a default value of a quarter rest.
We can pass in a different duration as a string
Step6: Or as a pair of numbers equal to a fraction
Step7: Chords work similarly
Step8: LilyPond skips (invisible rests) are also available
Step9: And remember, we can get back to the attributes of our leaves using dot chaining syntax.
Step10: A containerized model of music notation
Abjad implements a containerized model of music notation
Step11: All other elements of notation are modeled as indicators. We create phrasing slurs, for example, by attaching an abjad.StartPhrasingSlur to one note and a matching abjad.StopPhrasingSlur to another note. Both classes are indicators
Step12: Some indicators are really simple, like abjad.Articulation. Here we use indexing to attach a staccato to the note "at index 1" in staff two. Because Python's indexing starts at 0, it is the second note in staff two that gets the staccato
Step13: It might be the case that we want to apply the same indicator to a consecutive series of leaves. Note that we can use iteration to do the same thing to multiple components sequentially. Here we use slice notation to attach a staccato to every leaf in staff one, starting with the note "to the right of slice index 1"
Step14: Making many notes
We can use anything in Python -- from built-in libraries to any of its thousands of external libraries -- to make notes, rests and chords. Python's list comprehensions allow us to describe collections of objects easily. The following list comprehension means "a list of eighth notes with pitch values from 0 through 24"
Step15: We can use external libraries, too. Here we generate a hundred random numbers and use them as pitch values. Import Python's built-in random module if you haven't already
Step16: That's a lot of numbers. Let's just use a few of them. We can use list slicing syntax to take the portion of our list. The slice [
Step17: Then we can turn numbers into notes with a list comprehension once again
Step18: Quartertones are possible, too | Python Code:
import abjad
note = abjad.Note("d''2.")
abjad.show(note)
Explanation: An object-oriented, containerized model of music notation
Abjad extends the Python programming language with an object-oriented, containerized model of common practice music notation. Let's explore the notes, rests and chords that make up the simplest part of Abjad's object model.
Notes
First we'll make a note. It will be a D5 with the duration of a dotted half:
End of explanation
note.written_pitch
note.written_duration
Explanation: Now we can inspect different attributes of our note.
Notice how a dot chains together our note and its attributes:
End of explanation
note.written_pitch = "cs''"
abjad.show(note)
Explanation: We can change our note's pitch like this:
End of explanation
note.written_duration = (1, 2)
abjad.show(note)
Explanation: We can do the same with our note's duration:
End of explanation
rest = abjad.Rest()
abjad.show(rest)
Explanation: Abjad 3.2 implements 311 classes, all documented in the Abjad API.
To look up abjad.Note in the API, open the API.
Then use your browser's search function to look up "Note."
This will bring you to the reference page for abjad.Note.
Rest and chords
We can make rests and chords the same way we made our note:
End of explanation
rest = abjad.Rest("r8..")
abjad.show(rest)
Explanation: We get a default value of a quarter rest.
We can pass in a different duration as a string:
End of explanation
rest = abjad.Rest((7, 32))
abjad.show(rest)
Explanation: Or as a pair of numbers equal to a fraction:
End of explanation
chord = abjad.Chord("<c' e' g'>8.")
abjad.show(chord)
Explanation: Chords work similarly:
End of explanation
skip = abjad.Skip("s8.")
skip
Explanation: LilyPond skips (invisible rests) are also available:
End of explanation
rest.written_duration
chord.written_duration
skip.written_duration
Explanation: And remember, we can get back to the attributes of our leaves using dot chaining syntax.
End of explanation
staff_1 = abjad.Staff("c' d' e' f'")
staff_2 = abjad.Staff("f' e' d' c'")
group = abjad.StaffGroup([staff_1, staff_2])
score = abjad.Score([group])
abjad.show(score)
Explanation: A containerized model of music notation
Abjad implements a containerized model of music notation:
https://abjad.github.io/core_concepts/containerized_model.html
In summary: notes, rests and chords are contained in tuplets, voices, staves and scores; containers may contain each other; and the other elements of music notation are modeled as indicators that attach to notes, rests and chords.
Let's explore.
We'll start by looking at the way notes can be contained in staves, which themselves can be contained in a staff group, itself contained in a score:
End of explanation
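Containers can also be nested inside one another; a small sketch (the variable names and pitches here are arbitrary) placing a voice inside a staff:
voice = abjad.Voice("g'8 a'8 b'8 c''8")
staff_with_voice = abjad.Staff([voice])
abjad.show(staff_with_voice)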
start_slur = abjad.StartPhrasingSlur()
stop_slur = abjad.StopPhrasingSlur()
abjad.attach(start_slur, staff_1[0])
abjad.attach(stop_slur, staff_1[3])
abjad.show(score)
Explanation: All other elements of notation are modeled as indicators. We create phrasing slurs, for example, by attaching an abjad.StartPhrasingSlur to one note and a matching abjad.StopPhrasingSlur to another note. Both classes are indicators:
End of explanation
staccato = abjad.Articulation("staccato")
abjad.attach(staccato, staff_2[1])
abjad.show(score)
Explanation: Some indicators are really simple, like abjad.Articulation. Here we use indexing to attach a staccato to the note "at index 1" in staff two. Because Python's indexing starts at 0, it is the second note in staff two that gets the staccato:
End of explanation
for note in staff_1[1:]:
staccato = abjad.Articulation("staccato")
abjad.attach(staccato, note)
abjad.show(score)
Explanation: It might be the case that we want to apply the same indicator to a consecutive series of leaves. Note that we can use iteration to do the same thing to multiple components sequentially. Here we use slice notation to attach a staccato to every leaf in staff one, starting with the note "to the right of slice index 1":
End of explanation
notes = [abjad.Note(x, (1, 8)) for x in range(24 + 1)]
staff = abjad.Staff(notes)
abjad.show(staff)
Explanation: Making many notes
We can use anything in Python -- from built-in libraries to any of its thousands of external libraries -- to make notes, rests and chords. Python's list comprehensions allow us to describe collections of objects easily. The following list comprehension means "a list of eighth notes with pitch values from 0 through 24":
End of explanation
import random
numbers = [random.randrange(0, 25) for x in range(100)]
Explanation: We can use external libraries, too. Here we generate a hundred random numbers and use them as pitch values. Import Python's built-in random module if you haven't already:
End of explanation
numbers = numbers[:10]
numbers
Explanation: That's a lot of numbers. Let's just use a few of them. We can use list slicing syntax to take the portion of our list. The slice [:10] means "from the beginning up to slice index 10":
End of explanation
notes = [abjad.Note(x, (1, 8)) for x in numbers]
staff = abjad.Staff(notes)
abjad.show(staff)
Explanation: Then we can turn numbers into notes with a list comprehension once again:
End of explanation
half_numbers = [x / 2.0 for x in numbers]
half_numbers
notes = [abjad.Note(x, (1, 8)) for x in half_numbers]
staff = abjad.Staff(notes)
abjad.show(staff)
Explanation: Quartertones are possible, too:
End of explanation |
13,538 | Given the following text description, write Python code to implement the functionality described.
Description:
Compress a Binary Tree from top to bottom with overlapping condition
Structure of a node of th tree ; Function to compress all the nodes on the same vertical line ; Stores node by compressing all nodes on the current vertical line ; Check if i - th bit of current bit set or not ; Iterate over the range [ 0 , 31 ] ; Stores count of set bits at i - th positions ; Stores count of clear bits at i - th positions ; Traverse the array ; If i - th bit of current element is set ; Update S ; Update NS ; If count of set bits at i - th position is greater than count of clear bits ; Update ans ; Update getBit ; Function to compress all the nodes on the same vertical line with a single node that satisfies the condition ; Map all the nodes on the same vertical line ; Function to traverse the tree and map all the nodes of same vertical line to vertical distance ; Storing the values in the map ; Recursive calls on left and right subtree ; Getting the range of horizontal distances ; Driver Code ; Function Call
| Python Code:
class TreeNode :
def __init__(self , val = ' ' , left = None , right = None ) :
self . val = val
self . left = left
self . right = right
def evalComp(arr ) :
ans = 0
getBit = 1
for i in range(32 ) :
S = 0
NS = 0
for j in arr :
if getBit & j :
S += 1
else :
NS += 1
if S > NS :
ans += 2 ** i
getBit <<= 1
    print(ans, end=" ")
def compressTree(root ) :
mp = { }
def Trav(root , hd ) :
if not root :
return
if hd not in mp :
mp[hd ] =[root . val ]
else :
mp[hd ] . append(root . val )
Trav(root . left , hd - 1 )
Trav(root . right , hd + 1 )
Trav(root , 0 )
lower = min(mp . keys() )
upper = max(mp . keys() )
for i in range(lower , upper + 1 ) :
evalComp(mp[i ] )
if __name__ == '__main__' :
root = TreeNode(5 )
root . left = TreeNode(3 )
root . right = TreeNode(2 )
root . left . left = TreeNode(1 )
root . left . right = TreeNode(4 )
root . right . left = TreeNode(1 )
root . right . right = TreeNode(2 )
compressTree(root )
|
13,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Business Feasibility Overview
The purpose of this notebook is to analyze the feasibility of a business based on its intrinsic probabilities of loss/gain and return on investment in the cases of loss/gain.
This type of analysis refers to a very specific type of bussiness in which you have defined iterations. As far as we can think in a first approach there are 2 types of bussinessess
Step1: Define Common Parameters
Step2: Question 1.
Starting with a principal P, after N iterations, what is the probability to see that capital become O for each possible O that is allowed by the binomial process.
Define the functions that will evolve the principal capital P a Binomial process.
Step3: Run the simulation using the Binomial process which is equivalent to performing a very large (~1000's) Bernoulli processes and grouping their results. Since the order in which 1's and 0's occur in the sequence does not affect the final result.
Step4: Question 2.
Plot the time evolution of the principal P through the Binomial process. Where a more intense color means a higher probability and a less intense color means a lower probability.
Step5: The previous plot shows the evolution of the capital throughout the Binomial process, alongside we show the mean and the most probable value of the possible outcomes. As one increases the number of iterations the mean surpassess the most probable value for good while maintaining a very close gap.
Question 4.
We want to see how likely it is to have a capital decline of "X" percent over the next "n" iterations.
The plot we want is obtained by selecting a subset of the evolution curve. The subset of the values correspond to those where the multiplying factors are less than 1. After such values are selected one applies the transformation
Step6: Question 5.
Obtain the probability of bankrupcty after N iterations, bankruptcy is defined for the purposes of this notebook as the event in which the principal perceives a capital decline bigger than or equal to X percent | Python Code:
# Numpy
import numpy as np
# Scipy
from scipy import stats
from scipy import linspace
# Plotly
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
init_notebook_mode(connected=True) # Offline plotting
Explanation: Business Feasibility Overview
The purpose of this notebook is to analyze the feasibility of a business based on its intrinsic probabilities of loss/gain and return on investment in the cases of loss/gain.
This type of analysis refers to a very specific type of business in which you have defined iterations. As far as we can think in a first approach there are 2 types of businesses:
One starts with a principal P, the business has a defined maturity time T, and at the end of such maturity time the capital becomes O, in which, O = P + G, where G corresponds to the gain which can be positive or negative, each possible value of the range of G has a certain specific probability.
One starts with a principal P, which is composed of a "sunken capital" S and a "working capital" W bussiness should in principle go on forever, however if bussiness does not adapt correctly to market conditions it will have an expiration date, which usually occurs, be it 100 years or 10 years, there is also a probability of initial kickstart success or failure Pk, this type of bussiness gives periodically a profit or loss G in periods of time T which are smaller than the expiration date, which is uncertain. The sunken part of the principal S devaluates (due to devaluation of assets) or valuates in time (due to brand awareness). With regard to the expiration date it is uncertain but one could assume a range in which it could take values with increasing probability of expiration as the time increases, asymptotically reaching 1 (this is the assumption that no bussiness lives forever, think universe imploding).
The questions to solve in this Notebook refer to the first type of business.
Questions to solve:
Given the parameters of the business, namely:
The return on investment when a gain event occurs ROI_G.
The return on investment when a loss event occurs ROI_L.
The probability that a gain event occurs P_G.
Where we have made simplifying assumptions given that the ROI_G, ROI_L are continuous variable P_G(ROI_G) is actually a single continuous real function. Also, we have made the simplifying assumption that the madurity time T is always the same. Which is also not absolutely true.
Starting with a principal P, after N iterations, what is the probability to see that capital become O for each possible O that is allowed by the binomial process.
On would also like to see how the capital P evolves through the Bernoulli process. However since at iteration N regardless of the specific Bernoulli process what matters is where this process falls in the Binomial distribution. Each Bernoulli process has equal probability of ocurring as another which has the same amount of YES/NO Bernoulli trials in it. A graph of different timelines for each possible Bernoulli trial would be inadequate at best. Instead it would be interesting to see how the probability spreads out over the possible range of values of the Binomial process once the number of iterations increases. One would require a color plot. (Something similar to a Choropleth). This would be the time evolution of the projection to the x axis of the figure obtained in question 1.
Obtain a single parameter that indicates whether a business is feasible in this sense or not. The definition of feasibility to use is to have X percent of the mass of the pmf above a certain ROI after n iterations. e.g. having 80% of the mass of the pmf above a factor of 2 or 200% ROI (profit) after 10 iterations. i.e. to have a 80% probability of earning a 200% profit after 10 iterations. According to this criteria one would determine if a business is feasible or not. To define it after n=1 iterations would just result in the original parameters. This is a special case in which the answer of the questions is simplified and does not require numerical computations.
Get probability of seeing a capital decline of X percent over the next n iterations. It does not matter the nominal value of capital you start at. Produce a plot where each curve represents the decline probability vs iterations for each cutoff percentage.
Based on the results of question 4 obtain the probability of bankruptcy in n iterations. The probability of bankruptcy should be defined as seeing the capital decline over X percent i.e. it would be the probability attained by performing a sum over all curves that see a capital decline bigger than the cutoff value.
Import Modules
End of explanation
# Probabilities
P_G = 0.8
# Return on investment rates
ROI_G = 1
ROI_L = -0.2
# Principal (initial capital)
P = 1
Explanation: Define Common Parameters
End of explanation
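As a quick sanity check on these parameters (not one of the numbered questions), the expected capital multiplier for a single iteration is P_G*(1 + ROI_G) + (1 - P_G)*(1 + ROI_L); a value above 1 means the capital grows on average:
# Expected multiplier per iteration, from the parameters defined above
expected_multiplier = P_G * (1 + ROI_G) + (1 - P_G) * (1 + ROI_L)
print(expected_multiplier)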
# Takes the principal P and performs the evolution of the capital using
# the result x of the random binomial variable after n trials
def evolve_with_binomial(P, x, n):
return P * ((1 + ROI_G) ** x) * ((1 + ROI_L) ** (n - x))
Explanation: Question 1.
Starting with a principal P, after N iterations, what is the probability to see that capital become O for each possible O that is allowed by the binomial process.
Define the functions that will evolve the principal capital P through a Binomial process.
End of explanation
# Number of iterations
years = 5
iterations_per_year = 2
n = iterations_per_year * (years)
# Sorted array of unique values occurring in instance of Binomial process
x_binomial = linspace(0,n,n+1)
# Arrays of data to plot
data_dict = { 'x': [], 'y': []}
data_dict['x'] = [evolve_with_binomial(P, x, max(x_binomial)) for x in x_binomial]
data_dict['y'] = stats.binom.pmf(x_binomial,max(x_binomial),P_G)
# Plot data variable. It contains the trace objects
fig_data = [
go.Bar(
x=data_dict['x'],
y=data_dict['y'],
name="Probabilities"
),
go.Scatter(
x=data_dict['x'],
y=data_dict['y'],
mode='lines+markers',
name="Fitting",
line=dict(
shape='spline'
)
)
]
# Set layout for figure
layout = go.Layout(
title='Binomial Distribution of Capital at N Iterations',
font=dict(
family='Arial, sans-serif;',
size=12,
color='#000'
),
xaxis = dict(title='Capital Multiplier'),
yaxis = dict(title='Event Probability'),
orientation=0,
autosize=True,
annotations=[
dict(
x=max(data_dict['x'])/2,
y=max(data_dict['y']),
text='N: {0} | P_G: {1}'.format(n, P_G),
showarrow=False
)
]
)
# Plot figure
#iplot({"data": fig_data, "layout": layout})
Explanation: Run the simulation using the Binomial process which is equivalent to performing a very large (~1000's) Bernoulli processes and grouping their results. Since the order in which 1's and 0's occur in the sequence does not affect the final result.
End of explanation
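The same distribution also answers Question 3; a minimal sketch of the feasibility check, where the 80% mass and the factor-of-2 target are just the example numbers used earlier:
target_multiplier = 2.0   # 200% ROI target (example value)
required_mass = 0.8       # 80% of the pmf must sit at or above the target (example value)
mass_above = sum(p for o, p in zip(data_dict['x'], data_dict['y']) if o >= target_multiplier)
print(f'P(multiplier >= {target_multiplier}) after {n} iterations: {mass_above:.3f}')
print('Feasible' if mass_above >= required_mass else 'Not feasible')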
# Number of iterations
years = 5
iterations_per_year = 2
n = iterations_per_year * (years)
# Arrays of data to plot
data_dict = { 'values': [], 'probs': np.array([]), 'iterations': [], 'mean': [], 'most_prob': [], 'uniq_iterations': []}
# For each iteration less than the maximun number of iterations
i = 1
while i <= n:
x_i = linspace(0,i,i+1) # Possible values of success event in "i" trials
values = [evolve_with_binomial(P, x, max(x_i)) for x in x_i] # Capital evolution according to Binomial process
probs = stats.binom.pmf(x_i,max(x_i),P_G) # Probabilities of Binomial process
# Set values in dictionary
data_dict['values'] = data_dict['values'] + values
data_dict['mean'].append(np.mean(values))
data_dict['most_prob'].append(values[np.argmax(probs)])
data_dict['uniq_iterations'].append(i)
data_dict['probs'] = np.concatenate((data_dict['probs'], probs), axis=0)
data_dict['iterations'] = data_dict['iterations'] + [i]*len(x_i)
i += 1
# Plot data variable. It contains the trace objects
fig_data = [
go.Scatter(
x=data_dict['iterations'],
y=data_dict['values'],
mode='markers',
name="Evolution",
marker=dict(
cmin = 0,
cmax = 1,
color = data_dict['probs'],
size = 16
)
),
go.Scatter(
x=data_dict['uniq_iterations'],
y=data_dict['mean'],
mode='lines+markers',
name="Mean",
line=dict(
shape='spline'
)
),
go.Scatter(
x=data_dict['uniq_iterations'],
y=data_dict['most_prob'],
mode='lines+markers',
name="Most Probable",
line=dict(
shape='spline'
)
)
]
# Set layout for figure
layout = go.Layout(
title='Evolution of Capital Through Binomial Process',
font=dict(
family='Arial, sans-serif;',
size=12,
color='#000'
),
xaxis = dict(title='Iteration Number'),
yaxis = dict(title='Capital Multiplier'),
orientation=0,
autosize=True,
annotations=[
dict(
x=n/2,
y=max(data_dict['values']),
text='P_G: {0}'.format(P_G),
showarrow=False
)
]
)
# Plot figure
#iplot({"data": fig_data, "layout": layout})
Explanation: Question 2.
Plot the time evolution of the principal P through the Binomial process, where a more intense color means a higher probability and a less intense color means a lower probability.
End of explanation
# Calculate the possible capital declines and their respective probabilities
data_dict["decline_values"] = []
data_dict["decline_probs"] = []
data_dict["decline_iterations"] = []
for index, val in enumerate(data_dict["values"]):
if val < 1:
data_dict["decline_values"].append((1-val)*100)
data_dict["decline_probs"].append(100*data_dict["probs"][index])
data_dict["decline_iterations"].append(data_dict["iterations"][index])
# Plot data variable. It contains the trace objects
fig_data = [
go.Scatter(
x=data_dict['decline_iterations'],
y=data_dict['decline_values'],
mode='markers',
name="Evolution",
marker=dict(
cmin = 0,
cmax = 1,
color = data_dict['decline_probs']
)
)
]
fig_data[0].text = ["Probability: {0:.2f}%".format(prob) for prob in data_dict["decline_probs"]]
# Set layout for figure
layout = go.Layout(
title='Possible Capital Decline Through Binomial Process',
font=dict(
family='Arial, sans-serif;',
size=12,
color='#000'
),
xaxis = dict(title='Iteration Number'),
yaxis = dict(title='Percentage Decline [%]'),
orientation=0,
autosize=True,
annotations=[
dict(
x=max(data_dict["decline_iterations"])/2,
y=max(data_dict['decline_values']),
text='P_G: {0}'.format(P_G),
showarrow=False
)
]
)
# Plot figure
#iplot({"data": fig_data, "layout": layout})
Explanation: The previous plot shows the evolution of the capital throughout the Binomial process; alongside it we show the mean and the most probable value of the possible outcomes. As one increases the number of iterations, the mean surpasses the most probable value for good while maintaining a very close gap.
Question 4.
We want to see how likely it is to have a capital decline of "X" percent over the next "n" iterations.
The plot we want is obtained by selecting a subset of the evolution curve. The subset of the values corresponds to those where the multiplying factors are less than 1. After such values are selected one applies the transformation:
$$ y = 1-x$$
In this new scale the y value represents the capital decline.
End of explanation
# Capital percentage decline of bankruptcy
CP_br = 20
# Variable to store the plot data
data_dict["bankruptcy_probs"] = []
data_dict["bankruptcy_iterations"] = []
# Calculate for each iteration the probability of bankruptcy
iter_counter = 0
for i, iteration in enumerate(data_dict["decline_iterations"]):
if data_dict["decline_values"][i] >= CP_br:
if iteration > iter_counter:
data_dict["bankruptcy_probs"].append(data_dict["decline_probs"][i])
data_dict["bankruptcy_iterations"].append(iteration)
else:
data_dict["bankruptcy_probs"][-1] = data_dict["bankruptcy_probs"][-1] + data_dict["decline_probs"][i]
iter_counter = iteration
# Plot data variable. It contains the trace objects
fig_data = [
go.Scatter(
x=data_dict['bankruptcy_iterations'],
y=data_dict['bankruptcy_probs'],
mode='lines+markers',
name="Mean",
line=dict(
shape='spline'
)
)
]
# Set layout for figure
layout = go.Layout(
title='Probability of Bankruptcy Through Binomial Process',
font=dict(
family='Arial, sans-serif;',
size=12,
color='#000'
),
xaxis = dict(title='Iteration Number'),
yaxis = dict(title='Event Probability [%]'),
orientation=0,
autosize=True,
annotations=[
dict(
x=max(data_dict['bankruptcy_iterations'])/2,
y=max(data_dict['bankruptcy_probs']),
text='P_G: {0} | CP_br: {1}%'.format(P_G, CP_br),
showarrow=False
)
]
)
# Plot figure
#iplot({"data": fig_data, "layout": layout})
Explanation: Question 5.
Obtain the probability of bankruptcy after N iterations; bankruptcy is defined for the purposes of this notebook as the event in which the principal suffers a capital decline bigger than or equal to X percent
End of explanation |
13,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
"Spatial Clustering" - the Galaxy Correlation Function
The degree to which objects positions are correlated with each other - "clustered" - is of great interest in astronomy.
We expect galaxies to appear in groups and clusters, as they fall together under gravity
Step1: The Correlation Function
The 2-point correlation function $\xi(\theta)$ is defined as "the probability of finding two galaxies separated by an angular distance $\theta$ with respect to that expected for a random distribution" (Peebles 1980), and is an excellent summary statistic for quantifying the clustering of galaxies.
The simplest possible estimator for this excess probability is just
$\hat{\xi}(\theta) = \frac{DD - RR}{RR}$,
where $DD(\theta) = N_{\rm pairs}(\theta) / N_D(N_D-1)/2$. Here, $N_D$ is the total number of galaxies in the dataset, and $N_{\rm pairs}(\theta)$ is the number of galaxy pairs with separation lying in a bin centered on $\theta$. $RR(\theta)$ is the same quantity computed in a "random catalog," covering the same field of view but with uniformly randomly distributed positions.
Correlations between mock galaxies distributed uniformly randomly over the survey "footprint" helps account for spurious effects in the correlation function that might arise from weird survey area design.
We'll use Mike Jarvis' TreeCorr code (Jarvis et al 2004) to compute this correlation function estimator efficiently. You can read more about better estimators starting from the TreeCorr wiki.
Step2: Random Catalogs
First we'll need a random catalog. Let's make it the same size as the data one.
While this may not be needed for the small field in this example, let's generate random points that are uniformly distributed on a patch of the sphere.
Step3: Now let's plot both catalogs, and compare.
Step4: Estimating $\xi(\theta)$ | Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import print_function
import numpy as np
import SDSS
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import copy
# We want to select galaxies, and then are only interested in their positions on the sky.
data = pd.read_csv("downloads/SDSSobjects.csv",usecols=['ra','dec','u','g',\
'r','i','size'])
# Filter out objects with bad magnitude or size measurements:
data = data[(data['u'] > 0) & (data['g'] > 0) & (data['r'] > 0) & (data['i'] > 0) & (data['size'] > 0)]
# Make size cuts, to exclude stars and nearby galaxies, and magnitude cuts, to get good galaxy detections:
data = data[(data['size'] > 0.8) & (data['size'] < 10.0) & (data['i'] > 17) & (data['i'] < 22)]
# Drop the things we're not so interested in:
del data['u'], data['g'], data['r'], data['i'],data['size']
data.head()
Ngals = len(data)
ramin,ramax = np.min(data['ra']),np.max(data['ra'])
decmin,decmax = np.min(data['dec']),np.max(data['dec'])
print (Ngals,"galaxy-like objects in (ra,dec) range (",ramin,":",ramax,",",decmin,":",decmax,")")
Explanation: "Spatial Clustering" - the Galaxy Correlation Function
The degree to which objects' positions are correlated with each other - "clustered" - is of great interest in astronomy.
We expect galaxies to appear in groups and clusters, as they fall together under gravity: the statistics of galaxy clustering should contain information about galaxy evolution during hierarchical structure formation.
Let's try and measure a clustering signal in our SDSS photometric object catalog.
End of explanation
# !pip install --upgrade TreeCorr
Explanation: The Correlation Function
The 2-point correlation function $\xi(\theta)$ is defined as "the probability of finding two galaxies separated by an angular distance $\theta$ with respect to that expected for a random distribution" (Peebles 1980), and is an excellent summary statistic for quantifying the clustering of galaxies.
The simplest possible estimator for this excess probability is just
$\hat{\xi}(\theta) = \frac{DD - RR}{RR}$,
where $DD(\theta) = N_{\rm pairs}(\theta) / N_D(N_D-1)/2$. Here, $N_D$ is the total number of galaxies in the dataset, and $N_{\rm pairs}(\theta)$ is the number of galaxy pairs with separation lying in a bin centered on $\theta$. $RR(\theta)$ is the same quantity computed in a "random catalog," covering the same field of view but with uniformly randomly distributed positions.
Correlations between mock galaxies distributed uniformly randomly over the survey "footprint" helps account for spurious effects in the correlation function that might arise from weird survey area design.
We'll use Mike Jarvis' TreeCorr code (Jarvis et al 2004) to compute this correlation function estimator efficiently. You can read more about better estimators starting from the TreeCorr wiki.
End of explanation
random = pd.DataFrame({'ra' : ramin + (ramax-ramin)*np.random.rand(Ngals), 'dec' : (180./np.pi)*np.arcsin(np.random.uniform(np.sin(decmin*np.pi/180.0), np.sin(decmax*np.pi/180.),Ngals))})
print (len(random), type(random))
Explanation: Random Catalogs
First we'll need a random catalog. Let's make it the same size as the data one.
While this may not be needed for the small field in this example, let's generate random points that are uniformly distributed on a patch of the sphere.
End of explanation
fig, ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
random.plot(kind='scatter', x='ra', y='dec', ax=ax[0], title='Random')
ax[0].set_xlabel('RA / deg')
ax[0].set_ylabel('Dec. / deg')
data.plot(kind='scatter', x='ra', y='dec', ax=ax[1], title='Data')
ax[1].set_xlabel('RA / deg')
ax[1].set_ylabel('Dec. / deg')
Explanation: Now let's plot both catalogs, and compare.
End of explanation
import treecorr
random_cat = treecorr.Catalog(ra=random['ra'], dec=random['dec'], ra_units='deg', dec_units='deg')
data_cat = treecorr.Catalog(ra=data['ra'], dec=data['dec'], ra_units='deg', dec_units='deg')
# Set up some correlation function estimator objects:
sep_units='arcmin'
min_sep=0.5
max_sep=10.0
N = 7
bin_size = np.log10(1.0*max_sep/min_sep)/(1.0*N)
dd = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)
rr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)
# Process the data:
dd.process(data_cat)
rr.process(random_cat)
# Combine into a correlation function and its variance:
xi, varxi = dd.calculateXi(rr)
plt.figure(figsize=(15,8))
plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
plt.errorbar(np.exp(dd.logr),xi,np.sqrt(varxi),c='blue',linewidth=2)
# plt.xscale('log')
plt.xlabel('$\\theta / {\\rm arcmin}$',fontsize=20)
plt.ylabel('$\\xi(\\theta)$',fontsize=20)
plt.ylim([-0.1,0.2])
plt.grid(True)
Explanation: Estimating $\xi(\theta)$
End of explanation |
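For completeness, a sketch of the data-random cross term needed by the Landy-Szalay estimator mentioned earlier; keyword arguments are used because the calculateXi signature varies slightly between TreeCorr versions:
# Cross-correlate the data and random catalogs, then combine DD, RR and DR
dr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep,
                            sep_units=sep_units, bin_slop=0.05/bin_size)
dr.process(data_cat, random_cat)
xi_ls, varxi_ls = dd.calculateXi(rr=rr, dr=dr)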
13,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The rise of Newsletter Spam
Step1: Connect to the Gmail API
To get our emails, we will use the Gmail API. To to this we first need to enable the Gmail api and download a credential file. In this case I have stored the credential file next to the jupyter notebook. Follow steps 1 and 2 on this page to enable the API and get the credential file.
First we need to connect to the gmail api with the credentials file and build a "Resource" object
Step2: This object called service has a set of functions to connect to the API service. The following lists all the labels in my inbox, and is a good test to see if our connection works
Step3: Loading our data
To download all our emails we can use the get function. This function needs an email id as as input. To get these IDs we need the users.messages.list method. The following function returns a list with all email id's belonging to a specific label (e.g. Inbox, Spam, Sent, etc)
Step4: For the purpose of this post we are just interested in messages in my inbox
Step5: Single events can be retrieved using the get function which returns a dictionary
Step6: To parse the output from the mentioned get function, I've created a small "Email" class which takes the email dictionary as the input of its constructor, and has the parts of the email we are interested in as its attributes. I've added type hints in case this is ever taken out of this notebook and put into a module.
Step8: Now we can fetch a list of emails and convert this to a DataFrame. As we don't want to send and get a new http request for each email we will use the BatchHttpRequest object. This object allows us to bundle multiple http requests and a callback function that handles the result of the individual requests. The gmail API is rate limited at 250 requests per second - so we will have to create batches of 250 or less requests, and wait one second after each batch request is executed.
Step9: Exploring our data
Let's have a look at our newly created DataFrame
Step10: We can now use this DataFrame to dig through our emails. For example calculate the total size of our inbox in gigabytes
Step11: Or find out the biggest email in our inbox - in this case a 34MB email with pictures from a Rafting trip in 2011. Fun times!
Step12: This is an obvious outlier - most emails are much smaller
Step13: Now lets see who our most frequent senders are. First we want to clean up the adresses a bit and strip out only the actuall email adress
Step14: To see who the top senders are we can group by this new column and calculate the number of emails recieved from this person (count) and the total size of all emails send by this person (sum of size_mb)
Step15: I've anonymised most of the senders for obvious reasons. I'm glad too see there are some friends and family members in the top 10, and it's not only newsletters. My number 1 spammer
Step16: Emails over time
Lets calculate the amount of emails received per week. First we need to change the index of the DataFrame to the date column
Step17: Now we need to resample the DataFrame in weekly periods, and count the number of emails per week. To get a nice and smooth line we will use a rolling average of these counts. To calculate the means we use a gaussian kernel (the function that is used to take the average of the neighboring points).
Step18: Now plot this moving average. I'm using the Object Oriented interface of Matplotlib which gives you much more flexibility and and ease of use in the long run.
Step19: Very cool! For the reader this might be "just" a graph, which is why I recommend to clone this notebook and run it on your own data. For me I see a clear period when I was in university, a period when I was not in university and using another email address, a period when I was basically using gmail as a substitute for what is now WhatsApp, and the rise of newsletter spam.
Step20: Our columns now are a MultiIndex with three levels, the first two having just a single value each ('sender_norm', and 'count'). We can remove these to get a cleaner looking plot
Step21: Let's plot it!
Step22: To be honest, it looks like someone threw a plate of spaghetti on my screen... Let's put it in something a little bit more readable, such as a heatmap. We can do this with the excellent Seaborn heatmap function. We can use this on our DataFrame directly, or transpose our DataFrame to get one were the senders are on the Y-axis and dates on the X-axis
Step24: Now create the heatmap
Step25: This looks a bit nicer, its easy to see when the hotspots were for certain senders and how they correlate. The downside of this plot is that the absolute numbers are harder to read.
Newsletter spam
Now back to the original subject - how much of the email in my inbox comes from maillinglists, or "newsletter spam"? Since 2010 Google started adding labels to each email which do a pretty good job at classifying the different types of email. The different categories an email can have are "forums", "personal", "social", "promotions", and "updates". Out of these 5 options, the only category I would not consider spam are the personal emails.
Building a DataFrame
First I delete the ns-vertragings mail. It's not fair to call this newsletter spam, as it's basically something I send myself using a cron-job. It's also being mislabeled by gmail a lot.
Step26: Each email can have multiple labels. We need to "explode" these into a new dataframe with one row for each label
Step27: As you can see, the labels are now shown in a "long" format, with multiple labels per email
Step28: On this data we'll do the same as we did before on all emails
Step29: I think its clear that the number of updates, promoties and social media updates are growing fast. The actual numbers are hard to gauge as I've been consistently inconsistent in clearing my inbox of these type of emails.
Deleting events
I'll leave the actual deletion as an exercise for the reader. You can batch delete messages with | Python Code:
import httplib2
import os
import base64
import numpy as np
import pandas as pd
import datetime
import logging
import time
import matplotlib.pyplot as plt
import seaborn as sns
from typing import Union, Any, List, Optional, cast
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
%matplotlib inline
sns.set()
Explanation: The rise of Newsletter Spam: A journey through my Gmail inbox
In the beginning there was spam. Cheap, unpersonalised, mass-send junk mail, easily defeated by simple Bayesian Filters. Over the years spammers improved and an arms race between spammers and spam filters was started. Spam was to me never more then a minor nuisance, and when gmail was launched, and all of google's might was put into the race spam in your inbox became virtually extinct. Now I don't even remember what an email for "ch3aP V1agrA" email looks like.
Does this mean my inbox is empty? No. In fact I feel I receive more unwanted mail then ever. With the internet being more and more intertwined in our lives, we drop our email addresses with more and more companies, whom in turn have started sending "promotions", and "updates" more and more frequent. Even though they usually contain an "unsubscribe" option which I sometimes spend some time clicking though, these mailing lists have become a bigger source of irritation then spam ever was.
This jupyter notebook started out as a way to regularly delete all "Newsletter spam" from my inbox. It turned out however, to be a lot more fun to dig through my gmail inbox, which is what this post is mostly about. I would recommend everyone reading this to clone this notebook and start the same journey on your own inbox. Viewing stats on my inbox is not that interesting, viewing the same stats on your own inbox? A completely different story. It also gives you a sense on how big mailing list spam has become. Although the Gmail API has a delete option - it went against my Data Scientist instinct to actually delete anything.
End of explanation
SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server()
service = build('gmail', 'v1', credentials=creds)
Explanation: Connect to the Gmail API
To get our emails, we will use the Gmail API. To to this we first need to enable the Gmail api and download a credential file. In this case I have stored the credential file next to the jupyter notebook. Follow steps 1 and 2 on this page to enable the API and get the credential file.
First we need to connect to the gmail api with the credentials file and build a "Resource" object:
End of explanation
labels = service.users().labels().list(userId='me').execute()
[label['name'] for label in labels['labels']]
Explanation: This object called service has a set of functions to connect to the API service. The following lists all the labels in my inbox, and is a good test to see if our connection works
End of explanation
def list_messages_with_labels(service, user_id, label_ids=[]):
response = service.users().messages().list(userId=user_id,
labelIds=label_ids).execute()
messages = list()
if 'messages' in response:
messages.extend(response['messages'])
while 'nextPageToken' in response:
page_token = response['nextPageToken']
response = service.users().messages().list(userId=user_id,
labelIds=label_ids,
pageToken=page_token).execute()
messages.extend(response['messages'])
return messages
Explanation: Loading our data
To download all our emails we can use the get function. This function needs an email id as as input. To get these IDs we need the users.messages.list method. The following function returns a list with all email id's belonging to a specific label (e.g. Inbox, Spam, Sent, etc):
End of explanation
all_email_ids = list_messages_with_labels(service, 'me', 'INBOX')
print(f'I have {format(len(all_email_ids), ",d")} messages in my inbox')
Explanation: For the purpose of this post we are just interested in messages in my inbox:
End of explanation
event = service.users().messages().get(userId='me', id='168480e9f32d4068').execute()
Explanation: Single events can be retrieved using the get function which returns a dictionary:
End of explanation
class Email(object):
def __init__(self, email: dict):
self._logger = logging.getLogger('Email')
self.id: str = email['id']
self.label_ids: List[str] = email.get('labelIds', None)
self.date: datetime.datetime = datetime.datetime.fromtimestamp(int(email['internalDate'])/1000)
self.size: int = email['sizeEstimate']
self.sender: str = None
self.to: str = None
self.subject: str = None
if 'headers' in email['payload']:
self._parse_headers(email)
else:
self._logger.warning(f'Headers not found for email with id: {self.id}')
self.__dict__ = self._as_dict()
def _parse_headers(self, email: dict):
headers = email['payload']['headers']
for header in headers:
if header['name'] == 'From':
self.sender = header['value']
elif header['name'] == 'To':
self.to = header['value']
elif header['name'] == 'Subject':
self.subject = header['value']
def _as_dict(self):
return {k: v for k, v in self.__dict__.items() if not k.startswith('_')}
Explanation: To parse the output from the mentioned get function, I've created a small "Email" class which takes the email dictionary as the input of its constructor, and has the parts of the email we are interested in as its attributes. I've added type hints in case this is ever taken out of this notebook and put into a module.
End of explanation
BATCH_SIZE = 200 # Maximum number of requests per second
emails = list() # List of Dictionaries with the emails - will be used as input for our DataFrame
def add_emails(request_id, response, exception):
    """Callback function that handles the result of each request"""
if exception is not None:
# Do something with the exception
raise ValueError(exception)
else:
# Convert the email to a dictionary using our Email class
emails.append(vars(Email(response)))
batch = service.new_batch_http_request()
for i, msg_id in enumerate(all_email_ids):
batch.add(service.users().messages().get(userId = 'me', id = msg_id['id']), callback=add_emails)
if i % BATCH_SIZE == 0:
batch.execute()
batch = service.new_batch_http_request()
print(f'{i} out of {len(all_email_ids)} done')
time.sleep(2)
# Create a DataFrame from our list of emails
all_emails = pd.DataFrame(emails)
Explanation: Now we can fetch a list of emails and convert this to a DataFrame. As we don't want to send and get a new http request for each email we will use the BatchHttpRequest object. This object allows us to bundle multiple http requests and a callback function that handles the result of the individual requests. The gmail API is rate limited at 250 requests per second - so we will have to create batches of 250 or less requests, and wait one second after each batch request is executed.
End of explanation
all_emails.head()
Explanation: Exploring our data
Let's have a look at our newly created DataFrame:
End of explanation
all_emails['size'].sum() / 1024 ** 3
Explanation: We can now use this DataFrame to dig through our emails. For example calculate the total size of our inbox in gigabytes:
End of explanation
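Another quick example of digging through the DataFrame: the volume of mail received per year, in gigabytes (the date column is still an ordinary column at this point, so we can group on its year):
all_emails.groupby(all_emails['date'].dt.year)['size'].sum() / 1024 ** 3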
all_emails[all_emails['size'] == max(all_emails['size'])]
Explanation: Or find out the biggest email in our inbox - in this case a 34MB email with pictures from a Rafting trip in 2011. Fun times!
End of explanation
# Add a column with sizes in Mb - which is easier to read
all_emails['size_mb'] = all_emails['size'] / 1024 ** 2
_ = plt.hist(all_emails['size_mb'], bins=[0, 0.05, 0.1, 0.2, 0.3, 0.5, 0.8, 1])
print(f'The median size is only {(all_emails["size"].median() / 1024):.2f} kb')
Explanation: This is an obvious outlier - most emails are much smaller:
End of explanation
all_emails['sender_norm'] = (all_emails['sender']
.str.extract('<?(\S+@\S+.\w+)>?', expand=False)
.str.lower()
.str.replace('"', '')
.str.replace('<', '')
.str.replace('[', ''))
Explanation: Now let's see who our most frequent senders are. First we want to clean up the addresses a bit and strip out only the actual email address:
End of explanation
top_senders = (all_emails.groupby('sender_norm')
.agg({'sender_norm': ['count'], 'size_mb' : ['sum']})
.sort_values(by=[('sender_norm', 'count')], ascending=False))
# Check the 10 senders that send most emails
top_senders.head(10)
Explanation: To see who the top senders are we can group by this new column and calculate the number of emails received from this person (count) and the total size of all emails sent by this person (sum of size_mb)
End of explanation
top_senders.sort_values(by=[('size_mb', 'sum')], ascending=False).head(10)
Explanation: I've anonymised most of the senders for obvious reasons. I'm glad to see there are some friends and family members in the top 10, and it's not only newsletters. My number 1 spammer: [email protected] is an automated mail I get when the train I used to take gets delayed, see this previous blog post. As you can see, this train is delayed a lot... It's also good to know that the newsletters are generally much smaller than the emails from friends. If we sort by size we see mostly natural persons in the top 10, me sending myself emails with large attachments being number 1.
End of explanation
all_emails = all_emails.set_index('date')
Explanation: Emails over time
Let's calculate the number of emails received per week. First we need to change the index of the DataFrame to the date column:
End of explanation
weekly_counts = all_emails.resample('W').count() # Get a count per week
# filter data from before gmail existed
weekly_counts = weekly_counts[weekly_counts.index > np.datetime64('2004-04-01')]
# Calculate the moving average
moving_av = weekly_counts.rolling(10, center=True, win_type='gaussian').mean(std=3)['id']
Explanation: Now we need to resample the DataFrame in weekly periods, and count the number of emails per week. To get a nice and smooth line we will use a rolling average of these counts. To calculate the means we use a gaussian kernel (the function that is used to take the average of the neighboring points).
End of explanation
fig, ax = plt.subplots(figsize=(20,8))
ax.set(xlabel='Date', ylabel='Weekly Count',
title='Emails received per Week')
_ = moving_av.plot(ax=ax)
Explanation: Now plot this moving average. I'm using the Object Oriented interface of Matplotlib, which gives you much more flexibility and ease of use in the long run.
End of explanation
# Filter only emails from the 15 frequent senders:
top_sender_over_time = all_emails[all_emails['sender_norm'].isin(top_senders.head(15).index)]
# Group by sender and month and count
top_sender_over_time = (top_sender_over_time
.groupby(['sender_norm', pd.Grouper(level='date', freq='M')])
.agg({'sender_norm': ['count']}))
# "Unstack" the sender part of the index, so each sender gets his own column
top_sender_over_time = top_sender_over_time.unstack(level='sender_norm')
# Resample to make sure all periods have a value, even when no emails were recieved in that period
top_sender_over_time = top_sender_over_time.resample('M')
# Calculate the moving average the same way we did before
top_sender_over_time = (top_sender_over_time.sum()
.rolling(10, center=True, win_type='gaussian')
.mean(std=3)
)
Explanation: Very cool! For the reader this might be "just" a graph, which is why I recommend cloning this notebook and running it on your own data. For me, I see a clear period when I was in university, a period when I was not in university and using another email address, a period when I was basically using Gmail as a substitute for what is now WhatsApp, and the rise of newsletter spam.
End of explanation
top_sender_over_time = top_sender_over_time['sender_norm']['count']
Explanation: Our columns are now a MultiIndex with three levels, the first two having just a single value each ('sender_norm' and 'count'). We can remove these to get a cleaner looking plot:
End of explanation
fig, ax = plt.subplots(figsize=(20,8))
ax.set(xlabel='Date', ylabel='Monthly Count',
       title='Emails received per Month from top senders')
_ = top_sender_over_time.plot(ax=ax)
Explanation: Let's plot it!
End of explanation
top_sender_over_time_t = top_sender_over_time.transpose()
top_sender_over_time_t.columns = top_sender_over_time_t.columns.strftime('%Y-%m')
Explanation: To be honest, it looks like someone threw a plate of spaghetti on my screen... Let's put it in something a little bit more readable, such as a heatmap. We can do this with the excellent Seaborn heatmap function. We can use this on our DataFrame directly, or transpose our DataFrame to get one where the senders are on the Y-axis and dates on the X-axis:
End of explanation
def plot_heatmap(df_to_plot, xlabel, ylabel, title):
    Plots heatmap based on df_to_plot with some extra formatting
fig, ax = plt.subplots(figsize=(25,10))
ax = sns.heatmap(df_to_plot, ax=ax, xticklabels=True, cmap="RdBu_r")
    # I only want to see 1/5th of the original x axis labels for better readability
xticks = ax.get_xticks()
xtick_labels = ax.get_xticklabels()
x_labels = [label for i, label in enumerate(xtick_labels) if i % 5 == 0]
_ = ax.set_xticks([x for i, x in enumerate(xticks) if i % 5 == 0])
_ = ax.set_xticklabels(x_labels)
# The following formats the labels on the x-axis to be more readable
_ = fig.autofmt_xdate()
# Set axis labels and title
_ = plt.ylabel(xlabel)
_ = plt.xlabel(ylabel)
_ = ax.set_title(title)
plot_heatmap(top_sender_over_time_t, 'Sender', 'Date', 'Emails received per Month')
Explanation: Now create the heatmap:
End of explanation
all_emails = all_emails[all_emails.sender!='[email protected]']
Explanation: This looks a bit nicer, it's easy to see when the hotspots were for certain senders and how they correlate. The downside of this plot is that the absolute numbers are harder to read.
Newsletter spam
Now back to the original subject - how much of the email in my inbox comes from mailing lists, or "newsletter spam"? Since 2010 Google started adding labels to each email which do a pretty good job at classifying the different types of email. The different categories an email can have are "forums", "personal", "social", "promotions", and "updates". Out of these 5 options, the only category I would not consider spam is the personal emails.
Building a DataFrame
First I delete the ns-vertragings mail. It's not fair to call this newsletter spam, as it's basically something I send myself using a cron-job. It's also being mislabeled by Gmail a lot.
End of explanation
labels_over_time = pd.DataFrame(all_emails.label_ids.apply(pd.Series, 1).stack())
labels_over_time.columns = ['label']
labels_over_time = labels_over_time[labels_over_time.index.get_level_values('date') > np.datetime64('2004-04-01')]
Explanation: Each email can have multiple labels. We need to "explode" these into a new DataFrame with one row for each label of each email.
End of explanation
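As an aside, recent pandas versions (0.25 and later) ship a built-in explode method that does the same reshaping in one call. This is a minimal sketch, not code from the original post, and it assumes label_ids holds plain Python lists:
labels_alt = all_emails[['label_ids']].explode('label_ids')
labels_alt.columns = ['label']
labels_alt = labels_alt[labels_alt.index > np.datetime64('2004-04-01')]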
labels_over_time.head()
Explanation: As you can see, the labels are now shown in a "long" format, with multiple labels per email:
End of explanation
labels_over_time_cnt = (labels_over_time
.groupby(['label', pd.Grouper(level='date', freq='M')])
.agg({'label': ['count']})
.label
)
labels_over_time_cnt = (labels_over_time_cnt
.unstack(level='label')
.resample('M').sum()
.rolling(10, center=True, win_type='gaussian')
.mean(std=3)
)
labels_over_time_cnt = labels_over_time_cnt['count']
labels_over_time_cnt_t = labels_over_time_cnt.transpose()
labels_over_time_cnt_t.columns = labels_over_time_cnt_t.columns.strftime('%Y-%m')
# Keep only the category labels
labels_over_time_cnt_t = labels_over_time_cnt_t[labels_over_time_cnt_t.index.str.startswith('CATEGORY')]
plot_heatmap(labels_over_time_cnt_t, 'Label', 'Date', 'Emails received per Month per category')
fig, ax = plt.subplots(figsize=(20,8))
ax.set(xlabel='Date', ylabel='Monthly Count',
       title='Emails received per Month per category')
_ = labels_over_time_cnt.filter(like='CATEGORY', axis=1).plot(ax=ax)
Explanation: On this data we'll do the same as we did before on all emails: group by month and get the counts for each label, resample and calculate the rolling average. After that we transpose to get the months as columns and the categories as rows:
End of explanation
service.users().messages().batchDelete(userId=user_id, body={
"ids": [ # The IDs of the messages to delete.
"A String",
],
}).execute()
Explanation: I think it's clear that the number of updates, promotions and social media emails is growing fast. The actual numbers are hard to gauge as I've been consistently inconsistent in clearing my inbox of these types of emails.
Deleting events
I'll leave the actual deletion as an exercise for the reader. You can batch delete messages with:
End of explanation |
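To make the exercise a little more concrete, here is a hedged sketch of how the matching message IDs could be collected and deleted in chunks. The query string, the helper name and the chunk size of 1000 are my assumptions based on the Gmail API documentation, not something tested in this notebook:
def delete_category(service, user_id='me', query='category:promotions'):
    # Collect all matching message IDs, following pagination.
    ids, page_token = [], None
    while True:
        resp = (service.users().messages()
                .list(userId=user_id, q=query, pageToken=page_token)
                .execute())
        ids += [m['id'] for m in resp.get('messages', [])]
        page_token = resp.get('nextPageToken')
        if not page_token:
            break
    # batchDelete accepts a limited number of IDs per call, so delete in chunks.
    for i in range(0, len(ids), 1000):
        (service.users().messages()
         .batchDelete(userId=user_id, body={'ids': ids[i:i + 1000]})
         .execute())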
13,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ISWC 2017 RSP Demo
Ensure you have the latest version of rsplib
Step1: A simple Experiment Using CITYBENCH Streams
Step2: Deploy The Experiment, i.e. register streams, queries and observers to the output streams
Go And Check CSPARQL performance at http
Step3: Execute, i.e. unregister streams, queries and observers to the output streams after a fixed amount of time
Step4: Experiment Recap | Python Code:
!pip install rsplib --upgrade
from rsplib.processing import execute, deploy
from rsplib.processing.consumer.query import *
from rsplib.experiments import Experiment, ExperimentExecution, Report
Explanation: ISWC 2017 RSP Demo
Ensure you have the latest version of rsplib
End of explanation
#create the experiment
e = Experiment()
# number of times the experiment is repeated
repetition=1
#set the experiment duration
e.set_duration(30,'s')
#Add an engine, in this case C-SPARQL engine, and specify which RSP dialect it speaks
e.add_engine('csparql', 8182, Dialects.CSPARQL)
#Add a query, using the programmatic API. No worries about different syntax
#rsplib takes care of using the right one, just specify the dialect
qname = "Demo"
# Name, Type, Dialect
q = e.add_query(qname, "stream", Dialects.CSPARQL)
q.set_select_clause("{?s ?p ?o}")
q.set_where_clause("?s ?p ?o")
e.add_windowed_stream(qname,"AarhusTrafficData182955", "http://aarhustrafficdata182955:4000/sgraph", '3s','1s' )
e.add_windowed_stream(qname,"AarhusTrafficData158505", "http://aarhustrafficdata158505:4001/sgraph", '3s','1s' )
Explanation: A simple Experiment Using CITYBENCH Streams
End of explanation
ex = deploy(e)
Explanation: Deploy The Experiment, i.e. register streams, queries and observers to the output streams
Go and check C-SPARQL performance at http://localhost:3000/dashboard/db/csparql (login: admin, password: admin)
End of explanation
execute(ex)
Explanation: Execute, i.e. unregister streams, queries and observers to the output streams after a fixed amount of time
End of explanation
ex.__dict__()
Explanation: Experiment Recap
End of explanation |
13,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-2', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
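Purely as an illustration of the call shown above, with made-up placeholder values (replace with the real author details before publishing):
# DOC.set_author("Jane Doe", "jane.doe@example.org")   # hypothetical example values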
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
13,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using a Pre-trained PyTorch Model for Inference
In this demo, we will use a pre-trained model to perform inference on a single image.
There are 3 components to this demo
Step1: Model
Step2: Input
Step3: Output
Step4: Human | Python Code:
import torch
import torchvision
import torchvision.transforms as transforms
import timm
from einops import rearrange
from PIL import Image
Explanation: Using a Pre-trained PyTorch Model for Inference
In this demo, we will use a pre-trained model to perform inference on a single image.
There are 3 components to this demo:
1. Input
2. Model
3. Output
We will cover these components in detail below.
Let us first import the required packages.
End of explanation
use_timm = False
# Download and load the pretrained ResNet-18.
if use_timm:
resnet = timm.create_model('resnet18', pretrained=True)
else:
resnet = torchvision.models.resnet18(pretrained=True)
resnet.eval()
Explanation: Model: Loading a pre-trained ResNet18 model
We use a pre-trained ResNet18 model for inference. The model is available from torchvision or from timm.
When we use a model for inference, we need to specify the eval mode. This is because the model in train mode by default, and we need to disable all the dropout layers.
End of explanation
filename = input()
# Load a PIL Image given a file name from the current directory.
img = Image.open(filename)
# Display the loaded image on notebook.
display(img)
# Resize the image to 256x256.
# Then crop the center square of the image.
# Next, convert the image to a PyTorch Tensor.
# Lastly, normalize the image so that it has mean and standard deviation as shown below.
# Reference for image transforms: https://github.com/pytorch/examples/blob/42e5b996718797e45c46a25c55b031e6768f8440/imagenet/main.py#L89-L101
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])
# PIL image undergoes transforms.
img = transform(img)
# A simplified version is to simply transform the image to a tensor
#img = transforms.ToTensor()(img)
# Check img type and shape
print("Type:", img.dtype)
print("Shape:", img.shape)
Explanation: Input: Loading an input image
We can use matplotlib image to load an image into a numpy array.
However, PyTorch transforms expects a PIL image. While we can convert numpy array to PIL, we can load an image directly into a PIL image.
End of explanation
# We need the tensor to have a batch dimension of 1.
img = rearrange(img, 'c h w -> 1 c h w')
print("New shape:", img.shape)
with torch.no_grad():
pred = resnet(img)
print("Prediction shape:", pred.shape)
pred = torch.argmax(pred, dim=1)
print("Predicted index", pred)
Explanation: Output: Making a prediction
We will now use img tensor as input to the pre-trained resnet18 model.
Before running the model for prediction, there are 2 things that we should do:
Include a batch dimension. In this case, we are using a single image, so we need to add a batch size of 1. We use rearrange for this.
Execute inference within torch.no_grad() context manager. This is because we do not want to track the gradients.
The expected output is a torch.Tensor of shape (1, 1000). resnet18 was pre-trained on ImageNet1k. We can use torch.argmax to get the index of the maximum value.
End of explanation
import urllib
filename = "imagenet1000_labels.txt"
url = "https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt"
# Download the file if it does not exist
if not os.path.isfile(filename):
urllib.request.urlretrieve(url, filename)
with open(filename) as f:
idx2label = eval(f.read())
print("Predicted label:", idx2label[pred.cpu().numpy()[0]])
Explanation: Human: Convert class index to label
To make sense of the predicted index, we need to convert it to a label. We can use https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a to get the mapping from index to label.
End of explanation |
13,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to OpenFermion
A codealong of openfermion_tutorial.ipynb
Wayne H Nixalo – 2018/6/27
<div class="alert alert-info">
Note that all the examples below must be run sequentially within a section.
</div>
1. Initializing the FermionOperator data structure
Fermionic systems are often treated in second quantization where arbitrary operators can be expressed using the fermionic creation and annihilation operators $a_k^\dagger$ and $a_k$. The fermionic ladder operators play a similar role to their qubit ladder operator counterparts, $σ_k^\dagger$ and $σ_k^-$ but are distinguished by the canonical fermionic anticommutation relations, ${a_i^\dagger, a_j^\dagger} = {a_i, a_j} = 0$ and ${a_i, a_j^\dagger} = δ_{ij}$. Any weighted sums of products of these oeprators are represented with the FermionOperator data structure in OpenFermion. The following are examples of valid FermionOperators
Step1: Ther preffered way to specify the coefficient in openfermion is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All inplace operants (such as +=) modify classes wheras binary operands (such as +) create copies. Important caveats are that the empty tuple FermionOperator(()) and the empty string FermionOperator('') initializes identically. The empty initializer FermonOperator() initializes the zero operator.
Step2: <div class="alert alert-info">
Note that FermionOperator has only 1 attribute
Step3: 2. Manipulating the FermionOperator data structure
So far we have explained how to initialize a single FermionOperator such as $-1.7a_3^\dagger a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \,a_4^\dagger a_3^\dagger a_9 a_1 - 1.7 \,a_3^\dagger a_1$. To do this, just add together two FermionOperators! We demonstrate below.
Step4: The print function prints each term in the operator on a different line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves a copy of term_1 and term_2. The second block of code uses the in-place method +=, which is more efficient. This is especially important when trying to construct a very large FermionOperator. FermionOperators also support a wide range of builtins including str(), repr(), ==, !=, *, *=, /, /=, +, +=, -, -=, and **. Note that since FermionOperators involve floats, == and != check for (in)equality up to numerical precision. We demonstrate some of these methods below.
Step5: Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here.
Step6: 3. The QubitOperator data structure
The QubitOperator data structure is another essential part of OpenFermion. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance, $X_0Z_3Y_4$ is a QubitOperator. The internal representation of this as a terms tuple would be $((0,"X"),(3,"Z"),(4,"Y"))$. Note that one important difference between QubitOperator and FermionOperator is that the terms in QubitOperator are always sorted in order of tensor factor. In some cases, this enables faster manipulation. We initialize some QubitOperators below.
Step7: 4. Jordan-Wigner and Bravyi-Kitaev
openfermion provides functions for mapping FermionOperators to QubitOperators.
Step8: We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero entries.
Step9: 5. Sparse matrices and the Hubbard model
Often, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There's code in both openfermion.transforms and openfermion.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOperator, or other more advanced classes such as InteractionOperator to a scipy.sparse.csc matrix. There're numerous functions in openfermion.utils which one can call on the sparse operators such as "get_gap", "get_hartree_fock_state", "get_ground_state", etc. We show this off by computing the ground state energy of the Hubbard model. To do that, we use code from the openfermion.hamiltonians module which constructs lattice models of fermions such as Hubbard models.
Step10: 6. Hamiltonians in the plane wave basis
A user can write plugins to openfermion which allow for the use of, eg
Step11: 7. Basics of MolecularData class
Data from electronic structure calculations can be saved in an OpenFermion data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure calculations are either expensive to compute or difficult to converge (eg
Step12: If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in OpenFermion's MolecularData objects. Currently plugins are available for Psi4 (OpenFermion-Psi4) and PySCF (OpenFermion-PySCF), and there may be more in the future. For the purposes of this example, we'll load data that ships with OpenFermion to make a plot of the energy surface of hydrogen. Note that helper functions to initialize some interesting chemical benchmarks are found in openfermion.utils.
Step13: The geometry data needed to generate MolecularData can also be retrieved from the PubChem online database by inputting the molecule's name.
Step14: 8. InteractionOperator and InteractionRDM for efficient numerical representations
Fermion Hamiltonians can be expressed as $H = h_0 + \sum_{pq}h_{pq}a^\dagger_p a_q + \frac{1}{2}\sum_{pqrs}h_{pqrs}a^\dagger_p a^\dagger_q a_r a_s$, where $h_0$ is a constant shift due to nuclear repulsion and $h_{pq}$ and $h_{pqrs}$ are the famous molecular integrals. Since fermions interact pairwise, their energy is thus a unique function of the one-particle and two-particle reduced density matrices which are expressed in second quantization as $ρ_{pq} = \big\langle p\lvert a^\dagger_p a_q \rvert q\big\rangle$ and $ρ_{pqrs} = \big\langle pq \lvert a_p^\dagger a_q^\dagger a_r a_s \rvert rs \big\rangle$, respectively.
Because the RDMs and molecular Hamiltonians are both compactly represented and manipulated as 2- and 4- index tensors, we can represent them in a particularly efficient form using similar data structures. The InteractionOperator data structure can be initialized for a Hamiltonian by passing the constant $h_0$ (or 0), as well as numpy arrays representing $h_{pq}$ (or $ρ_{pq}$) and $h_{pqrs}$ (or $ρ_{pqrs}$). Importantly, InteractionOperators can also be obtained by calling MolecularData.get_molecular_hamiltonian() or by calling the function get_interaction_operator() found in openfermion.transforms) on a FermionOperator. The InteractionRDM data structure is similar but represents RDMs. For instance, one can get a molecular RDM by calling MolecularData.get_molecular_rdm(). When generating Hamiltonians from the MolecularData class, one can choose to restrict the system to an active space.
These classes inherit from the same base class, PolynomialTensor. This data structure overloads the slice operator [] so that one can get or set the key attributes of the InteractionOperator
Step15: 9. Quadratic Hamiltonians and Slater determinants
The general electronic structure Hamiltonian $H = h_0 + \sum_{pq} h_{pq} a^\dagger_p a_q + \frac{1}{2}\sum_{pqrs} h_{pqrs} a^\dagger_p a^\dagger_q a_r a_s$ contains terms that act on up to 4 sites, or is quartic in the fermionic creation and annihilation operators. However, in many situations we may fruitfully approximate these Hamiltonians by replacing these quartic terms with terms that act on at most 2 fermionic sites, or quadratic terms, as in mean-field approximation theory.
These Hamiltonians have a number of special properties one can exploit for efficient simulation and manipulation of the Hamiltonian, thus warranting a special data structure. We refer to Hamiltonians which only contain terms that are quadratic in the fermionic creation and annihilation operators as quadratic Hamiltonians, and include the general case of non-particle-conserving terms as in a general Bogoliubov transformation. Eigenstates of quadratic Hamiltonians can be prepared efficiently on both a quantum and classical computer, making them amenable to initial guesses for many more challenging problems.
A general quadratic Hamiltonian takes the form
<img src="https
Step16: Any quadratic Hamiltonian may be rewritten in the form
<img src="https
Step17: Eigenstates of quadratic hamiltonians are known as fermionic Gaussian states, and they can be prepared efficiently on a quantum computer. One can use OpenFermion to obtain circuits for preparing these states. The following code obtains the description of a circuit which prepares the ground state (operations that can be performed in parallel are grouped together), along with a description of the starting state to which the circuit should be applied | Python Code:
from openfermion.ops import FermionOperator
my_term = FermionOperator(((3,1), (1,0)))
print(my_term)
my_term = FermionOperator('3^ 1')
print(my_term)
Explanation: Introduction to OpenFermion
A codealong of openfermion_tutorial.ipynb
Wayne H Nixalo – 2018/6/27
<div class="alert alert-info">
Note that all the examples below must be run sequentially within a section.
</div>
1. Initializing the FermionOperator data structure
Fermionic systems are often treated in second quantization where arbitrary operators can be expressed using the fermionic creation and annihilation operators $a_k^\dagger$ and $a_k$. The fermionic ladder operators play a similar role to their qubit ladder operator counterparts, $σ_k^\dagger$ and $σ_k^-$, but are distinguished by the canonical fermionic anticommutation relations, ${a_i^\dagger, a_j^\dagger} = {a_i, a_j} = 0$ and ${a_i, a_j^\dagger} = δ_{ij}$. Any weighted sums of products of these operators are represented with the FermionOperator data structure in OpenFermion. The following are examples of valid FermionOperators:
$$\begin{align} & a_1 \\ & 1.7 \, a^\dagger_3 \\ & -1.7 \, a^\dagger_3 a_1 \\ & (1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 \\ & (1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1 \end{align}$$
The FermionOperator class is contained in ops/_fermion_operators.py. In order to support fast addition of FermionOperator instances, the class is implemented as a hash table (python dictionary). The keys of the dictionary encode the strings of ladder operators and the values of the dictionary store the coefficients. The strings of ladder operators are encoded as a tuple of 2-tuples which we refer to as the "terms tuple". Each ladder operator is represented by a 2-tuple. The first element of the 2-tuple is an int indicating the tensor factor on which the ladder operator acts. The second element of the 2-tuple is a Boolean: 1 represents raising and 0 represents lowering. For instance $a_8^\dagger$ is represented in a 2-tuple as $(8,1)$. Note that indices start at 0 and the identity operator is an empty list. Below we give some examples of operators and their terms tuple:
$$\begin{align} I & \mapsto () \\ a_1 & \mapsto ((1, 0),) \\ a^\dagger_3 & \mapsto ((3, 1),) \\ a^\dagger_3 a_1 & \mapsto ((3, 1), (1, 0)) \\ a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto ((4, 1), (3, 1), (9, 0), (1, 0)) \end{align}$$
Let's initialize our first term! We do it two different ways below.
End of explanation
good_way_to_initialize = FermionOperator('3^ 1', -1.7)
print(good_way_to_initialize)
bad_way_to_initialize = -1.7 * FermionOperator('3^ 1')
print(bad_way_to_initialize)
identity = FermionOperator('')
print(identity)
zero_operator = FermionOperator()
print(zero_operator)
Explanation: The preferred way to specify the coefficient in openfermion is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All in-place operands (such as +=) modify classes whereas binary operands (such as +) create copies. Important caveats are that the empty tuple FermionOperator(()) and the empty string FermionOperator('') initialize identically. The empty initializer FermionOperator() initializes the zero operator.
End of explanation
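# Quick check of the caveat above (an added line, not from the original tutorial):
# the empty-string and empty-tuple initializers agree.
print(FermionOperator('') == FermionOperator(()))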
my_operator = FermionOperator('4^ 1^ 3 9', 1. + 2.j)
print(my_operator)
print(my_operator.terms)
Explanation: <div class="alert alert-info">
Note that FermionOperator has only 1 attribute: .terms. This attribute is the dictionary which stores the term tuples.
</div>
End of explanation
from openfermion.ops import FermionOperator
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 + term_2
print(my_operator)
my_operator = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator += term_2
print(); print(my_operator)
Explanation: 2. Manipulating the FermionOperator data structure
So far we have explained how to initialize a single FermionOperator such as $-1.7a_3^\dagger a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \,a_4^\dagger a_3^\dagger a_9 a_1 - 1.7 \,a_3^\dagger a_1$. To do this, just add together two FermionOperators! We demonstrate below.
End of explanation
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 - 33. * term_2
print(my_operator)
my_operator *= 3.17 * (term_2 + term_1) ** 2
print(); print(my_operator)
print(); print(term_2 ** 3)
print(); print(term_1 == 2. * term_1 - term_1)
print(term_1 == my_operator)
Explanation: The print function prints each term in the operator on a different line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves a copy of term_1 and term_2. The second block of code uses the in-place method +=, which is more efficient. This is especially important when trying to construct a very large FermionOperator. FermionOperators also support a wide range of builtins including str(), repr(), ==, !=, *, *=, /, /=, +, +=, -, -=, and **. Note that since FermionOperators involve floats, == and != check for (in)equality up to numerical precision. We demonstrate some of these methods below.
End of explanation
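# A couple more of the builtins mentioned above (added as an illustrative sketch):
# scalar division and subtraction behave as expected.
print(term_2 / 2.)
print(term_1 - term_1 == FermionOperator())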
from openfermion.utils import commutator, count_qubits, hermitian_conjugated, normal_ordered
# Get the Hermitian conjugate of a FermionOperator,
# count its qubits, and check whether it is normal-ordered
term_1 = FermionOperator('4^ 3 3^', 1. + 2.j)
print(hermitian_conjugated(term_1))
print(term_1.is_normal_ordered())
print(count_qubits(term_1))
# Normal order the term
term_2 = normal_ordered(term_1)
print(); print(term_2); print(term_2.is_normal_ordered())
# Compute a commutator of the terms
print(); print(commutator(term_1, term_2))
Explanation: Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here.
End of explanation
from openfermion.ops import QubitOperator
my_first_qubit_operator = QubitOperator('X1 Y2 Z3')
print(my_first_qubit_operator)
print(my_first_qubit_operator.terms)
operator_2 = QubitOperator('X3 Z4', 3.17)
operator_2 -= 77. * my_first_qubit_operator
print(f'\n{operator_2}')
Explanation: 3. The QubitOperator data structure
The QubitOperator data structure is another essential part of OpenFermion. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance, $X_0Z_3Y_4$ is a QubitOperator. The internal representation of this as a terms tuple would be $((0,"X"),(3,"Z"),(4,"Y"))$. Note that one important difference between QubitOperator and FermionOperator is that the terms in QubitOperator are always sorted in order of tensor factor. In some cases, this enables faster manipulation. We initialize some QubitOperators below.
End of explanation
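# One more property worth noting (illustrative addition): products of Pauli operators
# acting on the same qubit are simplified automatically, e.g. X0 * Y0 = 1j Z0.
pauli_product = QubitOperator('X0') * QubitOperator('Y0')
print(pauli_product)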
from openfermion.ops import FermionOperator
from openfermion.transforms import jordan_wigner, bravyi_kitaev
from openfermion.utils import eigenspectrum, hermitian_conjugated
# Initialize an operator
fermion_operator = FermionOperator('2^ 0', 3.17)
fermion_operator += hermitian_conjugated(fermion_operator)
print(fermion_operator)
# Transform to qubits under the Jordan-Wigner transformation and print its spectrum
jw_operator = jordan_wigner(fermion_operator)
print(f'\n{jw_operator}')
jw_spectrum = eigenspectrum(jw_operator)
print(jw_spectrum)
# Transform to qubits under the Bravyi-Kitaev transformation and print its spectrum
bk_operator = bravyi_kitaev(fermion_operator)
print(f'\n{bk_operator}')
bk_spectrum = eigenspectrum(bk_operator)
print(f'{bk_spectrum}')
Explanation: 4. Jordan-Wigner and Bravyi-Kitaev
openfermion provides functions for mapping FermionOperators to QubitOperators.
End of explanation
from openfermion.transforms import reverse_jordan_wigner
# Initialize the QubitOperator
my_operator = QubitOperator('X0 Y1 Z2', 88.)
my_operator += QubitOperator('Z1 Z4', 3.17)
print(my_operator)
# Map QubitOperator to a FermionOperator
mapped_operator = reverse_jordan_wigner(my_operator)
print(f'\n{mapped_operator}')
# Map the operator back to qubits and make sure it's the same
back_to_normal = jordan_wigner(mapped_operator)
back_to_normal.compress()
print(f'\n{back_to_normal}')
Explanation: We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero entries.
End of explanation
from openfermion.hamiltonians import fermi_hubbard
from openfermion.transforms import get_sparse_operator, jordan_wigner
from openfermion.utils import get_ground_state
# Set model
x_dimension = 2
y_dimension = 2
tunneling = 2.
coulomb = 1.
magnetic_field = 0.5
chemical_potential = 0.25
periodic = 1
spinless = 1
# Get fermion operator
hubbard_model = fermi_hubbard(
x_dimension, y_dimension, tunneling, coulomb, chemical_potential,
magnetic_field, periodic, spinless)
print(hubbard_model)
# Get qubit operator under Jordan-Wigner
jw_hamiltonian = jordan_wigner(hubbard_model)
jw_hamiltonian.compress()
print(f'\n{jw_hamiltonian}')
# Get the scipy.sparse.csc representation
sparse_operator = get_sparse_operator(hubbard_model)
print(f'\n{sparse_operator}\nEnergy of the model is {get_ground_state(sparse_operator)[0]} in units of T and J.')
Explanation: 5. Sparse matrices and the Hubbard model
Often, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There's code in both openfermion.transforms and openfermion.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOperator, or other more advanced classes such as InteractionOperator to a scipy.sparse.csc matrix. There're numerous functions in openfermion.utils which one can call on the sparse operators such as "get_gap", "get_hartree_fock_state", "get_ground_state", etc. We show this off by computing the ground state energy of the Hubbard model. To do that, we use code from the openfermion.hamiltonians module which constructs lattice models of fermions such as Hubbard models.
End of explanation
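# Added follow-up: the explanation above also mentions get_gap, which (assuming it is
# available in openfermion.utils as stated) returns the gap between the two lowest eigenvalues.
from openfermion.utils import get_gap
print('Energy gap of the model is', get_gap(sparse_operator))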
from openfermion.hamiltonians import jellium_model
from openfermion.utils import eigenspectrum, fourier_transform, Grid
from openfermion.transforms import jordan_wigner
# Let's look at a very small model of jellium in 1D
grid = Grid(dimensions=1, length=3, scale=1.0)
spinless = True
# Get the momentum Hamiltonian
momentum_hamiltonian = jellium_model(grid, spinless)
momentum_qubit_operator = jordan_wigner(momentum_hamiltonian)
momentum_qubit_operator.compress()
print(momentum_qubit_operator)
# Fourier transform the Hamiltonian to the position basis
position_hamiltonian = fourier_transform(momentum_hamiltonian, grid, spinless)
position_qubit_operator = jordan_wigner(position_hamiltonian)
position_qubit_operator.compress()
print(f'\n{position_qubit_operator}')
# Check the spectra to make sure these representations are iso-spectral
spectral_difference = eigenspectrum(momentum_qubit_operator) - eigenspectrum(position_qubit_operator)
print(f'\n{spectral_difference}')
Explanation: 6. Hamiltonians in the plane wave basis
A user can write plugins to openfermion which allow for the use of, eg: 3rd-party electronic structure packages to compute molecular orbitals, Hamiltonians, energies, reduced density matrices, coupled cluster amplitudes, etc, using Gaussian basis sets. We may provide scripts which interface between such packages and openfermion in the future but do not discuss them in this tutorial.
When using simpler basis sets such as plane waves, these packages are not needed. OpenFermion comes with code which computes Hamiltonians in the plane wave basis. Note that when using plane waves, one is working with the periodized Coulomb operator, best suited for condensed phase calculations such as studying the electronic structure of a solid. To obtain these Hamiltonians one must choose to study the system without a spin degree of freedom (spinless), one must specify the dimension in which the calculation is performed (n_dimensions, usually 3), one must specify how many plane waves are in each dimension (grid_length) and one must specify the length scale of the plane wave harmonics in each dimension (length_scale) and also the locations and charges of the nuclei. One can generate these models with plane_wave_hamiltonian() found in openfermion.hamiltonians. For simplicity, below we compute the Hamiltonian in the case of zero external charge (corresponding to the uniform electron gas, aka Jellium). We also demonstrate that one can transform the plane wave Hamiltonian using a Fourier transform without affecting the spectrum of the operator.
End of explanation
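# Equivalent numerical check (added sketch reusing the objects defined above):
# the two spectra should agree to numerical precision.
import numpy
print(numpy.allclose(eigenspectrum(momentum_qubit_operator),
                     eigenspectrum(position_qubit_operator)))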
from openfermion.hamiltonians import MolecularData
# Set parameters to make a simple molecule
diatomic_bond_length = .7414
geometry = [('H', (0.,0.,0.)), ('H', (0.,0.,diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
charge = 0
description = str(diatomic_bond_length)
# Make molecule and print out a few interesting facts about it
molecule = MolecularData(geometry, basis, multiplicity,
charge, description)
print(f'Molecule has automatically generated name {molecule.name}')
print(f'Information about this molecule would be saved at:\n{molecule.filename}\n')
print(f'This molecule has {molecule.n_atoms} atoms and {molecule.n_electrons} electrons.')
for atom,atomic_number in zip(molecule.atoms, molecule.protons):
print(f'Contains {atom} atom, which has {atomic_number} protons.')
Explanation: 7. Basics of MolecularData class
Data from electronic structure calculations can be saved in an OpenFermion data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure calculations are either expensive to compute or difficult to converge (eg: one needs to mess around with different types of SCF routines to make things converge). Accordingly, we anticipate that users will want some way to automatically database the results of their electronic structure calculations so that important data (such as the SCF integrals) can be looked up on-the-fly if the user has computed them in the past. OpenFermion supports a data provenance strategy which saves key results of the electronic structure calculation (including pointers to files containing large amounts of data, such as the molecular integrals) in an HDF5 container.
The MolecularData class stores information about molecules. One initializes a MolecularData object by specifying parameters of a molecule such as its geometry, basis, multiplicity, charge, and an optional string describing it. One can also initialize MolecularData simply by providing a string giving a filename where a previous MolecularData object was saved in an HDF5 container. One can save a MolecularData instance by calling the class' .save() method. This automatically saves the instance in a data folder specified during OpenFermion installation. The name of the file is generated automatically from the instance attributes and optionally provided description. Alternatively, a filename can also be provided as an optional input if one wishes to manually name the file.
When electronic structure calculations are run, the data files for the molecule can be automatically updated. If one wishes to later use the data they either initialize MolecularData with the instance filename or initialize the instance and then later call the .load() method.
Basis functions are provided to initialization using a string such as "6-31g". Geometries can be specified using a simple txt input file (see the geometry_from_file function in molecular_data.py) or can be passed using a simple python list format demonstrated below. Atoms are specified using a string for their atomic symbol. Distances should be provided in ångstrom. Below we initialize a simple instance of MolecularData without performing any electronic structure calculations.
End of explanation
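# Optional addition: persist this instance to its auto-generated HDF5 file, as described
# above, so it can later be reloaded with molecule.load().
molecule.save()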
%matplotlib inline
# Set molecule parameters
basis = 'sto-3g'
multiplicity = 1
bond_length_interval = 0.1
n_points = 25
# Generate molecule at different bond lengths
hf_energies = []
fci_energies = []
bond_lengths = []
for point in range(3, n_points + 1):
bond_length = bond_length_interval * point
bond_lengths += [bond_length]
description = str(round(bond_length,2))
print(f'\n{description}')
geometry = [('H', (0.,0.,0.)), ('H', (0.,0., bond_length))]
molecule = MolecularData(
geometry, basis, multiplicity, description=description)
# Load data
molecule.load()
# Print out some calculation results
print(f'At bond length of {bond_length} ångstrom, molecular hydrogen has:')
print(f'Hartree-Fock energy of {molecule.hf_energy} Hartree.')
print(f'MP2 energy of {molecule.mp2_energy} Hartree.')
print(f'FCI energy of {molecule.fci_energy} Hartree.')
print(f'Nuclear repulsion energy between protons is {molecule.nuclear_repulsion} Hartree.')
for orbital in range(molecule.n_orbitals):
print(f'Spatial orbital {orbital} has energy of {molecule.orbital_energies[orbital]} Hartree.')
hf_energies += [molecule.hf_energy]
fci_energies += [molecule.fci_energy]
# Plot
import matplotlib.pyplot as plt
plt.figure(0)
plt.plot(bond_lengths, fci_energies, 'x-'); plt.plot(bond_lengths, hf_energies, 'o-');
plt.ylabel('Energy in Hartree'); plt.xlabel('Bond length in ångstrom');
Explanation: If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in OpenFermion's MolecularData objects. Currently plugins are available for Psi4 (OpenFermion-Psi4) and PySCF (OpenFermion-PySCF), and there may be more in the future. For the purposes of this example, we'll load data that ships with OpenFermion to make a plot of the energy surface of hydrogen. Note that helper functions to initialize some interesting chemical benchmarks are found in openfermion.utils.
End of explanation
from openfermion.utils import geometry_from_pubchem
methane_geometry = geometry_from_pubchem('methane')
print(methane_geometry)
Explanation: The geometry data needed to generate MolecularData can also be retrieved from the PubChem online database by inputting the molecule's name.
End of explanation
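# Added sketch: the retrieved geometry can be fed straight into MolecularData. The basis
# and multiplicity below are illustrative choices, and the lookup may return None if
# PubChem is unreachable.
if methane_geometry is not None:
    methane_molecule = MolecularData(methane_geometry, basis='sto-3g', multiplicity=1)
    print(methane_molecule.name)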
from openfermion.hamiltonians import MolecularData
from openfermion.transforms import get_fermion_operator, get_sparse_operator, jordan_wigner
from openfermion.utils import get_ground_state
import numpy as np
import scipy
import scipy.linalg
# Load saved file for LiH
diatomic_bond_length = 1.45
geometry = [('Li', (0.,0.,0.,)), ('H', (0.,0.,diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
# Set Hamiltonian parameters
active_space_start = 1
active_space_stop = 3
# Generate and populate instance of MolecularData
molecule = MolecularData(geometry, basis, multiplicity, description='1.45')
molecule.load()
# Get the Hamiltonian in an active space
molecular_hamiltonian = molecule.get_molecular_hamiltonian(
occupied_indices=range(active_space_start),
active_indices=range(active_space_start, active_space_stop))
# Map operator to fermions and qubits
fermion_hamiltonian = get_fermion_operator(molecular_hamiltonian)
qubit_hamiltonian = jordan_wigner(fermion_hamiltonian)
qubit_hamiltonian.compress()
print(f'The Jordan-Wigner Hamiltonian in canonical basis follows:\n{qubit_hamiltonian}')
# Get sparse operator and ground state energy
sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)
energy,state = get_ground_state(sparse_hamiltonian)
print(f'Ground state energy before rotation is {energy} Hartree.\n')
# Randomly rotate
n_orbitals = molecular_hamiltonian.n_qubits // 2
n_variables = int(n_orbitals * (n_orbitals - 1) / 2)
np.random.seed(1)
random_angles = np.pi * (1. - 2. * np.random.rand(n_variables))
k = np.zeros((n_orbitals, n_orbitals))
index = 0
for p in range(n_orbitals):
for q in range(p + 1, n_orbitals):
k[p, q] = random_angles[index]
k[q, p] = -np.conjugate(random_angles[index])
index += 1
# Build the unitary rotation matrix
difference_matrix = k + k.transpose()
rotation_matrix = scipy.linalg.expm(k)
# Apply the unitary
molecular_hamiltonian.rotate_basis(rotation_matrix)
# Get the qubit Hamiltonian in the rotated basis
qubit_hamiltonian = jordan_wigner(molecular_hamiltonian)
qubit_hamiltonian.compress()
print(f'The Jordan-Wigner Hamiltonian in the rotated basis follows:\n{qubit_hamiltonian}')
# Get sparse Hamiltonian and energy in rotated basis
sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)
energy,state = get_ground_state(sparse_hamiltonian)
print(f'Ground state energy after rotation is {energy} Hartree.')
Explanation: 8. InteractionOperator and InteractionRDM for efficient numerical representations
Fermion Hamiltonians can be expressed as $H = h_0 + \sum_{pq}h_{pq}a^\dagger_p a_q + \frac{1}{2}\sum_{pqrs}h_{pqrs}a^\dagger_p a^\dagger_q a_r a_s$, where $h_0$ is a constant shift due to nuclear repulsion and $h_{pq}$ and $h_{pqrs}$ are the famous molecular integrals. Since fermions interact pairwise, their energy is thus a unique function of the one-particle and two-particle reduced density matrices which are expressed in second quantization as $ρ_{pq} = \big\langle p\lvert a^\dagger_p a_q \rvert q\big\rangle$ and $ρ_{pqrs} = \big\langle pq \lvert a_p^\dagger a_q^\dagger a_r a_s \rvert rs \big\rangle$, respectively.
Because the RDMs and molecular Hamiltonians are both compactly represented and manipulated as 2- and 4- index tensors, we can represent them in a particularly efficient form using similar data structures. The InteractionOperator data structure can be initialized for a Hamiltonian by passing the constant $h_0$ (or 0), as well as numpy arrays representing $h_{pq}$ (or $ρ_{pq}$) and $h_{pqrs}$ (or $ρ_{pqrs}$). Importantly, InteractionOperators can also be obtained by calling MolecularData.get_molecular_hamiltonian() or by calling the function get_interaction_operator() found in openfermion.transforms) on a FermionOperator. The InteractionRDM data structure is similar but represents RDMs. For instance, one can get a molecular RDM by calling MolecularData.get_molecular_rdm(). When generating Hamiltonians from the MolecularData class, one can choose to restrict the system to an active space.
These classes inherit from the same base class, PolynomialTensor. This data structure overloads the slice operator [] so that one can get or set the key attributes of the InteractionOperator: .constant, .one_body_coefficients, and .two_body_coefficients. For instance, InteractionOperator[(p,1), (q,1), (r,0), (s,0)] would return $h_{pqrs}$ and InteractionRDM would return $ρ_{pqrs}$. Importantly, the class supports fast basis transformations using the method PolynomialTensor.rotate_basis(rotation_matrix). But perhaps most importantly, one can map the InteractionOperator to any of the other data structures we've described here.
Below, we load MolecularData from a saved calculation of LiH. We then obtain an InteractionOperator representation of this system in an active space. We then map that operator to qubits. We then demonstrate that one can rotate the orbital basis of the InteractionOperator using random angles to obtain a totally different operator that is still iso-spectral.
End of explanation
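# Added sketch of the [] access described above (the indices are arbitrary illustrations;
# molecular_hamiltonian is the rotated operator at this point).
print(molecular_hamiltonian.constant)
print(molecular_hamiltonian[(0, 1), (0, 0)])                  # a one-body coefficient
print(molecular_hamiltonian[(0, 1), (1, 1), (1, 0), (0, 0)])  # a two-body coefficient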
from openfermion.hamiltonians import mean_field_dwave
from openfermion.transforms import get_quadratic_hamiltonian
# Set model
x_dim = 2
y_dim = 2
tunneling = 2.
sc_gap = 1.
periodic = True
# Get FermionOperator
mean_field_model = mean_field_dwave(x_dim, y_dim,
tunneling, sc_gap, periodic=periodic)
# Convert to a QuadraticHamiltonian
quadratic_hamiltonian = get_quadratic_hamiltonian(mean_field_model)
# Compute the ground energy
ground_energy = quadratic_hamiltonian.ground_energy()
print(ground_energy)
Explanation: 9. Quadratic Hamiltonians and Slater determinants
The general electronic structure Hamiltonian $H = h_0 + \sum_{pq} h_{pq} a^\dagger_p a_q + \frac{1}{2}\sum_{pqrs} h_{pqrs} a^\dagger_p a^\dagger_q a_r a_s$ contains terms that act on up to 4 sites, or is quartic in the fermionic creation and annihilation operators. However, in many situations we may fruitfully approximate these Hamiltonians by replacing these quartic terms with terms that act on at most 2 fermionic sites, or quadratic terms, as in mean-field approximation theory.
These Hamiltonians have a number of special properties one can exploit for efficient simulation and manipulation of the Hamiltonian, thus warranting a special data structure. We refer to Hamiltonians which only contain terms that are quadratic in the fermionic creation and annihilation operators as quadratic Hamiltonians, and include the general case of non-particle-conserving terms as in a general Bogoliubov transformation. Eigenstates of quadratic Hamiltonians can be prepared efficiently on both a quantum and classical computer, making them amenable to initial guesses for many more challenging problems.
A general quadratic Hamiltonian takes the form
$$H = \sum_{p, q} (M_{pq} - \mu \delta_{pq}) a^\dagger_p a_q + \frac{1}{2} \sum_{p, q} (\Delta_{pq} a^\dagger_p a^\dagger_q + \Delta_{pq}^* a_q a_p) + \text{constant},$$
where $M$ is a Hermitian matrix, $Δ$ is an antisymmetric matrix, $δ_{pq}$ is the Kronecker delta symbol, and $μ$ is a chemical potential term which we keep separate from $M$ so that we can use it to adjust the expectation of the total number of particles. In OpenFermion, quadratic Hamiltonians are conveniently represented and manipulated using the QuadraticHamiltonian class, which stores $M$, $Δ$, $μ$, and the constant. It is specialized to exploit the properties unique to quadratic Hamiltonians. Like InteractionOperator and InteractionRDM, it inherits from the PolynomialTensor class.
The BCS mean-field model of superconductivity is a quadratic Hamiltonian. The following code constructs an instance of this model as a FermionOperator, converts it to a QuadraticHamiltonian, and then computes its ground energy:
End of explanation
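# Cross-check (added sketch reusing utilities from the Hubbard example): the same ground
# energy can be recovered from the sparse-matrix representation of the FermionOperator.
from openfermion.transforms import get_sparse_operator
from openfermion.utils import get_ground_state
sparse_mean_field = get_sparse_operator(mean_field_model)
print(get_ground_state(sparse_mean_field)[0])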
orbital_energies,constant = quadratic_hamiltonian.orbital_energies()
print(f'{orbital_energies}\n{constant}')
Explanation: Any quadratic Hamiltonian may be rewritten in the form
$$H = \sum_p \varepsilon_p b^\dagger_p b_p + \text{constant},$$
where the $b_p$ are new annihilation operators that satisfy the fermionic anticommutation relations, and which are linear combinations of the old creation and annihilation operators. This form of $H$ makes it easy to deduce its eigenvalues; they are sums of subsets of the $ε_p$, which we call the orbital energies of $H$. The following code computes the orbital energies and the constant:
End of explanation
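# Added sanity check: since every eigenvalue is a sum of a subset of the orbital energies
# plus the constant, the ground energy is the sum of the negative orbital energies plus
# the constant.
import numpy
print(numpy.sum(numpy.minimum(orbital_energies, 0.)) + constant)
print(quadratic_hamiltonian.ground_energy())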
from openfermion.utils import gaussian_state_preparation_circuit
circuit_description,start_orbitals = gaussian_state_preparation_circuit(quadratic_hamiltonian)
for parallel_ops in circuit_description:
print(parallel_ops)
print(f'\n{start_orbitals}')
Explanation: Eigenstates of quadratic hamiltonians are known as fermionic Gaussian states, and they can be prepared efficiently on a quantum computer. One can use OpenFermion to obtain circuits for preparing these states. The following code obtains the description of a circuit which prepares the ground state (operations that can be performed in parallel are grouped together), along with a description of the starting state to which the circuit should be applied:
End of explanation |
13,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2: Exploring Data
Loading a CSV file
NumPy, pandas, and matplotlib are the packages commonly used for data analysis
Step1: Create a variable url pointing to a CSV file, then load it with the read_csv() function.
Step2: The variable df contains a DataFrame object, a two-dimensional tabular pandas data structure. Next, call the head(n) method to display the first n rows of data. The notebook renders it as an HTML table, as shown below: | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Chapter 2: Exploring Data
Loading a CSV file
NumPy, pandas, and matplotlib are the packages commonly used for data analysis
End of explanation
url = 'http://aima.cs.berkeley.edu/data/iris.csv'
df = pd.read_csv(url,delimiter=',')
Explanation: Create a variable url pointing to a CSV file, then load it with the read_csv() function.
End of explanation
df.head(2)
df.describe()
from numpy import genfromtxt, zeros
data = genfromtxt(url,delimiter=',',usecols=(0,1,2,3))
target = genfromtxt(url,delimiter=',',usecols=(4),dtype=str)
print(data.shape)
print(target.shape)
print(set(target))
from pylab import plot, show
plot(data[target=='setosa',0],data[target=='setosa',2],'bo')
plot(data[target=='versicolor',0],data[target=='versicolor',2],'ro')
plot(data[target=='virginica',0],data[target=='virginica',2],'go')
show()
Explanation: The variable df contains a DataFrame object, a two-dimensional tabular pandas data structure. Next, call the head(n) method to display the first n rows of data. The notebook renders it as an HTML table, as shown below:
End of explanation |
13,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify
Step13: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step14: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step16: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail, it's just there for you to browse. In summary
Step17: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step18: Create and run custom training job
To train a custom model, you perform two steps
Step19: Prepare your command-line arguments
Now define the command-line arguments for your custom training container
Step20: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters
Step21: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step22: Evaluate the model
Now find out how good the model is.
Load evaluation data
You will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
Step23: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step24: Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model
Step25: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
Step26: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to an Vertex Model resource. These settings are referred to as the explanation metadata, which consists of
Step27: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of
Step28: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step29: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters
Step30: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
Step31: Prepare the request content
You are going to send the CIFAR10 image as compressed JPG image, instead of the raw uncompressed bytes
Step32: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step33: Understanding the explanations response
Preview the images and their predicted classes without the explanations. Why did the model predict these classes?
Step34: Visualize the images with AI Explanations
The images returned show the explanations for only the top class predicted by the model. This means that if one of the model's predictions is incorrect, the pixels you see highlighted are for the incorrect class. For example, if the model predicted "airplane" when it should have predicted "cat", you can see explanations for why the model classified this image as an airplane.
If you deployed an Integrated Gradients model, you can visualize its feature attributions. Currently, the highlighted pixels returned from AI Explanations show the top 60% of pixels that contributed to the model's prediction. The pixels you see after running the cell below show the pixels that most signaled the model's prediction.
Step35: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step36: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: Custom training image classification model for online prediction with explainability
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_online_explain.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom image classification model for online prediction with explanation.
Dataset
The dataset used for this tutorial is the CIFAR10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a prediction with explanations on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Set explanation parameters.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction with explanation.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
if os.getenv("IS_TESTING"):
! apt-get update && apt-get install -y python3-opencv-headless
! apt-get install -y libgl1-mesa-dev
! pip3 install --upgrade opencv-python-headless $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more here hardware accelerator support for your region
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
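For example, with the default TF = "2-1" and no GPUs requested, the cell above resolves to image URIs of this form:
# TRAIN_IMAGE  -> "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest"
# DEPLOY_IMAGE -> "gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest"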
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note: when it is referred to in a worker pool specification, the directory slash is replaced with a dot (trainer.task) and the file suffix (.py) is dropped.
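For reference, this is roughly how that dotted path shows up in a raw worker pool specification (a sketch only; this notebook instead uses the higher-level CustomTrainingJob class, and the package URI below assumes the tar ball created later in this tutorial):
worker_pool_spec = {
    "machine_spec": {"machine_type": "n1-standard-4"},
    "replica_count": 1,
    "python_package_spec": {
        "executor_image_uri": TRAIN_IMAGE,                          # training container defined earlier
        "package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],  # the packaged trainer folder
        "python_module": "trainer.task",                            # trainer/task.py -> trainer.task
        "args": [],
    },
}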
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail; it's just there for you to browse. In summary, the script:
Gets the directory in which to save the model artifacts from the command line (--model-dir) and, if not specified, from the environment variable AIP_MODEL_DIR.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
job = aip.CustomTrainingJob(
display_name="cifar10_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
"--epochs=" + EPOCHS: The number of epochs for training.
"--steps=" + STEPS: The number of steps per epoch.
End of explanation
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
Explanation: Evaluate the model
Now find out how good the model is.
Load evaluation data
You will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.
You don't need the training data, which is why it is loaded as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:<br/>
2. The labels are currently scalar (sparse). If you look back at the compile() step in the trainer/task.py script, you will find that it was compiled for sparse labels. So we don't need to do anything more.
End of explanation
local_model.evaluate(x_test, y_test)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
rescale = tf.cast(resized / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(local_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
local_model,
model_path_to_deploy,
signatures={
"serving_default": serving_fn,
# Required for XAI
"xai_preprocess": preprocess_fn,
"xai_model": m_call,
},
)
Explanation: Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:
- io.decode_jpeg- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).
- image.convert_image_dtype - Changes integer pixel values to float 32.
- image.resize - Resizes the image to match the input shape for the model.
- resized / 255.0 - Rescales (normalization) the pixel data between 0 and 1.
At this point, the data can be passed to the model (m_call).
XAI Signatures
When the serving function is saved back with the underlying model (tf.saved_model.save), you specify the input layer of the serving function as the signature serving_default.
For XAI image models, you need to save two additional signatures from the serving function:
xai_preprocess: The preprocessing function in the serving function.
xai_model: The concrete function for calling the model.
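If you want to double-check the saved signatures from the command line, TensorFlow's saved_model_cli tool can list them (shown as a sketch; it reads the Cloud Storage path saved above):
! saved_model_cli show --dir $model_path_to_deploy --tag_set serve --signature_def serving_default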
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
End of explanation
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
Explanation: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to an Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:
parameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:
Shapley - Note, not recommended for image data -- can be very long running
XRAI
Integrated Gradients
metadata: This is the specification for how the algorithm is applied to your custom model.
Explanation Parameters
Let's first dive deeper into the settings for the explainability algorithm.
Shapley
Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.
Use Cases:
- Classification and regression on tabular data.
Parameters:
path_count: This is the number of paths over the features that will be processed by the algorithm. An exact computation of the Shapley values requires M! paths, where M is the number of features. For the CIFAR10 dataset, M would be 3072 (32*32*3).
For any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * path_count.
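For example, with path_count=10 and the 3072 features of a CIFAR10 image, the algorithm evaluates roughly 3072 * 10 = 30,720 paths rather than 3072! paths.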
Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.
Use Cases:
- Classification and regression on tabular data.
- Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.
XRAI
Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.
Use Cases:
Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.
In the next code cell, set the variable XAI to which explainabilty algorithm you will use on your custom model.
End of explanation
random_baseline = np.random.rand(32, 32, 3)
input_baselines = [{"number_vaue": x} for x in random_baseline]
INPUT_METADATA = {"input_tensor_name": CONCRETE_INPUT, "modality": "image"}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)
output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)
metadata = aip.explain.ExplanationMetadata(
inputs={"image": input_metadata}, outputs={"class": output_metadata}
)
Explanation: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of:
outputs: A scalar value in the output to attribute -- what to explain. For example, in a probability output [0.1, 0.2, 0.7] for classification, one wants an explanation for 0.7. Consider the following formulae, where the output is y and that is what we want to explain.
y = f(x)
Consider the following formulae, where the outputs are y and z. Since we can only do attribution for one scalar value, we have to pick whether we want to explain the output y or z. Assume in this example the model is object detection and y and z are the bounding box and the object classification. You would want to pick which of the two outputs to explain.
y, z = f(x)
The dictionary format for outputs is:
{ "outputs": { "[your_display_name]":
"output_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the output to explain. A common example is "probability".<br/>
- "output_tensor_name": The key/value field to identify the output layer to explain. <br/>
- [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.
</blockquote>
inputs: The features for attribution -- how they contributed to the output. Consider the following formulae, where a and b are the features. We have to pick which features to explain and how they contributed. Assume that this model is deployed for A/B testing, where a holds the data_items for the prediction and b identifies whether the model instance is A or B. You would want to pick a (or some subset of it) for the features, and not b, since it does not contribute to the prediction.
y = f(a,b)
The minimum dictionary format for inputs is:
{ "inputs": { "[your_display_name]":
"input_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the input to explain. A common example is "features".<br/>
- "input_tensor_name": The key/value field to identify the input layer for the feature attribution. <br/>
- [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.
</blockquote>
Since the inputs to the model are images, you can specify the following additional field as a reporting/visualization aid:
<blockquote>
- "modality": "image": Indicates the field values are image data.
</blockquote>
End of explanation
model = aip.Model.upload(
display_name="cifar10_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
explanation_parameters: Parameters to configure explaining for Model's predictions.
explanation_metadata: Metadata describing the Model's input and output for explanation.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
DEPLOYED_NAME = "cifar10-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model ID of a model already deployed to the endpoint. The percents must add up to 100 (see the sketch after this list).
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
min_replica_count: The minimum number of compute instances to provision.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
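A sketch of a two-model split (the deployed-model ID below is a made-up placeholder):
TRAFFIC_SPLIT = {"0": 40, "1234567890123456789": 60}  # 40% to the newly deployed model, 60% to an existing one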
End of explanation
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
Explanation: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
import base64
import cv2
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
Explanation: Prepare the request content
You are going to send the CIFAR10 image as a compressed JPG image, instead of as raw uncompressed bytes:
cv2.imwrite: Use openCV to write the uncompressed image to disk as a compressed JPEG image.
Denormalize the image data from [0,1) range back to [0,255).
Convert the 32-bit floating point values to 8-bit unsigned integers.
tf.io.read_file: Read the compressed JPG images back into memory as raw bytes.
base64.b64encode: Encode the raw bytes into a base 64 encoded string.
End of explanation
instances_list = [{serving_input: {"b64": b64str}}]
response = endpoint.explain(instances_list)
print(response)
Explanation: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[{serving_input: {'b64': bytes}]
Since the explain() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the explain() call is a Python dictionary with the following entries:
ids: The internally assigned unique identifiers for each prediction request.
predictions: The prediction per instance.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
explanations: The feature attributions
End of explanation
from io import BytesIO
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
CLASSES = [
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]
# Note: `response` below is the result of the endpoint.explain() call in the earlier cell
for prediction in response.predictions:
label_index = np.argmax(prediction)
class_name = CLASSES[label_index]
confidence_score = prediction[label_index]
print(
"Predicted class: "
+ class_name
+ "\n"
+ "Confidence score: "
+ str(confidence_score)
)
image = base64.b64decode(b64str)
image = BytesIO(image)
img = mpimg.imread(image, format="JPG")
plt.imshow(img, interpolation="nearest")
plt.show()
Explanation: Understanding the explanations response
Preview the images and their predicted classes without the explanations. Why did the model predict these classes?
End of explanation
import io
for explanation in response.explanations:
attributions = dict(explanation.attributions[0].feature_attributions)
label_index = explanation.attributions[0].output_index[0]
class_name = CLASSES[label_index]
b64str = attributions["image"]["b64_jpeg"]
image = base64.b64decode(b64str)
image = io.BytesIO(image)
img = mpimg.imread(image, format="JPG")
plt.imshow(img, interpolation="nearest")
plt.show()
Explanation: Visualize the images with AI Explanations
The images returned show the explanations for only the top class predicted by the model. This means that if one of the model's predictions is incorrect, the pixels you see highlighted are for the incorrect class. For example, if the model predicted "airplane" when it should have predicted "cat", you can see explanations for why the model classified this image as an airplane.
If you deployed an Integrated Gradients model, you can visualize its feature attributions. Currently, the highlighted pixels returned from AI Explanations show the top 60% of pixels that contributed to the model's prediction. The pixels you see after running the cell below show the pixels that most signaled the model's prediction.
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
13,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot Trajectories with Pictures
take dimension (e.g. red) that I've trained the nn features to classify and plot sequences in that dimension.
use sequences that have images
Step1: Plot Trajectories from User Profile Eval Dataset
same as above, but without images.
Step2: Plot Trajectories from User Profile Eval Dataset using PCA
Step3: Save | Python Code:
# our lib
from lib.resnet50 import ResNet50
from lib.imagenet_utils import preprocess_input, decode_predictions
#keras
from keras.preprocessing import image
from keras.models import Model
import glob
# general-purpose libraries used later in this notebook
import os
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
def preprocess_img(img_path):
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return(x,img)
# instantiate the model
base_model = ResNet50(include_top=False, weights='imagenet') #this will pull the weights from the folder
# cut the model to lower levels only
model = Model(input=base_model.input, output=base_model.get_layer('avg_pool').output)
user_id = 106144465
#get images
folder = '../data_img_sample_item_view_sequences/'
img_files = glob.glob(folder+'*'+str(user_id)+'*')
print(img_files)
# make features
trajectory_features = np.empty((len(img_files),2048))
for i,img_file in enumerate(img_files):
x,img = preprocess_img(img_file) # preprocess
trajectory_features[i,:] = model.predict(x)[0,0,0,:]
red_traj = np.dot(trajectory_features,red_weights)
print('target class')
plt.figure(figsize=(12,6))
len_seq = len(img_files)
fig,axes = plt.subplots(2,len_seq)
# make color
color_red_black = pd.Series(red_traj>0).map({False:'k',True:'r'}).as_matrix()
for i in range(len_seq):
img = image.load_img(img_files[i], target_size=(224, 224))
# images
axes[0,i].imshow(img)
axes[0,i].set_xticklabels([])
#axes[0,i].get_xaxis().set_visible(False)
axes[0,i].get_xaxis().set_ticks([])
axes[0,i].get_yaxis().set_visible(False)
if i<(len_seq-1):
axes[0,i].set_xlabel('view '+str(i))
else:
axes[0,i].set_xlabel('buy')
# bar
axes[1,i].bar(0,red_traj[i],color=color_red_black[i])
axes[1,i].set_ylim([-10,5])
axes[1,i].get_xaxis().set_visible(False)
axes[1,i].axhline(y=0,linestyle='--',color='w')
if i==0:
print('here')
axes[1,i].set_ylabel('red classification')
else:
axes[1,i].get_yaxis().set_visible(False)
sns.despine()
savefile = '../figures/example_sequence_interpretable_features_ui_'+str(user_id)+'.png'
plt.savefig(savefile,dpi=300)
reload(src.s3_data_management)
from src import s3_data_management
s3_data_management.push_results_to_s3(os.path.basename(savefile),savefile)
Explanation: Plot Trajectories with Pictures
take dimension (e.g. red) that I've trained the nn features to classify and plot sequences in that dimension.
use sequences that have images
End of explanation
# load weights from the nn
red_weights = np.loadtxt('../data_nn_features/class_weights_LR_redpink.txt')
# load smaller user behavior dataset
user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2_sample1000.pkl')
user_sample = user_profile.user_id.unique()
print(len(user_profile))
print(len(user_sample))
user_profile.head()
# read nn features
spu_fea = pd.read_pickle("../data_nn_features/spu_fea_sample1000.pkl")
spu_fea.head()
# sample users
size = 10
np.random.seed(1000)
user_ids = np.random.choice(user_profile.user_id.unique(),size=size)
fig,axes = plt.subplots(size,1,figsize=(16,3*size),sharex=True,sharey=True)
for ui,user_id in enumerate(user_ids):
# get his trajectory
trajectory = user_profile.loc[user_profile.user_id==user_id,]
# get trajectory features (make a separate function # )
trajectory_features = np.empty((len(trajectory),2048))
for i,(index,row) in enumerate(trajectory.iterrows()):
trajectory_features[i,:] = spu_fea.loc[spu_fea.spu_id==row['view_spu'],'features'].as_matrix()[0]
# project onto red dimension
red_traj = np.dot(trajectory_features,red_weights)
# plot
axes[ui].plot(np.arange(len(red_traj)),red_traj)
axes[ui].axhline(y=0,linestyle='--',color='k')
axes[ui].set_ylabel('red features')
sns.despine()
plt.xlabel('positition in sequence')
savefile = '../figures/example_sequences_red_10_users.png'
plt.savefig(savefile,dpi=300)
Explanation: Plot Trajectories from User Profile Eval Dataset
same as above, but without images.
End of explanation
from sklearn.decomposition import PCA
# load smaller user behavior dataset
user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2_sample1000.pkl')
user_sample = user_profile.user_id.unique()
print(len(user_profile))
print(len(user_sample))
user_profile.head()
# read nn features
spu_fea = pd.read_pickle("../data_nn_features/spu_fea_sample1000.pkl")
# reduce dimensionality
pca = pickle.load(open('../data_nn_features/pca_all_items_sample1000.pkl','rb'))
pca.components_.shape
projection = pca.transform(X_item_feature[0,:].reshape(-1,1).T)
projection.shape
plt.plot(np.arange(100),projection[0,0:100])
plt.xlabel('component')
plt.ylabel('projection')
sns.despine()
# sample users
size = 10
np.random.seed(1000)
user_ids = np.random.choice(user_profile.user_id.unique(),size=size)
fig,axes = plt.subplots(size,1,figsize=(16,3*size),sharex=True,sharey=True)
for ui,user_id in enumerate(user_ids):
# get his trajectory
trajectory = user_profile.loc[user_profile.user_id==user_id,]
# get trajectory features (make a separate function # )
trajectory_features = np.empty((len(trajectory),2048))
for i,(index,row) in enumerate(trajectory.iterrows()):
trajectory_features[i,:] = spu_fea.loc[spu_fea.spu_id==row['view_spu'],'features'].as_matrix()[0]
# project onto pca dimension
projected_traj = pca.transform(trajectory_features)
# get first dimension
traj_PC1 = projected_traj[:,0]
traj_PC2 = projected_traj[:,1]
traj_PC3 = projected_traj[:,2]
# plot
axes[ui].plot(traj_PC1,label='PC1')
axes[ui].plot(traj_PC2,label='PC2')
axes[ui].plot(traj_PC3,label='PC3')
plt.legend()
axes[ui].axhline(y=0,linestyle='--',color='k')
axes[ui].set_ylabel('red features')
sns.despine()
plt.xlabel('positition in sequence')
savefile = '../figures/example_sequences_PCA_10_users.png'
plt.savefig(savefile,dpi=300)
Explanation: Plot Trajectories from User Profile Eval Dataset using PCA
End of explanation
%%bash
#jupyter nbconvert --to Plotting_Sequences_in_low_dimensions.ipynb && mv Plotting_Sequences_in_low_dimensions.slides.html ../notebook_slides/Plotting_Sequences_in_low_dimensions_v1.slides.html
jupyter nbconvert --to html Plotting_Sequences_in_low_dimensions.ipynb && mv Exploring_Data.html ../notebook_htmls/Plotting_Sequences_in_low_dimensions_v1.html
cp Plotting_Sequences_in_low_dimensions.ipynb ../notebook_versions/Plotting_Sequences_in_low_dimensions_v1.ipynb
Explanation: Save
End of explanation |
13,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Shooting victims by block
Which Chicago block has the most shooting victims so far this year?
Fetch the data from NewsroomDB
NewsroomDB is the Tribune's proprietary database for tracking data that needs to be manually entered and validated rather than something that can be ingested from an official source. It's mostly used to track shooting victims and homicides. As far as I know, CPD doesn't provide granular data on shooting victims and the definition of homicide can be tricky (and vary from source to source).
We'll grab shooting victims from the shootings collection.
Step1: Filter to only shootings this year
Step4: Get the block address
Step5: Count victims by block | Python Code:
import os
import requests
def get_table_url(table_name, base_url=os.environ['NEWSROOMDB_URL']):
return '{}table/json/{}'.format(os.environ['NEWSROOMDB_URL'], table_name)
def get_table_data(table_name):
url = get_table_url(table_name)
try:
r = requests.get(url)
return r.json()
except:
print("Request failed. Probably because the response is huge. We should fix this.")
return get_table_data(table_name)
shooting_victims = get_table_data('shootings')
print("Loaded {} shooting victims".format(len(data['shooting_victims'])))
Explanation: Shooting victims by block
Which Chicago block has the most shooting victims so far this year?
Fetch the data from NewsroomDB
NewsroomDB is the Tribune's proprietary database for tracking data that needs to be manually entered and validated rather than something that can be ingested from an official source. It's mostly used to track shooting victims and homicides. As far as I know, CPD doesn't provide granular data on shooting victims and the definition of homicide can be tricky (and vary from source to source).
We'll grab shooting victims from the shootings collection.
End of explanation
from datetime import date, datetime
def get_shooting_date(shooting_victim):
return datetime.strptime(shooting_victim['Date'], '%Y-%m-%d')
def shooting_is_this_year(shooting_victim, today):
try:
shooting_date = get_shooting_date(shooting_victim)
except ValueError:
if shooting_victim['RD Number']:
msg = "Could not parse date for shooting victim with RD Number {}".format(
shooting_victim['RD Number'])
else:
msg = "Could not parse date for shooting victim with record ID {}".format(
shooting_victim['_id'])
print(msg)
return False
return shooting_date.year == today.year
today = date.today()
# Use a list comprehension to filter the shooting victims to ones that
# occurred this year.
# Also sort by date because it makes it easier to group by year
shooting_victims_this_year = sorted([sv for sv in shooting_victims
if shooting_is_this_year(sv, today)],
key=get_shooting_date)
Explanation: Filter to only shootings this year
End of explanation
import re
def blockify(address):
    """Convert a street address to a block level address.

    Example:
    >>> blockify("1440 W 84th St, Chicago, IL 60620")
    '1400 W 84th St, Chicago, IL 60620'
    """
m = re.search(r'^(?P<address_number>\d+) ', address)
address_number = m.group('address_number')
block_address_number = (int(address_number) // 100) * 100
return address.replace(address_number, str(block_address_number))
def add_block(sv):
    """Make a copy of a shooting victim record with an added block field."""
with_block = dict(**sv)
if not sv['Shooting Location']:
# No location, just set block to none
print("Record with RD number {0} has no location.".format(
sv['RD Number']))
with_block['block'] = None
return with_block
if sv['Shooting Specificity'] == 'Exact':
# Address is exact, convert to 100-block
with_block['block'] = blockify(sv['Shooting Location'])
else:
# Address is already block. Use it
with_block['block'] = sv['Shooting Location']
return with_block
# Create a list of shooting victim dictionaries with blocks
shooting_victims_this_year_with_block = [add_block(sv) for sv in shooting_victims_this_year]
Explanation: Get the block address
End of explanation
import pandas as pd
# Load shooting victims into a dataframe,
# filtering out victim records for which we couldn't determine the block
shooting_victims_this_year_df = pd.DataFrame([sv for sv in shooting_victims_this_year_with_block if sv['block'] is not None])
# Group by block
shooting_victims_this_year_by_block = shooting_victims_this_year_df.groupby('block').size().sort_values(ascending=False)
shooting_victims_this_year_by_block
# Output to a CSV file so I can email to the reporter who requested it
shooting_victims_this_year_by_block.to_csv("shooting_victims_by_block.csv")
Explanation: Count victims by block
End of explanation |
13,550 | Given the following text description, write Python code to implement the functionality described.
Description:
Return True if all numbers in the list l are below threshold t.
This is how the function will work:
below_threshold([1, 2, 4, 10], 100)
True
This is how the function will work:
below_threshold([1, 20, 4, 10], 5)
False
| Python Code:
def below_threshold(l: list, t: int):
for e in l:
if e >= t:
return False
return True |
13,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 18
Wednesday, November 8th, 2017
Databases with SQLite
SQLite Exercises
Today you will work with the candidates and contributors datasets to create a database in Python using SQLite.
The exercises will consist of a sequence of steps to help illustrate basic commands.
<a id='deliverables'></a>
Exercise Deliverables
Create a Jupyter notebook called Exercises-Final.ipynb inside the L18 directory. This is the one we will grade.
For each step in this lecture, there were instructions labeled "Do the following
Step1: We will also use a basic pandas feature to display tables in the database. Although this lecture isn't on pandas, I will still have you use it a little bit.
Step2: Now we create the tables in the database (just like last time).
Step3: <a id='step_1'></a>
Step 1
Read candidates.txt and contributors.txt and insert their values into the respective tables.
Step4: <a id='interlude'></a>
Interlude
Now that you have values in the tables of the database, it would be convenient to be able to visualize those tables in some way. We'll write a little helper function to accomplish this.
Step5: Here's how we can use our helper function. It gives a pretty nice visualization of our table. You should do the same thing with the contributors table.
Step6: <a id='step_2'></a>
Step 2
Step7: We can also see how many entries satisfy the query
Step8: Do the following queries
Step9: Do the following sorts on the contributors table
Step10: Using the DISTINCT clause, you remove duplicate rows.
Step11: Do the following
Step12: What if we want to rename or delete a column? It can't be done in SQLite with a single command. We need to follow some roundabout steps (see SQLite ALTER TABLE). We won't consider this case at the moment.
For now, let's put a few commands together to populate the full_name column.
Step13: Here's another update, this time on an existing column.
Step14: Do the following
Step15: Do the following | Python Code:
import sqlite3
Explanation: Lecture 18
Wednesday, November 8th, 2017
Databases with SQLite
SQLite Exercises
Today you will work with the candidates and contributors datasets to create a database in Python using SQLite.
The exercises will consist of a sequence of steps to help illustrate basic commands.
<a id='deliverables'></a>
Exercise Deliverables
Create a Jupyter notebook called Exercises-Final.ipynb inside the L18 directory. This is the one we will grade.
For each step in this lecture, there were instructions labeled "Do the following:". Put all the code from those instructions in a single Jupyter notebook cell. It should look like a Python script. You must comment where appropriate to demonstrate that you understand what you are doing.
Save and close your database. Be sure to upload your database with the lecture exercises. You must name your database L18DB.sqlite.
Table of Contents
Setting the Stage
Step 1
Interlude: Not required but highly recommended.
Step 2
Step 3
Step 4
Step 5
Step 6
Step 7
Step 8
<a id='setting_the_stage'></a>
Setting the Stage
You should import sqlite3 again like last time.
End of explanation
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
Explanation: We will also use a basic pandas feature to display tables in the database. Although this lecture isn't on pandas, I will still have you use it a little bit.
End of explanation
db = sqlite3.connect('L18DB_demo.sqlite')
cursor = db.cursor()
cursor.execute("DROP TABLE IF EXISTS candidates")
cursor.execute("DROP TABLE IF EXISTS contributors")
cursor.execute("PRAGMA foreign_keys=1")
cursor.execute('''CREATE TABLE candidates (
id INTEGER PRIMARY KEY NOT NULL,
first_name TEXT,
last_name TEXT,
middle_init TEXT,
party TEXT NOT NULL)''')
db.commit() # Commit changes to the database
cursor.execute('''CREATE TABLE contributors (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
last_name TEXT,
first_name TEXT,
middle_name TEXT,
street_1 TEXT,
street_2 TEXT,
city TEXT,
state TEXT,
zip TEXT,
amount REAL,
date DATETIME,
candidate_id INTEGER NOT NULL,
FOREIGN KEY(candidate_id) REFERENCES candidates(id))''')
db.commit()
Explanation: Now we create the tables in the database (just like last time).
End of explanation
with open ("candidates.txt") as candidates:
next(candidates) # jump over the header
for line in candidates.readlines():
cid, first_name, last_name, middle_name, party = line.strip().split('|')
vals_to_insert = (int(cid), first_name, last_name, middle_name, party)
cursor.execute('''INSERT INTO candidates
(id, first_name, last_name, middle_init, party)
VALUES (?, ?, ?, ?, ?)''', vals_to_insert)
with open ("contributors.txt") as contributors:
next(contributors)
for line in contributors.readlines():
cid, last_name, first_name, middle_name, street_1, street_2, \
city, state, zip_code, amount, date, candidate_id = line.strip().split('|')
vals_to_insert = (last_name, first_name, middle_name, street_1, street_2,
city, state, int(zip_code), amount, date, candidate_id)
cursor.execute('''INSERT INTO contributors (last_name, first_name, middle_name,
street_1, street_2, city, state, zip, amount, date, candidate_id)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)
Explanation: <a id='step_1'></a>
Step 1
Read candidates.txt and contributors.txt and insert their values into the respective tables.
End of explanation
def viz_tables(cols, query):
q = cursor.execute(query).fetchall()
framelist = []
for i, col_name in enumerate(cols):
framelist.append((col_name, [col[i] for col in q]))
return pd.DataFrame.from_items(framelist)
Explanation: <a id='interlude'></a>
Interlude
Now that you have values in the tables of the database, it would be convenient to be able to visualize those tables in some way. We'll write a little helper function to accomplish this.
End of explanation
candidate_cols = [col[1] for col in cursor.execute("PRAGMA table_info(candidates)")]
query = '''SELECT * FROM candidates'''
viz_tables(candidate_cols, query)
Explanation: Here's how we can use our helper function. It gives a pretty nice visualization of our table. You should do the same thing with the contributors table.
End of explanation
query = '''SELECT * FROM candidates WHERE middle_init <> ""'''
viz_tables(candidate_cols, query)
Explanation: <a id='step_2'></a>
Step 2: Various Queries
We can query our database for entries with certain characteristics. For example, we can query the candidates table for entries whose middle initial fields are not empty.
End of explanation
print("{} candidates have a middle initial.".format(viz_tables(candidate_cols, query).shape[0]))
Explanation: We can also see how many entries satisfy the query:
End of explanation
query = '''SELECT * FROM candidates ORDER BY id DESC'''
viz_tables(candidate_cols, query)
Explanation: Do the following queries:
Display the contributors where the state is "PA"
Display the contributors where the amount contributed is greater than $\$1000.00$.
Display the contributors from "UT" where the amount contributed is greater than $\$1000.00$.
Display the contributors who didn't list their state
Hint: Match state to the empty string
Display the contributors from "WA" and "PA"
Hint: You will need to use IN ("WA", "PA") in your SELECT statement.
Display the contributors who contributed between $\$100.00$ and $\$200.00$.
Hint: You can use the BETWEEN 100.00 and 200.00 clause.
<a id='step_3'></a>
Step 3: Sorting
It could be beneficial to sort by one of the attributes in the database. The following cell contains a basic sorting demo.
End of explanation
query = '''SELECT last_name, party FROM candidates'''
viz_tables(['last_name', 'party'], query)
Explanation: Do the following sorts on the contributors table:
Sort the contributors table by last_name.
Sort by the amount in descending order where amount is restricted to be between $\$1000.00$ and $\$5000.00$.
Sort the contributors who donated between $\$1000.00$ and $\$5000.00$ by candidate_id and then by amount in descending order.
Hint: Multiple orderings can be accomplished by separating requests after ORDER BY with commas.
e.g. ORDER BY amount ASC, last_name DESC
<a id='step_4'></a>
Step 4: Selecting Columns
So far, we've been selecting all columns from a table (i.e. SELECT * FROM). Often, we just want to select specific columns (e.g. SELECT amount FROM).
End of explanation
query = '''SELECT DISTINCT party FROM candidates'''
viz_tables(['party'], query)
Explanation: Using the DISTINCT clause, you remove duplicate rows.
End of explanation
cursor.execute('''ALTER TABLE candidates ADD COLUMN full_name TEXT''')
candidate_cols = [col[1] for col in cursor.execute("PRAGMA table_info(candidates)")]
viz_tables(candidate_cols, '''SELECT * FROM candidates''')
Explanation: Do the following:
Get the first and last name of contributors. Make sure each row has distinct values.
<a id='step_5'></a>
Step 5: Altering Tables
The ALTER clause allows us to modify tables in our database. Here, we add a new column called full_name to our candidates table.
End of explanation
candidate_cols = [col[1] for col in cursor.execute("PRAGMA table_info(candidates)")] # regenerate columns with full_name
query = '''SELECT id, last_name, first_name FROM candidates''' # Select a few columns
full_name_and_id = [(attr[1] + ", " + attr[2], attr[0]) for attr in cursor.execute(query).fetchall()] # List of tuples: (full_name, id)
update = '''UPDATE candidates SET full_name = ? WHERE id = ?''' # Update the table
for rows in full_name_and_id:
cursor.execute(update, rows)
query = '''SELECT * FROM candidates'''
viz_tables(candidate_cols, query)
Explanation: What if we want to rename or delete a column? It can't be done in SQLite with a single command. We need to follow some roundabout steps (see SQLite ALTER TABLE). We won't consider this case at the moment.
For now, let's put a few commands together to populate the full_name column.
End of explanation
update = '''UPDATE candidates SET full_name = "Eventual Winner" WHERE last_name = "Obama"'''
cursor.execute(update)
update = '''UPDATE candidates SET full_name = "Eventual Loser" WHERE last_name = "McCain"'''
cursor.execute(update)
viz_tables(candidate_cols, query)
Explanation: Here's another update, this time on an existing column.
End of explanation
contributor_cols = [col[1] for col in cursor.execute("PRAGMA table_info(contributors)")] # You've already done this part. I just need to do it here b/c I haven't yet.
function = '''SELECT *, MAX(amount) AS max_amount FROM contributors'''
cursor.execute(function)
Explanation: Do the following:
Add a new column to the contributors table called full_name. The value in that column should be in the form last_name, first_name.
Change the value in the full_name column to the string "Too Much" if someone donated more than $\$1000.00$.
<a id='step_6'></a>
Step 6: Aggregation
You can perform some nice operations on the values in the database. For example, you can compute the maximum, minimum, and sum of a set. You can also count the number of items in a given set. Here's a little example. You can do the rest.
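For instance, MIN and SUM follow the same pattern (a small sketch; it does not answer the exercises below):
query = '''SELECT MIN(amount) AS min_amount, SUM(amount) AS total_amount FROM contributors'''
cursor.execute(query).fetchall()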
End of explanation
query = '''SELECT * FROM candidates LIMIT 3'''
viz_tables(candidate_cols, query)
query = '''SELECT * FROM candidates LIMIT 4 OFFSET 5'''
viz_tables(candidate_cols, query)
query = '''SELECT * FROM candidates ORDER BY last_name LIMIT 4 OFFSET 5'''
viz_tables(candidate_cols, query)
Explanation: Do the following:
Count how many donations there were above $\$1000.00$.
Calculate the average donation.
Calculate the average contribution from each state and display in a table.
Hint: Use code that looks like:
python
"SELECT state,SUM(amount) FROM contributors GROUP BY state"
<a id='step_7'></a>
Step 7: DELETE
We have already noted that SQLite can't drop columns in a straightforward manner. However, it can delete rows quite simply. Here's the syntax:
python
deletion = '''DELETE FROM table_name WHERE condition'''
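For example, a usage sketch (the WHERE clause below is only an illustration, not the exercise answer):
python
cursor.execute('''DELETE FROM contributors WHERE amount < 0''')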
Do the following:
Delete rows in the contributors table with last name "Ahrens".
<a id='step_8'></a>
Step 8: LIMIT
The LIMIT clause offers convenient functionality. It allows you to constrain the number of rows returned by your query. It shows up in many guises.
End of explanation |
13,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported
Step1: You can do a lot with the numpy module. Below is an example to jog your memory
Step2: Do you remember loops? Let's use a while loop to make an array of 10 numbers. Let's have each element be increased by 2 compared with the previous element. Let's also have the first element of the array be 1.
Step3: There's your quick review of numpy and a while loop. Now we can move on to the content of Lecture 3.
Lecture 3 - Distributions, Histograms, and Curve Fitting
In the previous lecture, you learned how to import the module numpy and how to use many of its associated functions. As you've seen, numpy gives us the ability to generate arrays of numbers using commands such as np.linspace and others.
In addition to these commands, you can also use numpy to generate distributions of numbers. The two most frequently used distributions are the following
Step4: Let's generate a numpy array of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
Step5: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array
Step6: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
The normal distribution (randn) selects numbers from a Gaussian curve, sometimes called a bell curve; unlike the uniform distribution, these values are not confined to the interval [0,1).
The equation for a Gaussian curve is the following
Step7: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Instead of trying to derive meaning from a list of numbers, let's actually plot these outputs and see what they look like. This will allow us to determine whether or not these distributions actually look like what we are expecting. How do we do that? The answer is with histograms!
B. Plotting distributions
Histogram documentation
Step8: Now, let's plot a uniform distribution and take a look.
Use what you learned above to define your variable X as a uniformly distributed array with 5000 elements.
Step9: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
Step10: Nice job! Do you see why the "uniform distribution" is referred to as such?
Next, let's take a look at the Gaussian distribution using histograms.
In the cell below, generate a vector of length 5000, called X, from the normal (Gaussian) distribution and plot a histogram with 50 bins.
HINT
Step11: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
As a reminder, this is considered the "standard" normal distribution, and it's not particularly interesting. We can transform the distribution given by np.random.randn (and make it more interesting!) using simple arithmetic.
Run the cell below to see. How is the code below different from the code you've already written?
Step12: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Take a look at the histograms you have generated and compare them. Do the histograms of the uniform and normal (Gaussian) distributions look different? If so, how? Describe your observations in the cell below.
Step13: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
Step14: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
The bins array returned by plt.hist is longer (by one element) than the actual number of bins. Why? Because the bins array contains all the edges of the bins. For example, if you have 2 bins, you will have 3 edges. Does this make sense?
So you can generate these outputs, but what can you do with them? You can average consecutive elements from the bins output to get, in a sense, a location of the center of a bin. Let's call it bin_avg. Then you can plot the number of observations in that bin (N) against the bin location (bin_avg).
Step15: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
C. Checking your distributions with statistics
If you ever want to check that your distributions are giving you what you expect, you can use numpy to calculate the mean and standard deviation of your distribution. Let's do this for X, our Gaussian distribution, and print the results.
Run the cell below. Are your mean and standard deviation what you expect them to be?
Step16: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
D. Visualizing and understanding real data
Hope you're excited -- we're about to get our hands on some real data! Let's import a list of fluorescence lifetimes in nanoseconds from Nitrogen-Vacancy defects in diamond.
(While it is not at all necessary to understand the physics behind this, know that this is indeed real data! You can read more about it at http
Step17: Next, plot a histogram of this data set (play around with the number of bins, too).
Step18: Now, calculate and print the mean and standard deviation of this distribution.
Step19: Nice job! Now that you're used to working with real data, we're going to try to fit some more real data to known functions to gain a better understanding of that data.
E. Basic curve fitting
In this section, we're going to introduce you to the Python module known as scipy (short for Scientific Python).
scipy allows you to perform a range of functions such as numerical integration and optimization. In particular, it's useful for data analysis, which we shall see shortly. In particular, we will do curve fitting using curve_fit from scipy.optimize.
Curve fitting documentation
Step20: We will show you an example, and then you get to try it out for yourself!
We start by creating an equally-spaced numpy array x_vals consisting of 100 numbers from -5 to 5. Try it out yourself below.
Step21: Next, we will define a function $f(x) = \frac 1 3x^2+3$ that will square the elements in x and add an offset. Call this function f_scalar, and implement it (for scalar values) below.
Step22: We will create a new variable, y, that will call the function f_scalar with x_vals as the input. Note that we are using two separate variable names, x and x_vals, so we don't confuse them! This is good programming practice; you should try not to use the same variable names unless you are intentionally overriding something.
Step23: Now we will add some noise to the array y using the np.random.rand() function and store it in a new variable called y_noisy.
Important question
Step24: Let's see what the y_noisy values look like now
Step25: It seems like there's still a rough parabolic shape, so let's see if we can recover the original y values without any noise.
We can treat this y_noisy as data values that we want to fit with a parabolic function. To do this, we first need to define the general form of a quadratic function
Step26: Then, we want to find the optimal values of a, b, and c that will give a function that fits best to y_noisy.
We do this using the curve_fit function in the following way
Step27: Now that we have the fitted parameters, let's use quadratic to plot the fitted parabola alongside the noisy y values.
Step28: And we can also compare y_fitted to the original y values without any noise
Step29: Not a bad job for your first fit function!
F. More advanced curve fitting
In this section, you will visualize real data and plot a best-fit function to model the underlying physics.
You just used curve_fit above to fit simulated data to a quadratic function. Using that code as your guide, combined with the steps below, you will use curve_fit to fit your real data to a non-linear function that you define. This exercise will combine most of what you've learned so far!
Steps for using curve_fit
Here is the basic outline on how to use curve_fit. As this is the last section, you will mostly be on your own. Try your best with new skills you have learned here and feel free to ask for help!
1) Load in your x and y data. You will be using "photopeak.txt", which is in the folder Data.
HINT 1
Step30: So you've imported your data and plotted it. It should look similar to the figure below. Run the next cell to see.
Step31: What type of function would you say this is? Think back to the distributions we've learned about today. Any ideas?
Based on what you think, define your function below. | Python Code:
import numpy as np
Explanation: Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported:
End of explanation
np.linspace(0,10,11)
Explanation: You can do a lot with the numpy module. Below is an example to jog your memory:
End of explanation
# your code here
#start by defining the length of the array
arrayLength = 10
#let's set the array to currently be an array of 0s
myArray = np.zeros(arrayLength) #make a numpy array of 10 zeros
# Let's define the first element of the array
myArray[0] = 1
i = 1 #with the first element defined, we can calculate the rest of the sequence beginning with the 2nd element
while i < arrayLength:
myArray[i] = myArray[i-1]+2
i = i + 1
print(myArray)
Explanation: Do you remember loops? Let's use a while loop to make an array of 10 numbers. Let's have each element be increased by 2 compared with the previous element. Let's also have the first element of the array be 1.
End of explanation
import numpy as np
Explanation: There's your quick review of numpy and a while loop. Now we can move on to the content of Lecture 3.
Lecture 3 - Distributions, Histograms, and Curve Fitting
In the previous lecture, you learned how to import the module numpy and how to use many of its associated functions. As you've seen, numpy gives us the ability to generate arrays of numbers using commands usch as np.linspace and others.
In addition to these commands, you can also use numpy to generate distributions of numbers. The two most frequently used distributions are the following:
the uniform distribution: np.random.rand
the normal (Gaussian) distribution: np.random.randn
(notice the "n" that distinguishes the functions for generating normal vs. uniform distributions)
A. Generating distributions
Let's start with the uniform distribution (rand), which gives numbers uniformly distributed over the interval [0,1).
If you haven't already, import the numpy module.
End of explanation
np.random.rand(5)
Explanation: Let's generate a numpy array of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
End of explanation
np.random.rand(5,5)
Explanation: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array:
End of explanation
np.random.randn(5)
Explanation: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
The normal distribution (randn) selects numbers from a Gaussian curve, sometimes called a bell curve; unlike the uniform distribution, these values are not confined to the interval [0,1).
The equation for a Gaussian curve is the following:
$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$
where $\mu$ is the mean and $\sigma$ is the standard deviation.
Don't worry about memorizing this equation, but do know that it exists and that numbers can be randomly drawn from it.
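As an aside (a small sketch added here, not part of the original notebook), the curve itself is easy to evaluate directly with numpy, which helps connect the formula to the histograms below:
def gaussian_pdf(x, mu=0.0, sigma=1.0):
    # direct translation of the equation above
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)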
In python, the command np.random.randn selects numbers from the "standard" normal distribution.
All this means is that, in the equation above, $\mu$ (mean) = 0 and $\sigma$ (standard deviation) = 1. randn takes the size of the output as an argument just like rand does.
Try running the cell below to see the numbers you get from a normal distribution.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Instead of trying to derive meaning from a list of numbers, let's actually plot these outputs and see what they look like. This will allow us to determine whether or not these distributions actually look like what we are expecting. How do we do that? The answer is with histograms!
B. Plotting distributions
Histogram documentation: http://matplotlib.org/1.2.1/api/pyplot_api.html?highlight=hist#matplotlib.pyplot.hist
Understanding distributions is perhaps best done by plotting them in a histogram. Lucky for us, matplotlib makes that very simple for us.
To make a histogram, we use the command plt.hist, which takes -- at minimum -- a vector of values that we want to plot as a histogram. We can also specify the number of bins.
First things first: let's import matplotlib:
End of explanation
#your code here
X = np.random.rand(5000)
Explanation: Now, let's plot a uniform distribution and take a look.
Use what you learned above to define your variable X as a uniformly distributed array with 5000 elements.
End of explanation
plt.hist(X, bins=20)
Explanation: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
End of explanation
#your code here
X = np.random.randn(5000)
plt.hist(X, bins=50)
Explanation: Nice job! Do you see why the "uniform distribution" is referred to as such?
Next, let's take a look at the Gaussian distribution using histograms.
In the cell below, generate a vector of length 5000, called X, from the normal (Gaussian) distribution and plot a histogram with 50 bins.
HINT: You will use a similar format as above when you defined and plotted a uniform distribution.
End of explanation
mu = 5 #the mean of the distribution
sigma = 3 #the standard deviation
X = sigma * np.random.randn(5000) + mu
plt.hist(X,bins=50)
Explanation: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
As a reminder, this is considered the "standard" normal distribution, and it's not particularly interesting. We can transform the distribution given by np.random.randn (and make it more interesting!) using simple arithmetic.
Run the cell below to see. How is the code below different from the code you've already written?
End of explanation
#write your observations here
Explanation: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Take a look at the histograms you have generated and compare them. Do the histograms of the uniform and normal (Gaussian) distributions look different? If so, how? Describe your observations in the cell below.
End of explanation
N,bins,patches = plt.hist(X, bins=50)
Explanation: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
End of explanation
bin_avg = (bins[1:]+bins[:-1])/2
plt.plot(bin_avg, N, 'r*')
plt.show()
Explanation: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
The bins array returned by plt.hist is longer (by one element) than the actual number of bins. Why? Because the bins array contains all the edges of the bins. For example, if you have 2 bins, you will have 3 edges. Does this make sense?
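A quick check you can run (a one-line sketch using the N and bins returned above):
print(len(bins), len(N))   # e.g. 51 edges for 50 bins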
So you can generate these outputs, but what can you do with them? You can average consecutive elements from the bins output to get, in a sense, a location of the center of a bin. Let's call it bin_avg. Then you can plot the number of observations in that bin (N) against the bin location (bin_avg).
End of explanation
mean = np.mean(X)
std = np.std(X)
print('mean: '+ repr(mean) )
print('standard deviation: ' + repr(std))
Explanation: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
C. Checking your distributions with statistics
If you ever want to check that your distributions are giving you what you expect, you can use numpy to calculate the mean and standard deviation of your distribution. Let's do this for X, our Gaussian distribution, and print the results.
Run the cell below. Are your mean and standard deviation what you expect them to be?
End of explanation
lifetimes = np.loadtxt('Data/LifetimeData.txt')
Explanation: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
D. Visualizing and understanding real data
Hope you're excited -- we're about to get our hands on some real data! Let's import a list of fluorescence lifetimes in nanoseconds from Nitrogen-Vacancy defects in diamond.
(While it is not at all necessary to understand the physics behind this, know that this is indeed real data! You can read more about it at http://www.nature.com/articles/ncomms11820 if you are so inclined. This data is from Fig. 6a).
Do you remember learning how to import data in yesterday's Lecture 2? The command you want to use is np.loadtxt. The data we'll be working with is called LifetimeData.txt, and it's located in the Data folder.
End of explanation
#your code here
N,bins,patches = plt.hist(lifetimes,bins=40)
Explanation: Next, plot a histogram of this data set (play around with the number of bins, too).
End of explanation
#your code here
mean = np.mean(lifetimes)
std = np.std(lifetimes)
print("mean: "+repr(mean))
print("standard deviation: "+repr(std))
Explanation: Now, calculate and print the mean and standard deviation of this distribution.
End of explanation
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Nice job! Now that you're used to working with real data, we're going to try to fit some more real data to known functions to gain a better understanding of that data.
E. Basic curve fitting
In this section, we're going to introduce you to the Python module known as scipy (short for Scientific Python).
scipy allows you to perform a range of functions such as numerical integration and optimization. In particular, it's useful for data analysis, which we shall see shortly. In particular, we will do curve fitting using curve_fit from scipy.optimize.
Curve fitting documentation: https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.curve_fit.html
In this section, you will learn how to use curve fitting on simulated data. Next will be real data!
First, let's load the modules.
End of explanation
# your code here
x_vals = np.linspace(-5,5,100)
Explanation: We will show you an example, and then you get to try it out for yourself!
We start by creating an equally-spaced numpy array x_vals consisting of 100 numbers from -5 to 5. Try it out yourself below.
End of explanation
# your code here
def f_scalar(x):
return 1/3*x**2 + 3
Explanation: Next, we will define a function $f(x) = \frac 1 3x^2+3$ that will square the elements in x and add an offset. Call this function f_scalar, and implement it (for scalar values) below.
End of explanation
y = f_scalar(x_vals)
Explanation: We will create a new variable, y, that will call the function f_scalar with x_vals as the input. Note that we are using two separate variable names, x and x_vals, so we don't confuse them! This is good programming practice; you should try not to use the same variable names unless you are intentionally overriding something.
End of explanation
# your code here
y_noisy = y + np.random.rand(100)
Explanation: Now we will add some noise to the array y using the np.random.rand() function and store it in a new variable called y_noisy.
Important question: What value for the array size should we pass into this function?
End of explanation
plt.plot(x_vals,y_noisy)
Explanation: Let's see what the y_noisy values look like now
End of explanation
def quadratic(x,a,b,c):
return a*x**2 + b*x + c
Explanation: It seems like there's still a rough parabolic shape, so let's see if we can recover the original y values without any noise.
We can treat this y_noisy as data values that we want to fit with a parabolic function. To do this, we first need to define the general form of a quadratic function:
End of explanation
optimal_values, _ = curve_fit(quadratic,x_vals,y_noisy)
a = optimal_values[0]
b = optimal_values[1]
c = optimal_values[2]
print(a, b, c)
Explanation: Then, we want to find the optimal values of a, b, and c that will give a function that fits best to y_noisy.
We do this using the curve_fit function in the following way:
curve_fit(f,xdata,ydata)
where f is the model we're fitting to (quadratic in this case).
This function will return the optimal values for a, b, and c in a list. Try it out!
End of explanation
y_fitted = quadratic(x_vals,a,b,c)
plt.plot(x_vals,y_fitted)
plt.plot(x_vals,y_noisy)
Explanation: Now that we have the fitted parameters, let's use quadratic to plot the fitted parabola alongside the noisy y values.
End of explanation
plt.plot(x_vals,y_fitted)
plt.plot(x_vals,y)
Explanation: And we can also compare y_fitted to the original y values without any noise:
End of explanation
# let's get you started by importing the right libraries
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
%matplotlib inline
# Step 1: Import the data
xData, yData = np.loadtxt('Data/photopeak.txt', usecols=(0,1), unpack=True)
print(xData,yData)
# Step 2: Plot the data to see what it looks like
plt.plot(xData,yData,'*')
Explanation: Not a bad job for your first fit function!
F. More advanced curve fitting
In this section, you will visualize real data and plot a best-fit function to model the underlying physics.
You just used curve_fit above to fit simulated data to a quadratic function. Using that code as your guide, combined with the steps below, you will use curve_fit to fit your real data to a non-linear function that you define. This exercise will combine most of what you've learned so far!
Steps for using curve_fit
Here is the basic outline on how to use curve_fit. As this is the last section, you will mostly be on your own. Try your best with new skills you have learned here and feel free to ask for help!
1) Load in your x and y data. You will be using "photopeak.txt", which is in the folder Data.
HINT 1: When you load your data, I recommend making use of the usecols and unpack argument.
HINT 2: Make sure the arrays are the same length!
2) Plot this data to see what it looks like. Determine the function your data most resembles.
3) Define the function to which your data will be fit.
4) PART A: Use curve_fit and point the output to popt and pcov. These are the fitted parameters (popt) and their estimated errors (pcov).
4) PART B - OPTIONAL (only do this if you get through all the other steps): Input a guess (p0) and bounds (bounds) into curve_fit. For p0, I would suggest [0.5, 0.1, 5].
5) Pass the popt parameters into the function you've defined to create the model fit.
6) Plot your data and your fitted function.
7) Pat yourself on the back!
End of explanation
from IPython.display import display, Image
display(Image(filename='Data/photopeak.png'))
Explanation: So you've imported your data and plotted it. It should look similar to the figure below. Run the next cell to see.
End of explanation
# Step 3: Define your function here
def myGaussian(Xvals,A,mu,sigma):
return (A/np.sqrt(2*np.pi*sigma**2))*np.exp(-((Xvals-mu)**2/(2*sigma**2)))
# Step 3.5: SANITY CHECK! Use this step as a way to check that the function you defined above is mathematically correct.
mu = 0.66 #the mean of the distribution
sigma = 0.04 #the standard deviation
A = 10;
Xvals = np.linspace(0.50,0.80,100)
Yvals = A*myGaussian(Xvals,A,mu,sigma)
plt.plot(Xvals,Yvals)
# Step 4: Use curve_fit to generate your output parameters
popt, pcov = curve_fit(myGaussian, xData, yData, p0=[0.5, 0.1, 5])
#perr = np.sqrt(np.diag(pcov))
# Step 5: Generate your model fit
xFit = np.linspace(min(xData),max(xData),100) #give this
line_fit = myGaussian(xFit, *popt)
# Step 6: Plot the best fit function and the scatter plot of data
plt.plot(xData, yData, 'r*')
plt.plot(xFit, line_fit)
Explanation: What type of function would you say this is? Think back to the distributions we've learned about today. Any ideas?
Based on what you think, define your function below.
End of explanation |
13,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Several pieces of the puzzle have come together lately to really demonstrate the power of the scientific python software packages to handle complex dynamic and controls problems (i.e. IPython notebooks, matplotlib animations, python-control, and our software packages
Step1: Setup
This example depends on the following software
Step2: We can enable mathematical rendering of the resulting equations in the notebook with the following command.
Step3: Now specify the number of links, $n$. I'll start with 5 since the Wolfram folks only showed four.
Step4: mechanics will need the generalized coordinates, generalized speeds, and the input force which are all time dependent variables and the bob masses, link lengths, and acceleration due to gravity which are all constants. Time, $t$, is also made available because we will need to differentiate with respect to time.
Step5: Now we can create an inertial reference frame $I$ and define the point, $O$, as the origin.
Step6: Secondly, we define the first point of the pendulum as a particle which has mass. This point can only move laterally and represents the motion of the "cart".
Step7: Now we can define the $n$ reference frames, particles, gravitational forces, and kinematical differential equations for each of the pendulum links. This is easily done with a loop.
Step8: With all of the necessary point velocities and particle masses defined, the KanesMethod class can be used to derive the equations of motion of the system automatically.
Step9: The equations of motion are quite long, as can be seen below. This is the general nature of most non-simple multibody problems. That is why SymPy is so useful; no more mistakes in algebra, differentiation, or copying hand-written equations. Note that trigsimp can take quite a while to complete for extremely large expressions. Below we print $\tilde{M}$ and $\tilde{f}$ from $\tilde{M}\dot{u}=\tilde{f}$ to show the size of the expressions.
Step10: $\tilde{M}$ is a function of the constant parameters and the configuration.
Step11: $\tilde{f}$ is a function of the constant parameters, configuration, speeds, and the applied force.
Step12: Simulation
Now that the symbolic equations of motion are available we can simulate the pendulum's motion. We will need some more SymPy functionality and several NumPy functions, and most importantly the integration function from SciPy, odeint.
Step13: First, define some numeric values for all of the constant parameters in the problem.
Step14: Mathematica has a really nice NDSolve function for quickly integrating their symbolic differential equations. We make use of SymPy's lambdify function to do something similar, i.e. to create functions that will evaluate the "full" mass matrix, $M$, and "full" forcing vector, $f$ from $M\dot{x} = f(x, r, t)$ as a NumPy function.
Step16: To integrate the ODE's we need to define a function that returns the derivatives of the states given the current state and time.
Step17: Now that we have the right hand side function, the initial conditions are set such that the pendulum is in the vertical equilibrium and a slight initial rate is set for each speed to ensure the pendulum falls. The equations can then be integrated with SciPy's odeint function given a time series.
Step18: Plotting
The results of the simulation can be plotted with matplotlib. First, load the plotting functionality.
Step19: The coordinate trajectories are plotted below.
Step20: And the generalized speed trajectories.
Step21: Animation
matplotlib now includes very nice animation functions for animating matplotlib plots. First we import the necessary functions for creating the animation.
Step23: The following function was modeled from Jake Vanderplas's post on matplotlib animations.
Step25: Now we can create the animation of the pendulum. This animation will show the open loop dynamics.
Step26: Controller Design
The n-link pendulum can be balanced such that all of the links are inverted above the cart by applying the correct lateral force to the cart. We can design a full state feedback controller based on a linear model of the pendulum about its upright equilibrium point. We'll start by specifying the equilibrium point and parameters in dictionaries. We make sure to use SymPy types in the equilibrium point to ensure proper cancellations in the linearization.
Step27: The KanesMethod class has a method that linearizes the forcing vector about generic state and input perturbation vectors. The equilibrium point and numerical constants can then be substituted in to give the linear system in this form
Step28: Now the numerical $A$ and $B$ matrices can be formed. First substitute numerical parameter values into $M$, $F_A$, and $F_B$.
Step29: Now that we have a linear system, the python-control package can be used to design an optimal controller for the system.
Step30: First we can check to see if the system is, in fact, controllable. The rank of the controllability matrix must be equal to the number of rows in $A$, but the matrix_rank algorithm is numerically ill conditioned and for certain values of $n$ this will fail, as seen below for $n=5$. Nevertheless, the system is controllable, no matter the number of links.
Step31: So now we can compute the optimal gains with a linear quadratic regulator. I chose identity matrices for the weightings for simplicity.
Step33: The gains can now be used to define the required input during simulation to stabilize the system. The input $r$ is simply the gain vector multiplied by the error in the state vector from the equilibrium point, $r(t)=K(x_{eq} - x(t))$.
Step34: Now we can simulate and animate the system to see if the controller works.
Step36: The plots show that we seem to have a stable system.
Step37: The video clearly shows that the controller can balance all $n$ of the pendulum links. The weightings in the lqr design can be tweaked to give different performance if needed.
This example shows that the free and open source scientific Python tools for dynamics are easily comparable in ability and quality to a commercial package such as Mathematica.
The IPython notebook for this example can be downloaded from https | Python Code:
from IPython.display import SVG
SVG(filename='n-pendulum-with-cart.svg')
Explanation: Introduction
Several pieces of the puzzle have come together lately to really demonstrate the power of the scientific python software packages to handle complex dynamic and controls problems (i.e. IPython notebooks, matplotlib animations, python-control, and our software packages: sympy.physics.mechanics and PyDy).
This blog post by Wolfram demonstrates Mathematica's ability to symbolically derive the equations of motion for the n-link pendulum and stabilize it with an LQR controller. This blog post inspired us to replicate the example with all free and open source software.
In this example problem, we derive the equations of motion of an n-link pendulum on a laterally sliding cart and then develop a controller to stabilize it. Balancing a single inverted pendulum is a classic problem that is often a student's first experience with non-linear dynamics and control. The problem here is extended to a general n-link pendulum in which the equations of motion quickly get messy with greater than 2 links.
The diagram below shows the general description of the problem.
End of explanation
import sympy as sm
import sympy.physics.mechanics as me
Explanation: Setup
This example depends on the following software:
IPython
NumPy
SciPy
SymPy >= 0.7.6
matplotlib
python-control
avconv
The easiest way to install the Python packages it is to use conda:
$ conda install ipython-notebook, numpy, scipy, sympy, matplotlib
$ conda install -c https://conda.binstar.org/cwrowley control
avconv should be installed as per its recommended procedure for your operating system.
Equations of Motion
We'll start by generating the equations of motion for the system with SymPy mechanics. The functionality that mechanics provides is much more in depth than Mathematica's functionality. In the Mathematica example, Lagrangian mechanics were implemented manually with Mathematica's symbolic functionality. mechanics provides an assortment of functions and classes to derive the equations of motion for arbitrarily complex (i.e. configuration constraints, nonholonomic motion constraints, etc) multibody systems in a very natural way. First we import the necessary functionality from SymPy.
End of explanation
me.init_vprinting()
Explanation: We can enable mathematical rendering of the resulting equations in the notebook with the following command.
End of explanation
n = 5
Explanation: Now specify the number of links, $n$. I'll start with 5 since the Wolfram folks only showed four.
End of explanation
q = me.dynamicsymbols('q:{}'.format(n + 1)) # Generalized coordinates
u = me.dynamicsymbols('u:{}'.format(n + 1)) # Generalized speeds
f = me.dynamicsymbols('f') # Force applied to the cart
m = sm.symbols('m:{}'.format(n + 1)) # Mass of each bob
l = sm.symbols('l:{}'.format(n)) # Length of each link
g, t = sm.symbols('g t') # Gravity and time
Explanation: mechanics will need the generalized coordinates, generalized speeds, and the input force which are all time dependent variables and the bob masses, link lengths, and acceleration due to gravity which are all constants. Time, $t$, is also made available because we will need to differentiate with respect to time.
End of explanation
I = me.ReferenceFrame('I') # Inertial reference frame
O = me.Point('O') # Origin point
O.set_vel(I, 0) # Origin's velocity is zero
Explanation: Now we can create an inertial reference frame $I$ and define the point, $O$, as the origin.
End of explanation
P0 = me.Point('P0') # Hinge point of top link
P0.set_pos(O, q[0] * I.x) # Set the position of P0
P0.set_vel(I, u[0] * I.x) # Set the velocity of P0
Pa0 = me.Particle('Pa0', P0, m[0]) # Define a particle at P0
Explanation: Secondly, we define the define the first point of the pendulum as a particle which has mass. This point can only move laterally and represents the motion of the "cart".
End of explanation
frames = [I] # List to hold the n + 1 frames
points = [P0] # List to hold the n + 1 points
particles = [Pa0] # List to hold the n + 1 particles
forces = [(P0, f * I.x - m[0] * g * I.y)] # List to hold the n + 1 applied forces, including the input force, f
kindiffs = [q[0].diff(t) - u[0]] # List to hold kinematic ODE's
for i in range(n):
Bi = I.orientnew('B' + str(i), 'Axis', [q[i + 1], I.z]) # Create a new frame
Bi.set_ang_vel(I, u[i + 1] * I.z) # Set angular velocity
frames.append(Bi) # Add it to the frames list
Pi = points[-1].locatenew('P' + str(i + 1), l[i] * Bi.x) # Create a new point
Pi.v2pt_theory(points[-1], I, Bi) # Set the velocity
points.append(Pi) # Add it to the points list
Pai = me.Particle('Pa' + str(i + 1), Pi, m[i + 1]) # Create a new particle
particles.append(Pai) # Add it to the particles list
forces.append((Pi, -m[i + 1] * g * I.y)) # Set the force applied at the point
kindiffs.append(q[i + 1].diff(t) - u[i + 1]) # Define the kinematic ODE: dq_i / dt - u_i = 0
Explanation: Now we can define the $n$ reference frames, particles, gravitational forces, and kinematical differential equations for each of the pendulum links. This is easily done with a loop.
End of explanation
kane = me.KanesMethod(I, q_ind=q, u_ind=u, kd_eqs=kindiffs) # Initialize the object
fr, frstar = kane.kanes_equations(forces, particles) # Generate EoM's fr + frstar = 0
Explanation: With all of the necessary point velocities and particle masses defined, the KanesMethod class can be used to derive the equations of motion of the system automatically.
End of explanation
sm.trigsimp(kane.mass_matrix)
Explanation: The equations of motion are quite long, as can be seen below. This is the general nature of most non-simple multibody problems. That is why SymPy is so useful; no more mistakes in algebra, differentiation, or copying hand-written equations. Note that trigsimp can take quite a while to complete for extremely large expressions. Below we print $\tilde{M}$ and $\tilde{f}$ from $\tilde{M}\dot{u}=\tilde{f}$ to show the size of the expressions.
End of explanation
me.find_dynamicsymbols(kane.mass_matrix)
sm.trigsimp(kane.forcing)
Explanation: $\tilde{M}$ is a function of the constant parameters and the configuration.
End of explanation
me.find_dynamicsymbols(kane.forcing)
Explanation: $\tilde{f}$ is a function of the constant parameters, configuration, speeds, and the applied force.
End of explanation
import numpy as np
from numpy.linalg import solve
from scipy.integrate import odeint
Explanation: Simulation
Now that the symbolic equations of motion are available we can simulate the pendulum's motion. We will need some more SymPy functionality and several NumPy functions, and most importantly the integration function from SciPy, odeint.
End of explanation
arm_length = 1. / n # The maximum length of the pendulum is 1 meter
bob_mass = 0.01 / n # The maximum mass of the bobs is 10 grams
parameters = [g, m[0]] # Parameter definitions starting with gravity and the first bob
parameter_vals = [9.81, 0.01 / n] # Numerical values for the first two
for i in range(n): # Then each mass and length
parameters += [l[i], m[i + 1]]
parameter_vals += [arm_length, bob_mass]
Explanation: First, define some numeric values for all of the constant parameters in the problem.
End of explanation
dynamic = q + u # Make a list of the states
dynamic.append(f) # Add the input force
M_func = sm.lambdify(dynamic + parameters, kane.mass_matrix_full) # Create a callable function to evaluate the mass matrix
f_func = sm.lambdify(dynamic + parameters, kane.forcing_full) # Create a callable function to evaluate the forcing vector
Explanation: Mathematica has a really nice NDSolve function for quickly integrating their symbolic differential equations. We make use of SymPy's lambdify function to do something similar, i.e. to create functions that will evaluate the "full" mass matrix, $M$, and "full" forcing vector, $f$ from $M\dot{x} = f(x, r, t)$ as a NumPy function.
End of explanation
def right_hand_side(x, t, args):
Returns the derivatives of the states.
Parameters
----------
x : ndarray, shape(2 * (n + 1))
The current state vector.
t : float
The current time.
args : ndarray
The constants.
Returns
-------
dx : ndarray, shape(2 * (n + 1))
The derivative of the state.
r = 0.0 # The input force is always zero
arguments = np.hstack((x, r, args)) # States, input, and parameters
dx = np.array(solve(M_func(*arguments), # Solving for the derivatives
f_func(*arguments))).T[0]
return dx
Explanation: To integrate the ODE's we need to define a function that returns the derivatives of the states given the current state and time.
End of explanation
x0 = np.hstack((0.0, # q0
np.pi / 2 * np.ones(len(q) - 1), # q1...qn+1
1e-3 * np.ones(len(u)))) # u0...un+1
t = np.linspace(0.0, 10.0, num=500) # Time vector
x = odeint(right_hand_side, x0, t, args=(parameter_vals,)) # Numerical integration
Explanation: Now that we have the right hand side function, the initial conditions are set such that the pendulum is in the vertical equilibrium and a slight initial rate is set for each speed to ensure the pendulum falls. The equations can then be integrated with SciPy's odeint function given a time series.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(8.0, 6.0)
Explanation: Plotting
The results of the simulation can be plotted with matplotlib. First, load the plotting functionality.
End of explanation
lines = plt.plot(t, x[:, :x.shape[1] // 2])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[:x.shape[1] // 2])
Explanation: The coordinate trajectories are plotted below.
End of explanation
lines = plt.plot(t, x[:, x.shape[1] // 2:])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[x.shape[1] // 2:])
Explanation: And the generalized speed trajectories.
End of explanation
from matplotlib import animation
from matplotlib.patches import Rectangle
Explanation: Animation
matplotlib now includes very nice animation functions for animating matplotlib plots. First we import the necessary functions for creating the animation.
End of explanation
def animate_pendulum(t, states, length, filename=None):
Animates the n-pendulum and optionally saves it to file.
Parameters
----------
t : ndarray, shape(m)
Time array.
states: ndarray, shape(m,p)
State time history.
length: float
The length of the pendulum links.
filename: string or None, optional
If true a movie file will be saved of the animation. This may take some time.
Returns
-------
fig : matplotlib.Figure
The figure.
anim : matplotlib.FuncAnimation
The animation.
# the number of pendulum bobs
    numpoints = states.shape[1] // 2
# first set up the figure, the axis, and the plot elements we want to animate
fig = plt.figure()
# some dimesions
cart_width = 0.4
cart_height = 0.2
# set the limits based on the motion
xmin = np.around(states[:, 0].min() - cart_width / 2.0, 1)
xmax = np.around(states[:, 0].max() + cart_width / 2.0, 1)
# create the axes
ax = plt.axes(xlim=(xmin, xmax), ylim=(-1.1, 1.1), aspect='equal')
# display the current time
time_text = ax.text(0.04, 0.9, '', transform=ax.transAxes)
# create a rectangular cart
rect = Rectangle([states[0, 0] - cart_width / 2.0, -cart_height / 2],
cart_width, cart_height, fill=True, color='red',
ec='black')
ax.add_patch(rect)
# blank line for the pendulum
line, = ax.plot([], [], lw=2, marker='o', markersize=6)
# initialization function: plot the background of each frame
def init():
time_text.set_text('')
rect.set_xy((0.0, 0.0))
line.set_data([], [])
return time_text, rect, line,
# animation function: update the objects
def animate(i):
time_text.set_text('time = {:2.2f}'.format(t[i]))
rect.set_xy((states[i, 0] - cart_width / 2.0, -cart_height / 2))
x = np.hstack((states[i, 0], np.zeros((numpoints - 1))))
y = np.zeros((numpoints))
for j in np.arange(1, numpoints):
x[j] = x[j - 1] + length * np.cos(states[i, j])
y[j] = y[j - 1] + length * np.sin(states[i, j])
line.set_data(x, y)
return time_text, rect, line,
# call the animator function
anim = animation.FuncAnimation(fig, animate, frames=len(t), init_func=init,
interval=t[-1] / len(t) * 1000, blit=True, repeat=False)
# save the animation if a filename is given
if filename is not None:
anim.save(filename, fps=30, writer="avconv", codec='libx264')
Explanation: The following function was modeled from Jake Vanderplas's post on matplotlib animations.
End of explanation
animate_pendulum(t, x, arm_length, filename="open-loop.mp4")
from IPython.display import HTML
html = \
<video width="640" height="480" controls>
<source src="open-loop.mp4" type="video/mp4">
Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/Nj3_npq7MZI.
</video>
HTML(html)
Explanation: Now we can create the animation of the pendulum. This animation will show the open loop dynamics.
End of explanation
equilibrium_point = [sm.S(0)] + [sm.pi / 2] * (len(q) - 1) + [sm.S(0)] * len(u)
equilibrium_dict = dict(zip(q + u, equilibrium_point))
equilibrium_dict
Explanation: Controller Design
The n-link pendulum can be balanced such that all of the links are inverted above the cart by applying the correct lateral force to the cart. We can design a full state feedback controller based on a linear model of the pendulum about its upright equilibrium point. We'll start by specifying the equilibrium point and parameters in dictionaries. We make sure to use SymPy types in the equilibrium point to ensure proper cancellations in the linearization.
End of explanation
M, F_A, F_B, r = kane.linearize(new_method=True, op_point=equilibrium_dict)
sm.simplify(M)
sm.simplify(F_A)
sm.simplify(F_B)
Explanation: The KanesMethod class has a method that linearizes the forcing vector about generic state and input perturbation vectors. The equilibrium point and numerical constants can then be substituted in to give the linear system in this form: $M\dot{x}=F_Ax+F_Br$. The state and input matrices, $A$ and $B$, can then be computed by left side multiplication by the inverse of the mass matrix: $A=M^{-1}F_A$ and $B=M^{-1}F_B$.
End of explanation
parameter_dict = dict(zip(parameters, parameter_vals))
parameter_dict
M_num = sm.matrix2numpy(M.subs(parameter_dict), dtype=float)
F_A_num = sm.matrix2numpy(F_A.subs(parameter_dict), dtype=float)
F_B_num = sm.matrix2numpy(F_B.subs(parameter_dict), dtype=float)
A = np.linalg.solve(M_num, F_A_num)
B = np.linalg.solve(M_num ,F_B_num)
print(A)
print(B)
Explanation: Now the numerical $A$ and $B$ matrices can be formed. First substitute numerical parameter values into $M$, $F_A$, and $F_B$.
End of explanation
import control
from numpy.linalg import matrix_rank
Explanation: Now that we have a linear system, the python-control package can be used to design an optimal controller for the system.
End of explanation
matrix_rank(control.ctrb(A, B)) == A.shape[0]
Explanation: First we can check to see if the system is, in fact, controllable. The rank of the controllability matrix must be equal to the number of rows in $A$, but the matrix_rank algorithm is numerically ill conditioned and for certain values of $n$ this will fail, as seen below for $n=5$. Nevertheless, the system is controllable, no matter the number of links.
End of explanation
K, X, E = control.lqr(A, B, np.ones(A.shape), 1);
Explanation: So now we can compute the optimal gains with a linear quadratic regulator. I chose identity matrices for the weightings for simplicity.
End of explanation
def right_hand_side(x, t, args):
Returns the derivatives of the states.
Parameters
----------
x : ndarray, shape(2 * (n + 1))
The current state vector.
t : float
The current time.
args : ndarray
The constants.
Returns
-------
dx : ndarray, shape(2 * (n + 1))
The derivative of the state.
r = np.dot(K, equilibrium_point - x) # The controller
arguments = np.hstack((x, r, args)) # States, input, and parameters
dx = np.array(solve(M_func(*arguments), # Solving for the derivatives
f_func(*arguments))).T[0]
return dx
Explanation: The gains can now be used to define the required input during simulation to stabilize the system. The input $r$ is simply the gain vector multiplied by the error in the state vector from the equilibrium point, $r(t)=K(x_{eq} - x(t))$.
End of explanation
x0 = np.hstack((0,
np.pi / 2 * np.ones(len(q) - 1),
1 * np.ones(len(u))))
t = np.linspace(0.0, 10.0, num=500)
x = odeint(right_hand_side, x0, t, args=(parameter_vals,))
Explanation: Now we can simulate and animate the system to see if the controller works.
End of explanation
lines = plt.plot(t, x[:, :x.shape[1] // 2])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[:x.shape[1] // 2])
lines = plt.plot(t, x[:, x.shape[1] // 2:])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[x.shape[1] // 2:])
animate_pendulum(t, x, arm_length, filename="closed-loop.mp4")
from IPython.display import HTML
html = \
<video width="640" height="480" controls>
<source src="closed-loop.mp4" type="video/mp4">
Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/SpgBHqW9om0
</video>
HTML(html)
Explanation: The plots show that we seem to have a stable system.
End of explanation
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, sympy, scipy, matplotlib, control
Explanation: The video clearly shows that the controller can balance all $n$ of the pendulum links. The weightings in the lqr design can be tweaked to give different performance if needed.
This example shows that the free and open source scientific Python tools for dynamics are easily comparable in ability and quality to a commercial package such as Mathematica.
The IPython notebook for this example can be downloaded from https://github.com/pydy/pydy/tree/master/examples/npendulum. You can try out different $n$ values. I've gotten the equations of motion to compute for an open loop simulation of 10 links. My computer ran out of memory when I tried to compute for $n=50$. The controller weightings and initial conditions will probably have to be adjusted for better performance for $n>5$, but it should work.
End of explanation |
13,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Programming with OR-Tools
In this notebook, we do some basic LP solving with Google's OR-Tools. Problems used will be examples in Hamdy Taha's Operations Research
Step1: Reddy Mikks model
Given the following variables
Step2: More simple problems
A company that operates 10 hours a day manufactures two products on three sequential processes. The following data characterizes the problem
Step3: Where there are 10 hours a day dedicated to production. Process times are given in minutes per unit while profit is given in USD.
The optimal mix of the two products would be characterized by the following model | Python Code:
from ortools.linear_solver import pywraplp
Explanation: Linear Programming with OR-Tools
In this notebook, we do some basic LP solving with Google's OR-Tools. Problems used will be examples in Hamdy Taha's Operations Research: An Introduction, 9th Edition, which I have in paperback.
End of explanation
reddymikks = pywraplp.Solver('Reddy_Mikks', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
x1 = reddymikks.NumVar(0, reddymikks.infinity(), 'x1')
x2 = reddymikks.NumVar(0, reddymikks.infinity(), 'x2')
reddymikks.Add(6*x1 + 4*x2 <= 24)
reddymikks.Add(x1 + 2*x2 <= 6)
reddymikks.Add(-x1 + x2 <= 1)
reddymikks.Add(x2 <= 2)
profit = reddymikks.Objective()
profit.SetCoefficient(x1, 5)
profit.SetCoefficient(x2, 4)
profit.SetMaximization()
status = reddymikks.Solve()
if status not in [reddymikks.OPTIMAL, reddymikks.FEASIBLE]:
raise Exception('No feasible solution found')
print("The company should produce",round(x1.solution_value(),2),"tons of exterior paint")
print("The company should produce",round(x2.solution_value(),2),"tons of interior paint")
print("The optimal profit is", profit.Value(), 'thousand USD')
Explanation: Reddy Mikks model
Given the following variables:
$\begin{aligned}
x_1 = \textrm{Tons of exterior paint produced daily} \newline
x_2 = \textrm{Tons of interior paint produced daily}
\end{aligned}$
and knowing that we want to maximize the profit, where \$5000 is the profit from exterior paint and \$4000 is the profit from a ton of interior paint, the Reddy Mikks model is:
$$\textrm{Maximize } z = 5x_1 + 4x_2$$
subject to
$$6x_1 + 4x_2 \le 24$$
$$x_1 + 2x_2 \le 6$$
$$-x_1 + x_2 \le 1$$
$$x_2 \le 2$$
$$x_1, x_2 \ge 0$$
End of explanation
import pandas as pd
problemdata = pd.DataFrame({'Process 1': [10, 5], 'Process 2':[6, 20], 'Process 3':[8, 10], 'Unit profit':[20, 30]})
problemdata.index = ['Product 1', 'Product 2']
problemdata
Explanation: More simple problems
A company that operates 10 hours a day manufactures two products on three sequential processes. The following data characterizes the problem:
End of explanation
simpleprod = pywraplp.Solver('Simple_Production', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
x1 = simpleprod.NumVar(0, simpleprod.infinity(), 'x1')
x2 = simpleprod.NumVar(0, simpleprod.infinity(), 'x2')
for i in problemdata.columns[:-1]:
simpleprod.Add(problemdata.loc[problemdata.index[0], i]*x1 + problemdata.loc[problemdata.index[1], i]*x2 <= 600)
profit = simpleprod.Objective()
profit.SetCoefficient(x1, 20)
profit.SetCoefficient(x2, 30)
profit.SetMaximization()
status = simpleprod.Solve()
if status not in [simpleprod.OPTIMAL, simpleprod.FEASIBLE]:
raise Exception('No feasible solution found')
print("The company should produce",round(x1.solution_value(),2),"units of product 1")
print("The company should produce",round(x2.solution_value(),2),"units of product 2")
print("The optimal profit is", round(profit.Value(),2), 'USD')
Explanation: Where there are 10 hours a day dedicated to production. Process times are given in minutes per unit while profit is given in USD.
The optimal mix of the two products would be characterized by the following model:
$\begin{aligned}
x_1 = \textrm{Units of product 1} \newline
x_2 = \textrm{Units of product 2}
\end{aligned}$
$$\textrm{Maximize } z = 20x_1 + 30x_2$$
subject to
$$\begin{array}{rcl}
10x_1 + 5x_2 \le 600 \newline
6x_1 + 20x_2 \le 600 \newline
8x_1 + 10x_2 \le 600 \newline
x_1, x_2 \ge 0
\end{array}$$
(we will assume that continuous solution values are acceptable for this problem)
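If whole units were required instead, a hedged sketch of the change would be to declare integer variables and switch to a MIP solver (the solver choice below is an assumption):
simpleprod_mip = pywraplp.Solver('Simple_Production_MIP', pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
x1 = simpleprod_mip.IntVar(0, simpleprod_mip.infinity(), 'x1')  # integer units of product 1
x2 = simpleprod_mip.IntVar(0, simpleprod_mip.infinity(), 'x2')  # integer units of product 2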
End of explanation |
13,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training and Serving with TFX and Vertex Pipelines
Learning objectives
Prepare example data.
Create a pipeline.
Run the pipeline on Vertex Pipelines.
Test with a prediction request.
Introduction
In this notebook, you will create and run a TFX pipeline which trains an ML model using Vertex AI Training service and publishes it to Vertex AI for serving. This notebook is based on the TFX pipeline we built in Simple TFX Pipeline for Vertex Pipelines Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook.
You can train models on Vertex AI using AutoML, or use custom training. In custom training, you can select many different machine types to power your training jobs, enable distributed training, and use hyperparameter tuning. You can also serve prediction requests by deploying the trained model to Vertex AI Models and creating an endpoint.
In this notebook, we will use Vertex AI Training with custom jobs to train
a model in a TFX pipeline.
We will also deploy the model to serve prediction requests using Vertex AI
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook
Install python packages
We will install required Python packages including TFX and KFP to author ML
pipelines and submit jobs to Vertex Pipelines.
Step1: Did you restart the runtime?
You can restart runtime with following cell.
Step2: Check the package versions.
Step3: Set up variables
We will set up some variables used to customize the pipelines below. Following
information is required
Step4: Set gcloud to use your project.
Step5: Prepare example data
We will use the same
Palmer Penguins dataset
as
Simple TFX Pipeline Tutorial.
There are four numeric features in this dataset which were already normalized
to have range [0,1]. We will build a classification model which predicts the
species of penguins.
We need to make our own copy of the dataset. Because TFX ExampleGen reads
inputs from a directory, we need to create a directory and copy dataset to it
on GCS.
Step6: Take a quick look at the CSV file.
Step10: Create a pipeline
Our pipeline will be very similar to the pipeline we created in
Simple TFX Pipeline for Vertex Pipelines Tutorial.
The pipeline will consist of three components: CsvExampleGen, Trainer and
Pusher. But we will use special Trainer and Pusher components. The Trainer component will move
training workloads to Vertex AI, and the Pusher component will publish the
trained ML model to Vertex AI instead of a filesystem.
TFX provides a special Trainer to submit training jobs to Vertex AI Training
service. All we have to do is use Trainer in the extension module
instead of the standard Trainer component along with some required GCP
parameters.
In this tutorial, we will run Vertex AI Training jobs only using CPUs first
and then with a GPU.
TFX also provides a special Pusher to upload the model to Vertex AI Models.
Pusher will create a Vertex AI Endpoint resource to serve online
predictions, too. See
Vertex AI documentation
to learn more about online predictions provided by Vertex AI.
Write model code.
The model itself is almost identical to the model in
Simple TFX Pipeline Tutorial.
We will add _get_distribution_strategy() function which creates a
TensorFlow distribution strategy
and it is used in run_fn to use MirroredStrategy if GPU is available.
Step11: Copy the module file to GCS which can be accessed from the pipeline components.
Otherwise, you might want to build a container image including the module file
and use the image to run the pipeline and AI Platform Training jobs.
Step13: Write a pipeline definition
We will define a function to create a TFX pipeline. It has the same three
Components as in
Simple TFX Pipeline Tutorial,
but we use a Trainer and Pusher component in the GCP extension module.
tfx.extensions.google_cloud_ai_platform.Trainer behaves like a regular
Trainer, but it just moves the computation for the model training to cloud.
It launches a custom job in Vertex AI Training service and the trainer
component in the orchestration system will just wait until the Vertex AI
Training job completes.
tfx.extensions.google_cloud_ai_platform.Pusher creates a Vertex AI Model and a Vertex AI Endpoint using the
trained model.
Step14: Run the pipeline on Vertex Pipelines.
We will use Vertex Pipelines to run the pipeline as we did in
Simple TFX Pipeline for Vertex Pipelines Tutorial.
Step15: The generated definition file can be submitted using Google Cloud aiplatform
client in google-cloud-aiplatform package.
Step16: Now you can visit the link in the output above or visit 'Vertex AI > Pipelines'
in Google Cloud Console to see the
progress.
It will take around 30 minutes to complete the pipeline.
Test with a prediction request
Once the pipeline completes, you will find a deployed model at one of the
endpoints in 'Vertex AI > Endpoints'. We need to know the id of the endpoint to
send a prediction request to the new endpoint. This is different from the
endpoint name we entered above. You can find the id at the Endpoints page in
Google Cloud Console, it looks like a very long number.
Set ENDPOINT_ID below before running it.
Step17: We use the same aiplatform client to send a request to the endpoint. We will
send a prediction request for Penguin species classification. The input is the four features that we used, and the model will return three values, because our
model outputs one value for each species.
For example, the following specific example has the largest value at index '2'
and will print '2'. | Python Code:
# Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade "tfx[kfp]<2"
Explanation: Training and Serving with TFX and Vertex Pipelines
Learning objectives
Prepare example data.
Create a pipeline.
Run the pipeline on Vertex Pipelines.
Test with a prediction request.
Introduction
In this notebook, you will create and run a TFX pipeline which trains an ML model using Vertex AI Training service and publishes it to Vertex AI for serving. This notebook is based on the TFX pipeline we built in Simple TFX Pipeline for Vertex Pipelines Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook.
You can train models on Vertex AI using AutoML, or use custom training. In custom training, you can select many different machine types to power your training jobs, enable distributed training, and use hyperparameter tuning. You can also serve prediction requests by deploying the trained model to Vertex AI Models and creating an endpoint.
In this notebook, we will use Vertex AI Training with custom jobs to train
a model in a TFX pipeline.
We will also deploy the model to serve prediction requests using Vertex AI.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook
Install python packages
We will install required Python packages including TFX and KFP to author ML
pipelines and submit jobs to Vertex Pipelines.
End of explanation
# docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Did you restart the runtime?
You can restart runtime with following cell.
End of explanation
# Import necessary liabraries and print their versions
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__))
Explanation: Check the package versions.
End of explanation
# Set the required variables
GOOGLE_CLOUD_PROJECT = 'qwiklabs-gcp-02-b8bef0a57866' # Replace this with your Project-ID
GOOGLE_CLOUD_REGION = 'us-central1' # Replace this with your bucket region
GCS_BUCKET_NAME = 'qwiklabs-gcp-02-b8bef0a57866' # Replace this with your Cloud Storage bucket
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.')
Explanation: Set up variables
We will set up some variables used to customize the pipelines below. Following
information is required:
GCP Project id. See
Identifying your project id.
GCP Region to run pipelines. For more information about the regions that
Vertex Pipelines is available in, see the
Vertex AI locations guide.
Google Cloud Storage Bucket to store pipeline outputs.
Enter required values in the cell below before running it.
End of explanation
!gcloud config set project {GOOGLE_CLOUD_PROJECT}
PIPELINE_NAME = 'penguin-vertex-training'
# Path to various pipeline artifact.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# Name of Vertex AI Endpoint.
ENDPOINT_NAME = 'prediction-' + PIPELINE_NAME
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
Explanation: Set gcloud to use your project.
End of explanation
# Create a directory and copy the dataset
!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/
Explanation: Prepare example data
We will use the same
Palmer Penguins dataset
as
Simple TFX Pipeline Tutorial.
There are four numeric features in this dataset which were already normalized
to have range [0,1]. We will build a classification model which predicts the
species of penguins.
We need to make our own copy of the dataset. Because TFX ExampleGen reads
inputs from a directory, we need to create a directory and copy dataset to it
on GCS.
End of explanation
# Review the contents of the CSV file
# TODO 1: Your code goes here
Explanation: Take a quick look at the CSV file.
End of explanation
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified run_fn() to add distribution_strategy.
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
}, _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# NEW: Read `use_gpu` from the custom_config of the Trainer.
# if it uses GPU, enable MirroredStrategy.
def _get_distribution_strategy(fn_args: tfx.components.FnArgs):
if fn_args.custom_config.get('use_gpu', False):
logging.info('Using MirroredStrategy with one GPU.')
return tf.distribute.MirroredStrategy(devices=['device:GPU:0'])
return None
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
# NEW: If we have a distribution strategy, build a model in a strategy scope.
strategy = _get_distribution_strategy(fn_args)
if strategy is None:
model = _make_keras_model()
else:
with strategy.scope():
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
Explanation: Create a pipeline
Our pipeline will be very similar to the pipeline we created in
Simple TFX Pipeline for Vertex Pipelines Tutorial.
The pipeline will consist of three components: CsvExampleGen, Trainer and
Pusher. But we will use special Trainer and Pusher components. The Trainer component will move
training workloads to Vertex AI, and the Pusher component will publish the
trained ML model to Vertex AI instead of a filesystem.
TFX provides a special Trainer to submit training jobs to Vertex AI Training
service. All we have to do is use Trainer in the extension module
instead of the standard Trainer component along with some required GCP
parameters.
In this tutorial, we will run Vertex AI Training jobs only using CPUs first
and then with a GPU.
TFX also provides a special Pusher to upload the model to Vertex AI Models.
Pusher will create a Vertex AI Endpoint resource to serve online
predictions, too. See
Vertex AI documentation
to learn more about online predictions provided by Vertex AI.
Write model code.
The model itself is almost identical to the model in
Simple TFX Pipeline Tutorial.
We will add _get_distribution_strategy() function which creates a
TensorFlow distribution strategy
and it is used in run_fn to use MirroredStrategy if GPU is available.
End of explanation
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
Explanation: Copy the module file to GCS which can be accessed from the pipeline components.
Otherwise, you might want to build a container image including the module file
and use the image to run the pipeline and AI Platform Training jobs.
End of explanation
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
module_file: str, endpoint_name: str, project_id: str,
region: str, use_gpu: bool) -> tfx.dsl.Pipeline:
Implements the penguin pipeline with TFX.
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = # TODO 2: Your code goes here
# NEW: Configuration for Vertex AI Training.
# This dictionary will be passed as `CustomJobSpec`.
vertex_job_spec = {
'project': project_id,
'worker_pool_specs': [{
'machine_spec': {
'machine_type': 'n1-standard-4',
},
'replica_count': 1,
'container_spec': {
'image_uri': 'gcr.io/tfx-oss-public/tfx:{}'.format(tfx.__version__),
},
}],
}
if use_gpu:
# See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#acceleratortype
# for available machine types.
vertex_job_spec['worker_pool_specs'][0]['machine_spec'].update({
'accelerator_type': 'NVIDIA_TESLA_K80',
'accelerator_count': 1
})
# Trains a model using Vertex AI Training.
# NEW: We need to specify a Trainer for GCP with related configs.
trainer = tfx.extensions.google_cloud_ai_platform.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5),
custom_config={
tfx.extensions.google_cloud_ai_platform.ENABLE_VERTEX_KEY:
True,
tfx.extensions.google_cloud_ai_platform.VERTEX_REGION_KEY:
region,
tfx.extensions.google_cloud_ai_platform.TRAINING_ARGS_KEY:
vertex_job_spec,
'use_gpu':
use_gpu,
})
# NEW: Configuration for pusher.
vertex_serving_spec = {
'project_id': project_id,
'endpoint_name': endpoint_name,
# Remaining argument is passed to aiplatform.Model.deploy()
# See https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api#deploy_the_model
# for the detail.
#
# Machine type is the compute resource to serve prediction requests.
# See https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types
        # for available machine types and accelerators.
'machine_type': 'n1-standard-4',
}
# Vertex AI provides pre-built containers with various configurations for
# serving.
# See https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
# for available container images.
serving_image = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-6:latest'
if use_gpu:
vertex_serving_spec.update({
'accelerator_type': 'NVIDIA_TESLA_K80',
'accelerator_count': 1
})
serving_image = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-gpu.2-6:latest'
# NEW: Pushes the model to Vertex AI.
pusher = tfx.extensions.google_cloud_ai_platform.Pusher(
model=trainer.outputs['model'],
custom_config={
tfx.extensions.google_cloud_ai_platform.ENABLE_VERTEX_KEY:
True,
tfx.extensions.google_cloud_ai_platform.VERTEX_REGION_KEY:
region,
tfx.extensions.google_cloud_ai_platform.VERTEX_CONTAINER_IMAGE_URI_KEY:
serving_image,
tfx.extensions.google_cloud_ai_platform.SERVING_ARGS_KEY:
vertex_serving_spec,
})
components = [
example_gen,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components)
Explanation: Write a pipeline definition
We will define a function to create a TFX pipeline. It has the same three
Components as in
Simple TFX Pipeline Tutorial,
but we use a Trainer and Pusher component in the GCP extension module.
tfx.extensions.google_cloud_ai_platform.Trainer behaves like a regular
Trainer, but it just moves the computation for the model training to cloud.
It launches a custom job in Vertex AI Training service and the trainer
component in the orchestration system will just wait until the Vertex AI
Training job completes.
tfx.extensions.google_cloud_ai_platform.Pusher creates a Vertex AI Model and a Vertex AI Endpoint using the
trained model.
End of explanation
import os
PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
output_filename=PIPELINE_DEFINITION_FILE)
_ = runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
endpoint_name=ENDPOINT_NAME,
project_id=GOOGLE_CLOUD_PROJECT,
region=GOOGLE_CLOUD_REGION,
# We will use CPUs only for now.
use_gpu=False))
Explanation: Run the pipeline on Vertex Pipelines.
We will use Vertex Pipelines to run the pipeline as we did in
Simple TFX Pipeline for Vertex Pipelines Tutorial.
End of explanation
# docs_infra: no_execute
from google.cloud import aiplatform
from google.cloud.aiplatform import pipeline_jobs
import logging
logging.getLogger().setLevel(logging.INFO)
aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)
# Create a job to submit the pipeline
job = # TODO 3: Your code goes here
job.submit()
Explanation: The generated definition file can be submitted using Google Cloud aiplatform
client in google-cloud-aiplatform package.
End of explanation
ENDPOINT_ID='8646374722876997632' # Replace this with your ENDPOINT_ID
if not ENDPOINT_ID:
from absl import logging
logging.error('Please set the endpoint id.')
Explanation: Now you can visit the link in the output above or visit 'Vertex AI > Pipelines'
in Google Cloud Console to see the
progress.
It will take around 30 minutes to complete the pipeline.
Test with a prediction request
Once the pipeline completes, you will find a deployed model at one of the
endpoints in 'Vertex AI > Endpoints'. We need to know the id of the endpoint to
send a prediction request to the new endpoint. This is different from the
endpoint name we entered above. You can find the id at the Endpoints page in
Google Cloud Console, it looks like a very long number.
Set ENDPOINT_ID below before running it.
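One possible way to look the id up from inside the notebook (a sketch, assuming the gcloud CLI is authenticated against this project; verify the flags for your gcloud version):
# Hypothetical lookup of the endpoint id by display name
!gcloud ai endpoints list --region={GOOGLE_CLOUD_REGION} --filter=display_name={ENDPOINT_NAME}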
End of explanation
# docs_infra: no_execute
import numpy as np
# The AI Platform services require regional API endpoints.
client_options = {
'api_endpoint': GOOGLE_CLOUD_REGION + '-aiplatform.googleapis.com'
}
# Initialize client that will be used to create and send requests.
client = # TODO 4: Your code goes here
# Set data values for the prediction request.
# Our model expects 4 feature inputs and produces 3 output values for each
# species. Note that the output is logit value rather than probabilities.
# See the model code to understand input / output structure.
instances = [{
'culmen_length_mm':[0.71],
'culmen_depth_mm':[0.38],
'flipper_length_mm':[0.98],
'body_mass_g': [0.78],
}]
endpoint = client.endpoint_path(
project=GOOGLE_CLOUD_PROJECT,
location=GOOGLE_CLOUD_REGION,
endpoint=ENDPOINT_ID,
)
# Send a prediction request and get response.
response = client.predict(endpoint=endpoint, instances=instances)
# Uses argmax to find the index of the maximum value.
print('species:', np.argmax(response.predictions[0]))
Explanation: We use the same aiplatform client to send a request to the endpoint. We will
send a prediction request for Penguin species classification. The input is the four features that we used, and the model will return three values, because our
model outputs one value for each species.
For example, the following specific example has the largest value at index '2'
and will print '2'.
End of explanation |
13,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p style="text-align
Step2: 1. Implement the K-means algorithm
In this step you will implement the functions that make up the K-means algorithm one by one. It is important to understand and read the documentation of each function, especially the dimensions of the data expected at the output.
1.1 Initialize the centroids
The first step of the algorithm consists of initializing the centroids randomly. This step is one of the most important in the algorithm, and a good initialization can greatly reduce the convergence time.
To initialize the centroids you can take into account prior knowledge about the data, even without knowing the number of groups or their distribution.
Hint
Step3: Test the function you created and visualize the computed centroids.
Step5: 1.2 Assign the clusters
In the second step of the algorithm, the group of each data point is determined according to the computed centroids.
1.2.1 Distance function
Write the Euclidean distance function between two points (a, b).
It is defined by the equation
Step6: Test the function you created.
Step8: 1.2.2 Find the nearest centroid
Using the distance function written earlier, complete the function below to find the centroid closest to an arbitrary point.
Hint
Step9: Test the function you created
Step11: 1.2.3 Find the nearest centroid for every point in the dataset
Using the previous function, which returns the index of the nearest centroid, find the nearest centroid for every point in the dataset.
Step12: Test the function you created by visualizing the resulting clusters.
Step14: 1.3 Evaluation metric
After forming the clusters, how do we know whether the result is good? For that, we need to define an evaluation metric.
The K-means algorithm aims to choose centroids that minimize the sum of squared distances between the points of a cluster and its centroid. This metric is known as inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it suffers from some drawbacks
Step15: Test the function you wrote by running the code below.
Step17: 1.4 Update the clusters
In this step, the centroids are recomputed. The new value of each centroid is the mean of all the data points assigned to the cluster.
Step18: Visualize the resulting clusters
Step19: Run the update function and visualize the resulting clusters again
Step20: 2. K-means
2.1 Full algorithm
Using the functions written earlier, complete the K-means algorithm class!
Step21: Check the result of the algorithm below!
Step22: 2.2 Compare with the Scikit-Learn implementation
Use the scikit-learn implementation of K-means on the same dataset. Show the inertia value and the clusters produced by the model. You can reuse the structure of the previous code cell.
Hint
Step23: 3. Elbow method
Implement the elbow method and show the best K for the dataset.
Step24: 4. Real dataset
Exercises
1 - Apply the K-means algorithm you developed to the iris dataset [1]. Show the results obtained using at least two cluster evaluation metrics [2].
[1] http | Python Code:
# import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt
# load the data with pandas
dataset = pd.read_csv('dataset.csv', header=None)
dataset = np.array(dataset)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
Explanation: <p style="text-align: center;">Clustering and the K-means algorithm</p>
Organizing data into groups is one of the fundamental ways of understanding and learning. For example, organisms in a biological system are classified into domain, kingdom, phylum, class, and so on. Cluster analysis is the formal study of methods and algorithms for grouping objects according to similar measurements or characteristics. Cluster analysis, in essence, does not use category labels that tag objects with prior identifiers, i.e., class labels. The absence of category information distinguishes data clustering (unsupervised learning) from classification or discriminant analysis (supervised learning). The goal of clustering is to find structure in data, and it is therefore exploratory in nature.
Clustering has a long and rich history in a variety of scientific fields. One of the most popular and simplest clustering algorithms, K-means, was first published in 1955. Even though K-means was proposed more than 50 years ago and thousands of clustering algorithms have been published since then, K-means is still widely used.
Source: Anil K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognition Letters, Volume 31, Issue 8, 2010
Objectives
Implement the functions of the K-means algorithm step by step
Compare the implementation with the Scikit-Learn algorithm
Understand and code the Elbow Method
Use K-means on a real dataset
Loading the test data
Load the provided data and visually identify how many groups the data appear to be split into.
End of explanation
def calculate_initial_centers(dataset, k):
Inicializa os centróides iniciais de maneira arbitrária
Argumentos:
dataset -- Conjunto de dados - [m,n]
k -- Número de centróides desejados
Retornos:
centroids -- Lista com os centróides calculados - [k,n]
#### CODE HERE ####
### END OF CODE ###
return centroids
Explanation: 1. Implement the K-means algorithm
In this step you will implement the functions that make up the K-means algorithm one by one. It is important to understand and read the documentation of each function, especially the dimensions of the data expected at the output.
1.1 Initialize the centroids
The first step of the algorithm consists of initializing the centroids randomly. This step is one of the most important in the algorithm, and a good initialization can greatly reduce the convergence time.
To initialize the centroids you can take into account prior knowledge about the data, even without knowing the number of groups or their distribution.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
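One possible initialization strategy (a sketch only, deliberately given a different name so it does not replace the exercise function) is to sample uniformly inside the bounding box of the data:
# Sketch: uniform sampling between the column-wise min and max of the dataset
def calculate_initial_centers_sketch(dataset, k):
    mins = dataset.min(axis=0)
    maxs = dataset.max(axis=0)
    return np.random.uniform(mins, maxs, size=(k, dataset.shape[1]))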
End of explanation
k = 3
centroids = calculate_initial_centers(dataset, k)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)
plt.show()
Explanation: Test the function you created and visualize the computed centroids.
End of explanation
def euclidean_distance(a, b):
Calcula a distância euclidiana entre os pontos a e b
Argumentos:
a -- Um ponto no espaço - [1,n]
b -- Um ponto no espaço - [1,n]
Retornos:
distance -- Distância euclidiana entre os pontos
#### CODE HERE ####
### END OF CODE ###
return distance
Explanation: 1.2 Assign the clusters
In the second step of the algorithm, the group of each data point is determined according to the computed centroids.
1.2.1 Distance function
Write the Euclidean distance function between two points (a, b).
It is defined by the equation:
$$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$
$$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$
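With NumPy the formula collapses to a one-liner (a sketch of one possible implementation):
# Vectorized Euclidean distance between two points
distance = np.sqrt(np.sum((np.asarray(a) - np.asarray(b))**2))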
End of explanation
a = np.array([1, 5, 9])
b = np.array([3, 7, 8])
if (euclidean_distance(a,b) == 3):
print("Distância calculada corretamente!")
else:
print("Função de distância incorreta")
Explanation: Test the function you created.
End of explanation
def nearest_centroid(a, centroids):
Calcula o índice do centroid mais próximo ao ponto a
Argumentos:
a -- Um ponto no espaço - [1,n]
centroids -- Lista com os centróides - [k,n]
Retornos:
nearest_index -- Índice do centróide mais próximo
#### CODE HERE ####
### END OF CODE ###
return nearest_index
Explanation: 1.2.2 Find the nearest centroid
Using the distance function written earlier, complete the function below to find the centroid closest to an arbitrary point.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
End of explanation
# Seleciona um ponto aleatório no dataset
index = np.random.randint(dataset.shape[0])
a = dataset[index,:]
# Usa a função para descobrir o centroid mais próximo
idx_nearest_centroid = nearest_centroid(a, centroids)
# Plota os dados ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], s=10)
# Plota o ponto aleatório escolhido em uma cor diferente
plt.scatter(a[0], a[1], c='magenta', s=30)
# Plota os centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
# Plota o centroid mais próximo com uma cor diferente
plt.scatter(centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],
marker='^', c='springgreen', s=100)
# Cria uma linha do ponto escolhido para o centroid selecionado
plt.plot([a[0], centroids[idx_nearest_centroid,0]],
[a[1], centroids[idx_nearest_centroid,1]],c='orange')
plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],))
plt.show()
Explanation: Test the function you created
End of explanation
def all_nearest_centroids(dataset, centroids):
Calcula o índice do centroid mais próximo para cada
ponto do dataset
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
Retornos:
nearest_indexes -- Índices do centróides mais próximos - [m,1]
#### CODE HERE ####
### END OF CODE ###
return nearest_indexes
Explanation: 1.2.3 Find the nearest centroid for every point in the dataset
Using the previous function, which returns the index of the nearest centroid, find the nearest centroid for every point in the dataset.
End of explanation
nearest_indexes = all_nearest_centroids(dataset, centroids)
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
Explanation: Test the function you created by visualizing the resulting clusters.
End of explanation
def inertia(dataset, centroids, nearest_indexes):
Soma das distâncias quadradas das amostras para o
centro do cluster mais próximo.
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
nearest_indexes -- Índices do centróides mais próximos - [m,1]
Retornos:
inertia -- Soma total do quadrado da distância entre
os dados de um cluster e seu centróide
#### CODE HERE ####
### END OF CODE ###
return inertia
Explanation: 1.3 Evaluation metric
After forming the clusters, how do we know whether the result is good? For that, we need to define an evaluation metric.
The K-means algorithm aims to choose centroids that minimize the sum of squared distances between the points of a cluster and its centroid. This metric is known as inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it suffers from some drawbacks:
Inertia assumes that clusters are convex and isotropic, which is not always the case. As a result, it may represent elongated clusters or manifolds with irregular shapes poorly.
Inertia is not a normalized metric: we only know that lower values are better and zero is optimal. In very high-dimensional spaces, Euclidean distances tend to become inflated (an instance of the so-called "curse of dimensionality"). Running a dimensionality reduction algorithm such as PCA beforehand can alleviate this problem and speed up the computations.
Source: https://scikit-learn.org/stable/modules/clustering.html
To be able to evaluate our clusters, code the inertia metric below; you can use the Euclidean distance function built earlier for this.
$$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$
End of explanation
tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])
tmp_centroide = np.array([[2,3,4]])
tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)
if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:
print("Inertia calculada corretamente!")
else:
print("Função de inertia incorreta!")
# Use a função para verificar a inertia dos seus clusters
inertia(dataset, centroids, nearest_indexes)
Explanation: Test the function you wrote by running the code below.
End of explanation
def update_centroids(dataset, centroids, nearest_indexes):
Atualiza os centroids
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
nearest_indexes -- Índices do centróides mais próximos - [m,1]
Retornos:
centroids -- Lista com centróides atualizados - [k,n]
#### CODE HERE ####
### END OF CODE ###
return centroids
Explanation: 1.4 Update the clusters
In this step, the centroids are recomputed. The new value of each centroid is the mean of all the data points assigned to the cluster.
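One way to express that update (a sketch; it assumes every cluster has at least one assigned point, otherwise the mean of an empty slice is undefined):
# Sketch: each new centroid is the mean of the points assigned to it
new_centroids = np.array([dataset[nearest_indexes == i].mean(axis=0)
                          for i in range(len(centroids))])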
End of explanation
nearest_indexes = all_nearest_centroids(dataset, centroids)
# Plota os os cluster ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
# Plota os centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for data in dataframe:
plt.plot([centroid[0], data[0]], [centroid[1], data[1]],
c='lightgray', alpha=0.3)
plt.show()
Explanation: Visualize the resulting clusters
End of explanation
centroids = update_centroids(dataset, centroids, nearest_indexes)
Explanation: Run the update function and visualize the resulting clusters again
End of explanation
class KMeans():
def __init__(self, n_clusters=8, max_iter=300):
self.n_clusters = n_clusters
self.max_iter = max_iter
def fit(self,X):
# Inicializa os centróides
self.cluster_centers_ = [None]
# Computa o cluster de cada amostra
self.labels_ = [None]
# Calcula a inércia inicial
old_inertia = [None]
for index in [None]:
#### CODE HERE ####
### END OF CODE ###
return self
def predict(self, X):
return [None]
Explanation: 2. K-means
2.1 Full algorithm
Using the functions written earlier, complete the K-means algorithm class!
End of explanation
kmeans = KMeans(n_clusters=3)
kmeans.fit(dataset)
print("Inércia = ", kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:,0],
kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)
plt.show()
Explanation: Check the result of the algorithm below!
End of explanation
#### CODE HERE ####
Explanation: 2.2 Compare with the Scikit-Learn implementation
Use the scikit-learn implementation of K-means on the same dataset. Show the inertia value and the clusters produced by the model. You can reuse the structure of the previous code cell.
Hint: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans
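For reference, a minimal sketch of that comparison (the n_clusters=3 choice simply mirrors the earlier cells):
from sklearn.cluster import KMeans as SklearnKMeans
sk_model = SklearnKMeans(n_clusters=3).fit(dataset)
print("Inertia =", sk_model.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=sk_model.labels_)
plt.scatter(sk_model.cluster_centers_[:,0], sk_model.cluster_centers_[:,1], marker='^', c='red', s=100)
plt.show()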
End of explanation
#### CODE HERE ####
Explanation: 3. Elbow method
Implement the elbow method and show the best K for the dataset.
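The idea is to plot the inertia for a range of K values and look for the "elbow" where the curve flattens. A sketch (assuming the KMeans class above has been completed):
ks = range(1, 11)
inertias = [KMeans(n_clusters=k).fit(dataset).inertia_ for k in ks]
plt.plot(ks, inertias, 'o-')
plt.xlabel('K')
plt.ylabel('Inertia')
plt.show()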
End of explanation
#### CODE HERE ####
Explanation: 4. Real dataset
Exercises
1 - Apply the K-means algorithm you developed to the iris dataset [1]. Show the results obtained using at least two cluster evaluation metrics [2].
[1] http://archive.ics.uci.edu/ml/datasets/iris
[2] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation
Hint: you can use the completeness and homogeneity metrics.
2 - Try to improve the result obtained in the previous question using a data mining technique. Explain the difference you observe.
Hint: you can try normalizing the data [3].
- [3] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html
3 - What number of clusters (K) did you choose in the previous question? Implement the Elbow Method without using a library and find the most suitable value of K. Then use that value in the K-means algorithm.
4 - Using the results of the previous question, recompute the metrics and comment on the results. Was there an improvement? Explain.
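As a starting point (a sketch only, not a full answer to the exercises), the iris data and the two suggested metrics are available directly from scikit-learn:
from sklearn.datasets import load_iris
from sklearn.metrics import homogeneity_score, completeness_score
iris = load_iris()
# iris.data is a [150, 4] array; iris.target holds the true species labels,
# used only to evaluate the clustering, never to fit it. Once your K-means
# produces `predicted_labels`, the scores are e.g.:
# homogeneity_score(iris.target, predicted_labels)
# completeness_score(iris.target, predicted_labels)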
End of explanation |
13,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 1
Imports
Step3: Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic
Step5: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
Step7: Write a function sort_word_counts that returns a list of sorted word counts
Step8: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt
Step9: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research... | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
Explanation: Algorithms Exercise 1
Imports
End of explanation
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
Split a string into a list of words, removing punctuation and stop words.
    all_words = []
    for line in s.splitlines():
        all_words.extend(line.split(" "))
    # Strip punctuation and lowercase each word
    all_words = [''.join(ch for ch in w if ch not in punctuation).lower() for w in all_words]
    # Accept stop words as a list or a space-delimited string, then remove them
    if isinstance(stop_words, str):
        stop_words = stop_words.split(' ')
    if stop_words:
        all_words = [w for w in all_words if w not in stop_words]
    # Drop any empty strings left over
    return [w for w in all_words if w]
tokenize("There is no cow level \nWow, sally that was great.")
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland =
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
Explanation: Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:
Split the string into lines using splitlines.
Split each line into a list of words and merge the lists for each line.
Use Python's builtin filter function to remove all punctuation.
If stop_words is a list, remove all occurrences of the words in the list.
If stop_words is a space delimited string of words, split them and remove them.
Remove any remaining empty words.
Make all words lowercase.
End of explanation
def count_words(data):
Return a word count dictionary from the list of words in data.
# YOUR CODE HERE
raise NotImplementedError()
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
Explanation: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
End of explanation
def sort_word_counts(wc):
Return a list of 2-tuples of (word, count), sorted by count descending.
# YOUR CODE HERE
raise NotImplementedError()
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
Explanation: Write a function sort_word_counts that returns a list of sorted word counts:
Each element of the list should be a (word, count) tuple.
The list should be sorted by the word counts, with the highest counts coming first.
To perform this sort, look at using the sorted function with a custom key and reverse
argument.
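For illustration only (generic data, unrelated to the exercise), sorting pairs by their second element in descending order looks like:
pairs = [('a', 2), ('c', 7), ('b', 5)]
sorted(pairs, key=lambda item: item[1], reverse=True)  # [('c', 7), ('b', 5), ('a', 2)]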
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert swc[0]==('i',43)
assert len(swc)==848
Explanation: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:
Read the file into a string.
Tokenize with stop words of 'the of and a to in is it that as'.
Perform a word count, the sort and save the result in a variable named swc.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the dotplot
Explanation: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
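If it helps, the general shape of a dotplot in Matplotlib is a scatter of values against category positions; a generic sketch with made-up data (not the exercise answer):
values = [30, 22, 15, 9]
names = ['w1', 'w2', 'w3', 'w4']
plt.plot(values, range(len(values)), 'o')
plt.yticks(range(len(names)), names)
plt.show()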
End of explanation |
13,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
# Defensive programming (1)
How much time do you spend writing software? How much time do you spend
debugging that software? It turns out that it is very easy to spend lots
of time fixing bugs and less time than you would like writing new software
to do new science. This is a problem that is fairly well understood by
the software engineering community, but many scientists don't take advantage
of this knowledge. This afternoon we will take a brief look at some of the
tools and techniques to make your debugging less painful.
We'll also think a bit about how you may know if your programmes are correct.
This is a much harder but important problem. Even minor errors in research
code can lead to the retraction of papers, as happened to Geoffrey Chang
in 2006 (see http://dx.doi.org/10.1126/science.314.5807.1856).
Step1: "Wrong" input
Step2: What is python telling us?
That something went wrong, where it went wrong, what went
wrong, and what the programme was doing at the time. This
is an exception.
Exception class (e.g ZeroDivisionError)
Some further information (e.g. float division by zero)
File (or cell) name and line number of each function in the call stack (e.g. in mean_cell_volume at line ---> 19 inside cell ipython-input-...)
We can create these ourselves when we run code
Step3: What if we get the wrong answer?
This is a more difficult problem to spot - the average volume cannot be 0.0!
Step4: The reason is that there is a bug in cell_volume.
Step5: The volume should always be positive. We can check for this. This kind of check is known as an assertion.
Step6: We can write these more easily with the assert statement. It is
good practice to put these in your code when you write it (and you
know what it does, and what assumptions you have made). These act
as a form of documentation as well as a form of protection.
Step7: We can think about three types of assert statement | Python Code:
def cell_volume(X, Y, Z):
# Return the volume of a unit cell
# described by lattice vectors X, Y and Z
# The volume is given by the determinant of
# the matrix formed by sticking the three
# vectors together. i.e.
#
# | X[0] Y[0] Z[0] |
# V = | X[1] Y[1] Z[1] |
# | X[2] Y[2] Z[2] |
#
# V = X[0].Y[1].Z[2] + Y[0].Z[1].X[2]
# + X[2].Y[0].Z[1] - Z[0].Y[1].X[2]
# - Y[0].X[1].Z[2] - X[0].Z[1].Y[2]
volume = (X[0]*Y[1]*Z[2] + Y[0]*Z[1]*X[2] + X[2]*Y[0]*Z[1]
- Z[0]*Y[1]*X[2] - Y[0]*X[1]*Z[2] - X[0]*Z[1]*Y[2])
return volume
cell_volume([4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 6.0])
def mean_cell_volume(cell_list):
    # Return the average volume of a list
# of unit cells. Each element of cell_list
# should be a list of three lattice vectors,
# each with three components. The volume of
# each cell is calculated and summed before
    # being divided by the number of cells to give
# the mean volume.
num_cells = 0
sum_volume = 0.0
for cell in cell_list:
X = cell[0]
Y = cell[1]
Z = cell[2]
sum_volume = sum_volume + cell_volume(X, Y, Z)
num_cells = num_cells + 1
mean_volume = sum_volume/num_cells
return mean_volume
mean_cell_volume([[[4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 6.0]],
[[10.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 6.0]],
[[6.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 4.0]]])
Explanation: # Defensive programming (1)
How much time do you spend writing software? How much time do you spend
debugging that software? It turns out that it is very easy to spend lots
of time fixing bugs and less time than you would like writing new software
to do new science. This is a problem that is fairly well understood by
the software engineering community, but many scientists don't take advantage
of this knowledge. This afternoon we will take a brief look at some of the
tools and techniques to make your debugging less painful.
We'll also think a bit about how you may know if your programmes are correct.
This is a much harder but important problem. Even minor errors in research
code can lead to the retraction of papers, as happened to Geoffrey Chang
in 2006 (see http://dx.doi.org/10.1126/science.314.5807.1856). Chang did
nothing malicious and committed no fraud, but because of a minor software
error had to retract five papers just before Christmas.
NB: This notebook is designed for teaching about exceptions and error testing. It includes deliberate errors. There are probably accidental errors too.
Mean cell volume
First, we will look at how one programme can produce
the wrong answer, and how we can avoid this happening
when we use it.
End of explanation
mean_cell_volume([[[4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 6.0]],
[[4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0]],
[[4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 6.0]]])
mean_cell_volume([])
Explanation: "Wrong" input
End of explanation
raise Exception("something went wrong")
Explanation: What is python telling us?
That something went wrong, where it went wrong, what went
wrong, and what the programme was doing at the time. This
is an exception.
Exception class (e.g ZeroDivisionError)
Some further information (e.g. float division by zero)
File (or cell) name and line number of each function in the call stack (e.g. in mean_cell_volume at line ---> 19 inside cell ipython-input-...)
We can create these ourselves when we run code:
End of explanation
mean_cell_volume([[[4.0, 0.0, 0.0], [0.0, -10.0, 0.0], [0.0, 0.0, 6.0]],
[[4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 6.0]],
[[-4.0, 0.0, 0.0], [0.0, -10.0, 0.0], [0.0, 0.0, 6.0]],
[[-4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 6.0]]])
Explanation: What if we get the wrong answer?
This is a more difficult problem to spot - the average volume cannot be 0.0!
End of explanation
cell_volume([4.0, 0.0, 0.0], [0.0, -10.0, 0.0], [0.0, 0.0, 6.0])
Explanation: The reason is that there is a bug in cell_volume.
End of explanation
volume = cell_volume([4.0, 0.0, 0.0], [0.0, -10.0, 0.0], [0.0, 0.0, 6.0])
if (volume < 0.0):
raise AssertionError("The volume must be positive")
print volume
Explanation: The volume should always be positive. We can check for this. This kind of check is known as an assertion.
End of explanation
volume = cell_volume([4.0, 0.0, 0.0], [0.0, -10.0, 0.0], [0.0, 0.0, 6.0])
assert volume >= 0.0, "The volume must be positive"
print volume
Explanation: We can write these more easily with the assert statement. It is
good practice to put these in your code when you write it (and you
know what it does, and what assumptions you have made). These act
as a form of documentation as well as a form of protection.
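One caveat worth keeping in mind (standard Python behaviour, not specific to this lesson): assertions are stripped when the interpreter runs with optimizations (python -O), so they document and guard assumptions during development but should not replace explicit error handling for validation that must always run:
# Skipped when running with `python -O`:
assert volume >= 0.0, "The volume must be positive"
# Always runs, even with optimizations enabled:
if volume < 0.0:
    raise ValueError("The volume must be positive")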
End of explanation
def cell_volume(X, Y, Z):
# Return the volume of a unit cell
# described by lattice vectors X, Y and Z
# The volume is given by the determinant of
# the matrix formed by sticking the three
# vectors together. i.e.
#
# | X[0] Y[0] Z[0] |
# V = | X[1] Y[1] Z[1] |
# | X[2] Y[2] Z[2] |
#
# V = X[0].Y[1].Z[2] + Y[0].Z[1].X[2]
# + X[2].Y[0].Z[1] - Z[0].Y[1].X[2]
# - Y[0].X[1].Z[2] - X[0].Z[1].Y[2]
assert len(X) == 3, "X must be a three-vector"
assert len(Y) == 3, "Y must be a three-vector"
assert len(Z) == 3, "Z must be a three-vector"
volume = (X[0]*Y[1]*Z[2] + Y[0]*Z[1]*X[2] + X[2]*Y[0]*Z[1]
- Z[0]*Y[1]*X[2] - Y[0]*X[1]*Z[2] - X[0]*Z[1]*Y[2])
assert volume >= 0.0, "The calculated volume must be positive"
return volume
def mean_cell_volume(cell_list):
    # Return the average volume of a list
# of unit cells. Each element of cell_list
# should be a list of three lattice vectors,
# each with three components. The volume of
# each cell is calculated and summed before
    # being divided by the number of cells to give
# the mean volume.
num_cells = 0
sum_volume = 0.0
for cell in cell_list:
X = cell[0]
Y = cell[1]
Z = cell[2]
sum_volume = sum_volume + cell_volume(X, Y, Z)
num_cells = num_cells + 1
assert num_cells >= 1, "One or more cells must be provided"
mean_volume = sum_volume/num_cells
return mean_volume
mean_cell_volume([[[4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 6.0]],
[[4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0]],
[[4.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 6.0]]])
mean_cell_volume([])
cell_volume([4.0, 0.0, 0.0], [0.0, -10.0, 0.0], [0.0, 0.0, 6.0])
Explanation: We can think about three types of assert statement:
precondition - something that must be true at
the start of a function in order for it to work correctly.
invariant - something that is always true at a
particular point inside a piece of code.
postcondition - something that the function guarantees is true when it finishes.
Let's think of some and add these to the functions above. My collection is inserted below.
End of explanation |
13,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
reviews
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
for word in row[0].split(' '):
total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[-60:])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {}
for idx, word in enumerate(vocab):
word2idx[word] = idx
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run this, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
vocab_len = len(vocab)
def text_to_vector(text):
wordvec = np.zeros(vocab_len, dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
wordvec[idx] += 1
return wordvec
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
net = tflearn.input_data([None, vocab_len]) # Input
net = tflearn.fully_connected(net, 200, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 25, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
test_sentence("This movie is not bad at all! I cannot stop watching it!")
Explanation: Try out your own text!
End of explanation |
13,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 4
Step1: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3
Step2: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step3: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step4: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step5: Note
Step6: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint
Step7: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
Step8: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
Step9: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
Step10: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step11: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
Step12: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]
Step13: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following | Python Code:
import graphlab
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
def polynomial_sframe(feature, degree):
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
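# One possible implementation (a sketch following the Week 3 assignment; the
# 'power_1', 'power_2', ... column naming scheme is assumed):
def polynomial_sframe(feature, degree):
    poly_sframe = graphlab.SFrame()
    poly_sframe['power_1'] = feature
    if degree > 1:
        for power in range(2, degree + 1):
            poly_sframe['power_' + str(power)] = feature.apply(lambda x: x ** power)
    return poly_sframe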
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
sales = sales.sort(['sqft_living','price'])
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
l2_small_penalty = 1e-5
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use the .random_split function and make sure you set seed=0.
End of explanation
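# A sketch of the fit described above, shown for set_1 (the same pattern applies to the
# full data set and to set_2, set_3 and set_4; the variable names are illustrative):
poly15_set1 = polynomial_sframe(set_1['sqft_living'], 15)
my_features = poly15_set1.column_names()   # grab the feature names before adding 'price'
poly15_set1['price'] = set_1['price']
model_set1 = graphlab.linear_regression.create(poly15_set1, target='price',
                                               features=my_features,
                                               l2_penalty=l2_small_penalty,
                                               validation_set=None)
model_set1.get('coefficients').print_rows(num_rows=17)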
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
End of explanation
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
train_valid_shuffled[0:10] # rows 0 to 9
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
print int(round(validation4['price'].mean(), 0))
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
print int(round(train4['price'].mean(), 0))
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
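# A possible implementation of the function described above (a sketch: the residual
# sum of squares on each validation segment is used as the validation error):
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
    n = len(data)
    total_error = 0.0
    for i in xrange(k):
        start = (n * i) / k
        end = (n * (i + 1)) / k - 1
        validation_data = data[start:end + 1]
        training_data = data[0:start].append(data[end + 1:n])
        model = graphlab.linear_regression.create(training_data, target=output_name,
                                                  features=features_list,
                                                  l2_penalty=l2_penalty,
                                                  validation_set=None)
        residuals = model.predict(validation_data) - validation_data[output_name]
        total_error += (residuals * residuals).sum()
    return total_error / k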
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation |
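# A sketch of the loop described above (the degree-15 features are built once from
# train_valid_shuffled and reused for every candidate penalty):
import numpy as np
poly15_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
poly15_features = poly15_data.column_names()
poly15_data['price'] = train_valid_shuffled['price']
l2_penalties = np.logspace(1, 7, num=13)
cv_errors = [k_fold_cross_validation(10, l2, poly15_data, 'price', poly15_features)
             for l2 in l2_penalties]
plt.plot(l2_penalties, cv_errors, 'k.-')
plt.xscale('log')
plt.xlabel('l2_penalty')
plt.ylabel('average validation error')
print 'Best l2_penalty:', l2_penalties[np.argmin(cv_errors)]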
13,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Try not to peek at the solutions when you go through the exercises. ;-)
First let's make sure this notebook works well in both Python 2 and Python 3
Step1: Techniques for Training Deep Nets
Using He initialization and the ELU activation function (with the help of a partial())
Step2: Exercise 9
In this exercise, you will add a 50% dropout rate to the following neural network model below.
9.1) Add a training placeholder, of type tf.bool.
Tip
Step3: 9.3) Update the following training code to feed the value of the training placeholder, where appropriate, then run the code and see if the model performs better than without dropout.
Step4: Try not to peek at the solution below before you have done the exercise!
Step5: 9.3)
Step6: Early Stopping
Step7: Saving the model to disk so often slows down training. Let's save to RAM instead | Python Code:
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.__version__
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("tmp/data/")
Explanation: Try not to peek at the solutions when you go through the exercises. ;-)
First let's make sure this notebook works well in both Python 2 and Python 3:
End of explanation
from functools import partial
n_inputs = 28 * 28
n_hidden1 = 100
n_hidden2 = 100
n_outputs = 10
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("inputs"):
X = tf.placeholder(tf.float32, shape=[None, n_inputs], name="X")
y = tf.placeholder(tf.int32, shape=[None], name="y")
he_init = tf.contrib.layers.variance_scaling_initializer()
dense_layer = partial(tf.layers.dense,
kernel_initializer=he_init,
activation=tf.nn.elu)
hidden1 = dense_layer(X, n_hidden1, name="hidden1")
hidden2 = dense_layer(hidden1, n_hidden2, name="hidden2")
logits = dense_layer(hidden2, n_outputs, activation=None, name="output")
Y_proba = tf.nn.softmax(logits)
with tf.name_scope("train"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
with tf.name_scope("init_and_save"):
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 20
batch_size = 50
with tf.Session(graph=graph) as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: mnist.validation.images, y: mnist.validation.labels})
print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val)
save_path = saver.save(sess, "./my_mnist_model")
Explanation: Techniques for Training Deep Nets
Using He initialization and the ELU activation function (with the help of a partial()):
End of explanation
n_inputs = 28 * 28
n_hidden1 = 100
n_hidden2 = 100
n_outputs = 10
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("inputs"):
X = tf.placeholder(tf.float32, shape=[None, n_inputs], name="X")
y = tf.placeholder(tf.int32, shape=[None], name="y")
he_init = tf.contrib.layers.variance_scaling_initializer()
dense_layer = partial(tf.layers.dense,
kernel_initializer=he_init,
activation=tf.nn.elu)
hidden1 = dense_layer(X, n_hidden1, name="hidden1")
hidden2 = dense_layer(hidden1, n_hidden2, name="hidden2")
logits = dense_layer(hidden2, n_outputs, activation=None, name="output")
Y_proba = tf.nn.softmax(logits)
with tf.name_scope("train"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
with tf.name_scope("init_and_save"):
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: Exercise 9
In this exercise, you will add a 50% dropout rate to the following neural network model below.
9.1) Add a training placeholder, of type tf.bool.
Tip: you can use tf.placeholder_with_default() to make this False by default.
9.2) Add a dropout layer between the input layer and the first hidden layer, using tf.layers.dropout().
End of explanation
n_epochs = 20
batch_size = 50
with tf.Session(graph=graph) as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: mnist.validation.images, y: mnist.validation.labels})
print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val)
save_path = saver.save(sess, "./my_mnist_model")
Explanation: 9.3) Update the following training code to feed the value of the training placeholder, where appropriate, then run the code and see if the model performs better than without dropout.
End of explanation
n_inputs = 28 * 28
n_hidden1 = 100
n_hidden2 = 100
n_outputs = 10
dropout_rate = 0.5 # <= CHANGED
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("inputs"):
X = tf.placeholder(tf.float32, shape=[None, n_inputs], name="X")
y = tf.placeholder(tf.int32, shape=[None], name="y")
training = tf.placeholder_with_default(False, shape=[], name='training') # <= CHANGED
X_drop = tf.layers.dropout(X, dropout_rate, training=training) # <= CHANGED
he_init = tf.contrib.layers.variance_scaling_initializer()
dense_layer = partial(tf.layers.dense,
kernel_initializer=he_init,
activation=tf.nn.elu)
hidden1 = dense_layer(X_drop, n_hidden1, name="hidden1") # <= CHANGED
hidden2 = dense_layer(hidden1, n_hidden2, name="hidden2")
logits = dense_layer(hidden2, n_outputs, activation=None, name="output")
Y_proba = tf.nn.softmax(logits)
with tf.name_scope("train"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
with tf.name_scope("init_and_save"):
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: Try not to peek at the solution below before you have done the exercise! :)
Exercise 9 - Solution
9.1-2)
End of explanation
n_epochs = 20
batch_size = 50
with tf.Session(graph=graph) as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch, training: True}) # <= CHANGED
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: mnist.validation.images, y: mnist.validation.labels})
print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val)
save_path = saver.save(sess, "./my_mnist_model")
Explanation: 9.3)
End of explanation
n_epochs = 1000
batch_size = 50
best_acc_val = 0
check_interval = 100
checks_since_last_progress = 0
max_checks_without_progress = 100
with tf.Session(graph=graph) as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch, training: True})
if iteration % check_interval == 0:
acc_val = accuracy.eval(feed_dict={X: mnist.validation.images[:2000], y: mnist.validation.labels[:2000]})
if acc_val > best_acc_val:
best_acc_val = acc_val
checks_since_last_progress = 0
saver.save(sess, "./my_best_model_so_far")
else:
checks_since_last_progress += 1
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: mnist.validation.images[2000:], y: mnist.validation.labels[2000:]})
print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val, "Best validation accuracy:", best_acc_val)
if checks_since_last_progress > max_checks_without_progress:
print("Early stopping!")
saver.restore(sess, "./my_best_model_so_far")
break
acc_test = accuracy.eval(feed_dict={X: mnist.test.images[2000:], y: mnist.test.labels[2000:]})
print("Final accuracy on test set:", acc_test)
save_path = saver.save(sess, "./my_mnist_model")
Explanation: Early Stopping
End of explanation
def get_model_params():
gvars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
return {gvar.op.name: value for gvar, value in zip(gvars, tf.get_default_session().run(gvars))}
def restore_model_params(model_params):
gvar_names = list(model_params.keys())
assign_ops = {gvar_name: tf.get_default_graph().get_operation_by_name(gvar_name + "/Assign")
for gvar_name in gvar_names}
init_values = {gvar_name: assign_op.inputs[1] for gvar_name, assign_op in assign_ops.items()}
feed_dict = {init_values[gvar_name]: model_params[gvar_name] for gvar_name in gvar_names}
tf.get_default_session().run(assign_ops, feed_dict=feed_dict)
n_epochs = 1000
batch_size = 50
best_acc_val = 0
check_interval = 100
checks_since_last_progress = 0
max_checks_without_progress = 100
best_model_params = None
with tf.Session(graph=graph) as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch, training: True})
if iteration % check_interval == 0:
acc_val = accuracy.eval(feed_dict={X: mnist.validation.images[:2000], y: mnist.validation.labels[:2000]})
if acc_val > best_acc_val:
best_acc_val = acc_val
checks_since_last_progress = 0
best_model_params = get_model_params()
else:
checks_since_last_progress += 1
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: mnist.validation.images[2000:], y: mnist.validation.labels[2000:]})
print(epoch, "Train accuracy:", acc_train, "Validation accuracy:", acc_val, "Best validation accuracy:", best_acc_val)
if checks_since_last_progress > max_checks_without_progress:
print("Early stopping!")
break
if best_model_params:
restore_model_params(best_model_params)
acc_test = accuracy.eval(feed_dict={X: mnist.test.images[2000:], y: mnist.test.labels[2000:]})
print("Final accuracy on test set:", acc_test)
save_path = saver.save(sess, "./my_mnist_model")
Explanation: Saving the model to disk so often slows down training. Let's save to RAM instead:
End of explanation |
13,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Elemento Beam
Fundamento teórico
El elemento Beam (viga) es un elemento finito bidimensional donde las coordenadas locales y globales coinciden. Está caracterizado por una función de forma lineal. El elemento Beam tiene un modulo de elasticidad E, momento de inercia I y longitud L. Cada elemento Beam tiene dos nodos y se asume horizontal como se muestra en la figura. En este caso la matriz de rigidez del elemento está dada por la matriz siguiente, asumiendo que la deformación axial es despreciable
Step2: Ejemplo 2. Determine los desplazamientos nodales y rotaciones, fuerzas nodales globales, y fuerzas en elementos para la viga mostrada en la figura. Se ha discretizado la viga como se indica en la numeración nodal. La viga está fija en los nodos 1 y 5, y tiene un soporte de rodillo en el nodo 3. Las cargas verticales de 10 000 lb cada una son aplicadas en los nodos 2 y 4. Sea E=300x10<sup>6</sup> psi and I=500 in<sup>4</sup>.
<img src="src/beam-element/logan_E42.PNG" width="400px"> </img>
Step4: Ejemplo 3.
Step6: Ejemplo 4 | Python Code:
%matplotlib inline
import numpy as np
from nusa import *
import itertools
import matplotlib.pyplot as plt
def pairwise(iterable):
#~ "s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = itertools.tee(iterable)
next(b, None)
return zip(a, b)
# Input data
E = 210e9 # Pa
I = 1e-5
L = 1
P = 10e3
nelm = 10
parts = np.linspace(0,L,nelm)
nodos = []
for xc in parts:
cn = Node((xc,0))
nodos.append(cn)
elementos = []
for x in pairwise(nodos):
ni,nj = x[0], x[1]
ce = Beam((ni,nj),E,I)
elementos.append(ce)
m = BeamModel()
for n in nodos: m.add_node(n)
for e in elementos: m.add_element(e)
m.add_constraint(nodos[0], ux=0, uy=0, ur=0)
m.add_force(nodos[-1], (-P,))
m.plot_model()
m.solve()
m.plot_disp(1)
xx = np.linspace(0,L)
d = ((-P*xx**2.0)/(6.0*E*I))*(3*L - xx)
plt.plot(xx,d)
plt.axis("auto")
plt.xlim(0,1.1*L)
Explanation: Beam Element
Theoretical background
The Beam element is a two-dimensional finite element in which the local and global coordinates coincide. It is characterized by a linear shape function. The Beam element has a modulus of elasticity E, a moment of inertia I and a length L. Each Beam element has two nodes and is assumed to be horizontal, as shown in the figure. In this case the element stiffness matrix is given by the following matrix, assuming that the axial deformation is negligible:
$$
k = \frac{EI}{L^3}
\begin{bmatrix}
12 & 6L & -12 & 6L \\
6L & 4L^2 & -6L & 2L^2 \\
-12 & -6L & 12 & -6L \\
6L & 2L^2 & -6L & 4L^2
\end{bmatrix}
$$
<img src="src/beam-element/beam_element.PNG" width="200px"> </img>
Clearly, the Beam element has four degrees of freedom, two at each node: one transverse displacement and one rotation. The usual sign convention is adopted: displacements are positive upwards and rotations are positive counterclockwise.
Worked examples
Example 1. Cantilever beam
End of explanation
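# A small sketch (independent of NuSA) of the local stiffness matrix written above,
# handy for checking the element formulation by hand:
def beam_local_stiffness(E, I, L):
    """4x4 stiffness matrix of a 2D beam element: one transverse displacement and
    one rotation per node, axial deformation neglected."""
    return (E * I / L**3) * np.array([[ 12,    6*L,   -12,    6*L  ],
                                      [ 6*L,  4*L**2, -6*L,  2*L**2],
                                      [-12,   -6*L,    12,   -6*L  ],
                                      [ 6*L,  2*L**2, -6*L,  4*L**2]])

print(beam_local_stiffness(E=210e9, I=1e-5, L=1.0))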
Logan, D. (2007). A first course in the finite element analysis.
Example 4.2 , pp. 166.
from nusa.core import *
from nusa.model import *
from nusa.element import *
# Input data
E = 30e6
I = 500.0
P = 10e3
L = 10*(12.0) # ft -> in
# Model
m1 = BeamModel("Beam Model")
# Nodes
n1 = Node((0,0))
n2 = Node((10*12,0))
n3 = Node((20*12,0))
n4 = Node((30*12,0))
n5 = Node((40*12,0))
# Elements
e1 = Beam((n1,n2),E,I)
e2 = Beam((n2,n3),E,I)
e3 = Beam((n3,n4),E,I)
e4 = Beam((n4,n5),E,I)
# Add elements
for nd in (n1,n2,n3,n4,n5): m1.add_node(nd)
for el in (e1,e2,e3,e4): m1.add_element(el)
m1.add_force(n2,(-P,))
m1.add_force(n4,(-P,))
m1.add_constraint(n1, ux=0,uy=0,ur=0) # fixed
m1.add_constraint(n5, ux=0,uy=0,ur=0) # fixed
m1.add_constraint(n3, uy=0, ur=0) # fixed
m1.add_constraint(n2, ur=0)
m1.add_constraint(n4, ur=0)
m1.plot_model()
m1.solve() # Solve model
# Nodal displacements and rotations
print("Nodal displacements and rotations")
print("UY \t UR")
for node in m1.get_nodes():
print("{0} \t {1}".format(node.uy, node.ur))
# Global nodal forces
print("\nGlobal nodal forces")
print("FY \t M")
for node in m1.get_nodes():
print("{0} \t {1}".format(node.fy, node.m))
# Element forces
print("\nElement forces")
for element in m1.get_elements():
print("\nFY:\n{0} \n M:\n{1}\n".format(element.fy, element.m))
# Plotting the shear-force and bending-moment diagrams
m1.plot_shear_diagram()
m1.plot_moment_diagram()
Explanation: Example 2. Determine the nodal displacements and rotations, the global nodal forces, and the element forces for the beam shown in the figure. The beam has been discretized as indicated by the node numbering. The beam is fixed at nodes 1 and 5 and has a roller support at node 3. Vertical loads of 10 000 lb each are applied at nodes 2 and 4. Let E=30x10<sup>6</sup> psi and I=500 in<sup>4</sup>.
<img src="src/beam-element/logan_E42.PNG" width="400px"> </img>
End of explanation
Beer & Johnston. (2012) Mechanics of materials.
Problem 9.13 , pp. 568.
# Input data
E = 29e6
I = 291 # W14x30
P = 35e3
L1 = 5*12 # in
L2 = 10*12 #in
# Model
m1 = BeamModel("Beam Model")
# Nodes
n1 = Node((0,0))
n2 = Node((L1,0))
n3 = Node((L1+L2,0))
# Elements
e1 = Beam((n1,n2),E,I)
e2 = Beam((n2,n3),E,I)
# Add elements
for nd in (n1,n2,n3): m1.add_node(nd)
for el in (e1,e2): m1.add_element(el)
m1.add_force(n2, (-P,))
m1.add_constraint(n1, ux=0, uy=0) # fixed
m1.add_constraint(n3, uy=0) # fixed
m1.solve() # Solve model
m1.plot_shear_diagram()
m1.plot_moment_diagram()
n1.fy
Explanation: Example 3.
End of explanation
Beer & Johnston. (2012) Mechanics of materials.
Problem 9.13 , pp. 568.
# Data
P1 = 3e3
P2 = 3e3
M1 = 450
E = 200e9
I = (1/12)*(50e-3)*(60e-3)**3
n1 = Node((0,0))
n2 = Node((0.3,0))
n3 = Node((0.5,0))
n4 = Node((0.7,0))
n5 = Node((1,0))
e1 = Beam((n1,n2), E, I)
e2 = Beam((n2,n3), E, I)
e3 = Beam((n3,n4), E, I)
e4 = Beam((n4,n5), E, I)
model_cp = BeamModel()
for nodo in (n1,n2,n3,n4,n5): model_cp.add_node(nodo)
for el in (e1,e2,e3,e4): model_cp.add_element(el)
model_cp.add_constraint(n1, ux=0, uy=0)
model_cp.add_constraint(n5, uy=0)
model_cp.add_force(n2, (-P1,))
model_cp.add_force(n4, (-P2,))
model_cp.add_moment(n3, (-M1,))
model_cp.solve()
model_cp.plot_shear_diagram()
model_cp.plot_moment_diagram()
# collect the vertical nodal forces and plot them along the beam
fy = []
for n in model_cp.get_nodes():
    fy.append(n.fy)
plt.plot(fy)
Explanation: Example 4
End of explanation |
13,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hints to know
Step1: Example code block
print("Hello man"
Example maths
$$
y = \frac{a}{b+c}
$$
$$
\int f(x) dx
$$
$$
\int f(x)\,dx
$$
Detailed info for LaTex in jupyter
Timing code
Step2: Note
Step3: Debugging with %pdb | Python Code:
print("Hello man")
Explanation: Hints to know
End of explanation
def fibo(n):
if n == 0:
return 0
elif n == 1:
return 1
return fibo(n-1) + fibo(n-2)
%timeit fibo(20)
Explanation: Example code block
print("Hello man"
Example maths
$$
y = \frac{a}{b+c}
$$
$$
\int f(x) dx
$$
$$
\int f(x)\,dx
$$
Detailed info for LaTeX in Jupyter
Timing code
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 1, 300)
for w in range(2, 6, 2):
plt.plot(x, np.sin(np.pi*x)*np.sin(2*w*np.pi*x))
Explanation: Note: Also it's possible to use %%timeit for the whole code block
Plotting
End of explanation
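# The note above mentions the %%timeit cell magic; in a real notebook it must be the
# very first line of its own cell, for example:
#
#   %%timeit
#   total = 0
#   for i in range(1000):
#       total += i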
%pdb
numbers = "hello"
sum(numbers)
Explanation: Debugging with %pdb
End of explanation |
13,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstrate using the simulator for a surface simulation, deterministic
integration.
Run time
Step1: Perform the simulation
Step2: Plot pretty pictures of what we just did | Python Code:
from tvb.datatypes.cortex import Cortex
from tvb.datatypes.local_connectivity import LocalConnectivity
from tvb.simulator.lab import *
Explanation: Demonstrate using the simulator for a surface simulation, deterministic
integration.
Run time: approximately 35 s (geodist step of local Connect) + ~5 min (workstation circa 2010).
Memory requirement: < 1 GB
End of explanation
LOG.info("Configuring...")
#Initialise a Model, Coupling, and Connectivity.
oscillator = models.Generic2dOscillator()
white_matter = connectivity.Connectivity(load_default=True)
white_matter.speed = numpy.array([4.0])
white_matter_coupling = coupling.Linear(a=2 ** -9)
#Initialise an Integrator
heunint = integrators.HeunDeterministic(dt=2 ** -4)
#Initialise some Monitors with period in physical time
mon_tavg = monitors.TemporalAverage(period=2 ** -1)
mon_savg = monitors.SpatialAverage(period=2 ** -2)
mon_eeg = monitors.EEG(period=2 ** -2)
#Bundle them
what_to_watch = (mon_tavg, mon_savg, mon_eeg)
#Initialise a surface:
#First define the function describing the "local" connectivity.
grey_matter = LocalConnectivity(cutoff=40.0)
grey_matter.equation.parameters['sigma'] = 10.0
grey_matter.equation.parameters['amp'] = 1.0
#then a scaling factor, to adjust the strength of the local connectivity
local_coupling_strength = numpy.array([-0.0115])
#finally, create a default cortex that includes the custom local connectivity.
default_cortex = Cortex(load_default=True)
default_cortex.local_connectivity = grey_matter
default_cortex.coupling_strength = local_coupling_strength
#Initialise Simulator -- Model, Connectivity, Integrator, Monitors, and surface.
sim = simulator.Simulator(model=oscillator, connectivity=white_matter,
integrator=heunint, monitors=what_to_watch,
surface=default_cortex)
sim.configure()
LOG.info("Starting simulation...")
#Perform the simulation
tavg_data = []
tavg_time = []
savg_data = []
savg_time = []
eeg_data = []
eeg_time = []
for tavg, savg, eeg in sim(simulation_length=2 ** 6):
if not tavg is None:
tavg_time.append(tavg[0])
tavg_data.append(tavg[1])
if not savg is None:
savg_time.append(savg[0])
savg_data.append(savg[1])
if not eeg is None:
eeg_time.append(eeg[0])
eeg_data.append(eeg[1])
LOG.info("finished simulation")
Explanation: Perform the simulation
End of explanation
#Make the lists numpy.arrays for easier use.
TAVG = numpy.array(tavg_data)
SAVG = numpy.array(savg_data)
EEG = numpy.array(eeg_data)
#Plot region averaged time series
figure(3)
plot(savg_time, SAVG[:, 0, :, 0])
title("Region average")
#Plot EEG time series
figure(4)
plot(eeg_time, EEG[:, 0, :, 0])
title("EEG")
#Surface movie, requires mayavi.malb
if IMPORTED_MAYAVI:
st = surface_timeseries(sim.surface, TAVG[:, 0, :, 0])
plot_local_connectivity(default_cortex)
#Show them
show()
Explanation: Plot pretty pictures of what we just did
End of explanation |
13,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Remote texting demo
Start a Mosquitto container first. For example
Step1: Start client
Step2: Prepare messages
Step3: Send out messages and get asynchronous results
Step4: Stop the demo | Python Code:
import os
import sys
import time
sys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\codes', 'client')))
sys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\codes', 'node')))
sys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\codes', 'shared')))
sys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\codes', 'micropython')))
import client
from collections import OrderedDict
Explanation: Remote texting demo
Start a Mosquitto container first. For example:
- Use codes\_demo\1_start_broker.sh to start a Mosquitto container on Raspberry Pi.
- Config files are in mqtt_config\mqtt.
- set allow_anonymous true in mqtt_config\mqtt\config\mosquitto.conf to allow anonymous client.
Getting Started
What this notebook does:
- Using a client on PC
- List connected nodes
- Send text to remote node and display on OLED screen.
End of explanation
the_client = client.Client()
the_client.start()
while not the_client.status['Is connected']:
time.sleep(1)
print('Node not ready yet.')
Explanation: Start client
End of explanation
# messages _____________________________________________
messages = OrderedDict()
messages['show text'] = {'message_type': 'command',
'command': 'show text',
'kwargs': {'text': 'Hello World!',
'x': 0, 'y': 0,
'clear_first': True,
'show_now': True,
'hold_seconds': 5}}
Explanation: Prepare messages
End of explanation
print('\n[______________ Sending messages ______________]\n')
remote_node = 'NodeMCU_8a00'
# send out the messages
for message in messages.values():
time.sleep(0.1) # PyCharm needs this delay.
the_client.request(remote_node, message)
Explanation: Send out messages and get asynchronous results
End of explanation
# Stopping
the_client.stop()
the_client = None
print('\n[________________ Demo stopped ________________]\n')
Explanation: Stop the demo
End of explanation |
13,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Units and Quantities
Objectives
Use units
Create functions that accept quantities as arguments
Create new units
Basics
How do we define a Quantity and which parts does it have?
Step1: Quantities can be converted to other units systems or factors by using to()
Step2: We can do arithmetic operations when the quantities have the compatible units
Step3: Quantities can also be combined, for example to measure speed
Step4: <div style='background
Step5: Composed units
Many units are compositions of others, for example, one could create new combinationes for ease of use
Step6: and others are already a composition
Step7: Sometime we get no units quantitites
Step8: What happen if we add a number to this?
Step9: Equivalencies
Some conversions are not done by a conversion factor as between miles and kilometers, for example converting between wavelength and frequency.
Step10: Other built-in equivalencies are
Step11: Printing the quantities
Step12: Arrays
Quantities can also be applied to arrays
Step13: Plotting quantities
To work nicely with matplotlib we need to do as follows
Step14: Creating functions with quantities as units
We want to have functions that contain the information of the untis, and with them we can be sure that we will be always have the right result.
Step15: <div style='background
Step16: Create your own units
Some times we want to create our own units
Step17: <div style='background | Python Code:
from astropy import units as u
# Define a quantity length
# print it
# Type of quantity
# Type of unit
# Quantity
# value
# unit
# information
Explanation: Units and Quantities
Objectives
Use units
Create functions that accept quantities as arguments
Create new units
Basics
How do we define a Quantity and which parts does it have?
End of explanation
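# A possible way to fill in the cell above (the variable name and value are just examples):
length = 26.2 * u.meter     # define a quantity length
print(length)               # print it
print(type(length))         # type of quantity (an astropy Quantity)
print(type(length.unit))    # type of unit
print(length.value)         # numerical value
print(length.unit)          # unit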
# Convert it to: km, lyr
Explanation: Quantities can be converted to other units systems or factors by using to()
End of explanation
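# For example, reusing a quantity like the one defined above:
length = 26.2 * u.meter
print(length.to(u.km))      # kilometres
print(length.to(u.lyr))     # light-years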
# arithmetic with distances
Explanation: We can do arithmetic operations when the quantities have the compatible units:
End of explanation
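# For example:
total = 1.5 * u.km + 340 * u.m     # compatible units are combined automatically
print(total)
print(total.to(u.m))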
# calculate a speed
# decompose it
Explanation: Quantities can also be combined, for example to measure speed
End of explanation
#1
#2
#3
Explanation: <div style='background:#B1E0A8; padding:10px 10px 10px 10px;'>
<H2> Challenges </H2>
<ol>
<li> Convert the speed in imperial units (miles/hour) using: <br>
```from astropy.units import imperial```
</li>
<li> Calculate whether a pint is more than half litre<br>
<emph>You can compare quantities as comparing variables.</emph> <br>
Something strange? Check what definition of <a href='https://en.wikipedia.org/wiki/Pint'>pint</a> astropy is using.
</li>
<li> Do units work with areas? Calculate the area of a rectangle of 3 km of side and 5 meter of width. Show it in m^2 and convert it to yards^2</li>
</div>
End of explanation
# create a composite unit
# and in the imperial system
Explanation: Composed units
Many units are compositions of others; for example, one could create new combinations for ease of use:
End of explanation
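As an illustration, one possible composed unit (a sketch; the names are arbitrary):
from astropy.units import imperial
kmh = u.km / u.h                  # a composite unit, kilometres per hour
speed = 100 * kmh
speed.to(imperial.mile / u.h)     # roughly 62.14 mi / h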
# what can be converted from s-1?
# or Jules?
# Unity of R
Explanation: and others are already a composition:
End of explanation
# no units
Explanation: Sometimes we get quantities with no units
End of explanation
# arithmetic with no units
# final value of a no unit quantity
Explanation: What happens if we add a number to this?
End of explanation
# converting spectral quantities
# but doing it right
Explanation: Equivalencies
Some conversions cannot be done with a simple conversion factor (as between miles and kilometers); for example, converting between wavelength and frequency.
End of explanation
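For example, a sketch of a wavelength-to-frequency conversion using the spectral() equivalency (the H-alpha wavelength here is just an example value):
wavelength = 656.3 * u.nm
wavelength.to(u.THz, equivalencies=u.spectral())   # about 457 THz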
# finding the equivalencies
# but also using other systems
Explanation: Other built-in equivalencies are:
- parallax()
- Doppler (doppler_radio, doppler_optical, doppler_relativistic)
- spectral flux density
- brightness temperature
- temperature energy
- and you can build your own
End of explanation
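A quick sketch of the parallax equivalency mentioned above, for instance:
(1 * u.arcsec).to(u.parsec, equivalencies=u.parallax())   # 1 pc, by definition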
# Printing values with different formats
Explanation: Printing the quantities
End of explanation
# different ways of defining a quantity for a single value
# now with lists
# and arrays
# and its arithmetics
# angles are smart!
Explanation: Arrays
Quantities can also be applied to arrays
End of explanation
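For instance, a small sketch of quantities wrapping arrays (numpy is assumed to be available):
import numpy as np
distances = np.array([1.0, 2.0, 3.0]) * u.km
times = np.array([10.0, 20.0, 30.0]) * u.s
(distances / times).to(u.m / u.s)   # array([100., 100., 100.]) m / s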
# allowing for plotting
from astropy.visualization import quantity_support
quantity_support()
# loading matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
# Ploting the previous array
Explanation: Plotting quantities
To work nicely with matplotlib we need to do as follows:
End of explanation
# Create a function for the Kinetic energy
# run with and without units
Explanation: Creating functions with quantities as units
We want to have functions that carry the unit information, so we can be sure that we will always get the right result.
End of explanation
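A minimal sketch of such a unit-aware function (the function name and numbers are illustrative):
def kinetic_energy(mass, velocity):
    # returns the result converted to Joules, whatever units the inputs carry
    return (0.5 * mass * velocity**2).to(u.J)

kinetic_energy(75 * u.kg, 10 * u.m / u.s)   # 3750.0 J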
#4
# run it for some values
# on Mars:
Explanation: <div style='background:#B1E0A8; padding:10px 10px 10px 10px;'>
<H2> Challenges </H2>
<ol start=4>
<li> Create a function that calculates potential energy where *g* defaults to Earth value,
but could be used for different planets.
Test it for any of the *g* values for any other
<a href="http://www.physicsclassroom.com/class/circles/Lesson-3/The-Value-of-g">planet</a>.
</li>
</ol>
</div>
End of explanation
# Create units for a laugh scale
Explanation: Create your own units
Sometimes we want to create our own units:
End of explanation
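For example, a sketch of a made-up laugh scale built with def_unit (the unit names are arbitrary):
chuckle = u.def_unit('chuckle')
guffaw = u.def_unit('guffaw', 5 * chuckle)   # 1 guffaw is defined as 5 chuckles
(2 * guffaw).to(chuckle)                     # 10 chuckle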
#5
Explanation: <div style='background:#B1E0A8; padding:10px 10px 10px 10px;'>
<H2> Challenges </H2>
<ol start=5>
<li> Convert the area calculated before `rectangle_area` in <a href="https://en.wikipedia.org/wiki/Hectare">Hectare</a>
(1 hectare = 100 ares; 1 are = 100 m2).
</li>
</ol>
</div>
End of explanation |
13,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Turn magnitudes into colors
Step1: Filter out bad data
Step2: Create classification labels
Step3: Load the IDs of the narrowband population
Step4: Setup locations of images
Step5: Copy over pre-downloaded images
Step6: Get the images from the quarry
For technical details, see
Step7: Make the request via curl
1)
First you need to setup you authentication information. Add it to a file like galaxy_images_training/curl_netrc which should look like
Step8: Remove incomplete dirs, then transfer to main google drive directory
Step9: Now link these new images to the project data directory
Step10: Check that the directory contents are correct | Python Code:
combined["g_minus_r"] = combined.gcmodel_mag - combined.rcmodel_mag
combined["r_minus_i"] = combined.rcmodel_mag - combined.icmodel_mag
combined["i_minus_z"] = combined.icmodel_mag - combined.zcmodel_mag
combined["z_minus_y"] = combined.zcmodel_mag - combined.ycmodel_mag
Explanation: Turn magnitudes into colors
End of explanation
mask = np.isfinite(combined["g_minus_r"]) & np.isfinite(combined["r_minus_i"]) \
& np.isfinite(combined["i_minus_z"]) & np.isfinite(combined["z_minus_y"]) \
& np.isfinite(combined["icmodel_mag"]) \
& (~combined.gcmodel_flux_flags) & (~combined.rcmodel_flux_flags) \
& (~combined.icmodel_flux_flags) & (~combined.zcmodel_flux_flags) \
& (~combined.ycmodel_flux_flags)
combined = combined[mask]
Explanation: Filter out bad data
End of explanation
low_z = (combined.photo_z < .15)
low_mass = (combined.log_mass > 8) & (combined.log_mass < 9)
combined["low_z_low_mass"] = (low_z & low_mass)
combined.low_z_low_mass.mean()
Explanation: Create classification labels
End of explanation
target_ids = pd.read_csv("../narrowband_deconfusion/target_galaxies-HSC_ids.csv")
target_ids.head()
contaminant_ids = pd.read_csv("../narrowband_deconfusion/contaminant_galaxies-HSC_ids.csv")
contaminant_ids.head()
Explanation: Load the IDs of the narrowband population
End of explanation
targets_path = pathlib.Path.home() / "dwarfz" \
/ "galaxies_narrowband" \
/ "target"
targets_path.mkdir(parents=True, exist_ok=True)
contaminants_path = pathlib.Path.home() / "dwarfz" \
/ "galaxies_narrowband" \
/ "contaminant"
contaminants_path.mkdir(parents=True, exist_ok=True)
Explanation: Setup locations of images
End of explanation
google_dir = pathlib.Path.home() / "Google Drive" \
/ "HSC_COSMOS_galaxies" \
/ "galaxies"
predownloaded_ids = {int(image_dir.name)
for image_dir in google_dir.iterdir()
if image_dir.is_dir()}
i = 0
for (_, _, HSC_id) in target_ids.itertuples():
if HSC_id in predownloaded_ids:
image_dir = google_dir / str(HSC_id)
new_dir = targets_path / image_dir.name
if not new_dir.is_dir():
new_dir.symlink_to(image_dir, target_is_directory=True)
i += 1
print("symlinked {} galaxies".format(i))
j = 0
for (_, _, HSC_id) in contaminant_ids.itertuples():
if HSC_id in predownloaded_ids:
image_dir = google_dir / str(HSC_id)
new_dir = contaminants_path / image_dir.name
if not new_dir.is_dir():
new_dir.symlink_to(image_dir, target_is_directory=True)
j += 1
print("symlinked {} galaxies".format(j))
# num galaxies remaining to download
target_ids.shape[0] + contaminant_ids.shape[0] - i - j
target_ids_to_download = set(target_ids.HSC_id) - predownloaded_ids
contaminant_ids_to_download = set(contaminant_ids.HSC_id) - predownloaded_ids
ids_to_download = target_ids_to_download | contaminant_ids_to_download
Explanation: Copy over pre-downloaded images
End of explanation
galaxy_coords = combined[["catalog_2_ids", "ra", "dec"]]
galaxy_coords = galaxy_coords.rename(columns={"catalog_2_ids":"HSC_index"})
galaxy_coords = galaxy_coords.set_index("HSC_index")
galaxy_coords = galaxy_coords.loc[ids_to_download]
galaxy_coords.head()
width = "20asec"
filters = ["HSC-G", "HSC-R", "HSC-I", "HSC-Z", "HSC-Y"]
rerun = "pdr1_deep"
quarry_input_dir = pathlib.Path("galaxy_images_training") \
/ "quarry_input_files"
quarry_input_dir.mkdir(exist_ok=True)
quarry_name_format = "tmp_quarry_{:>04d}.txt"
batch_i = 0
files_in_batch = 0
max_files_per_batch = 1000
tmp_filename = quarry_input_dir / quarry_name_format.format(batch_i)
f = open(tmp_filename, mode="w")
print("#? ra dec filter sw sh rerun", file=f)
print_formatter = " {galaxy.ra:.6f}deg {galaxy.dec:.6f}deg {filter} {width} {width} {rerun} # {galaxy.Index}"
for galaxy in galaxy_coords.itertuples():
for filter in filters:
print(print_formatter.format(galaxy=galaxy,
width=width,
filter=filter,
rerun=rerun),
file=f)
files_in_batch += 1
if files_in_batch == max_files_per_batch:
f.close()
files_in_batch = 0
batch_i += 1
tmp_filename = quarry_input_dir / quarry_name_format.format(batch_i)
f = open(tmp_filename, mode="w")
print("#? ra dec filter sw sh rerun", file=f)
f.close()
!head -n 10 $tmp_filename
!wc -l $tmp_filename
!ls galaxy_images_training/quarry_input_files/ | wc -l
!ls -lh galaxy_images_training/quarry_input_files/ | head -n 10
Explanation: Get the images from the quarry
For technical details, see: https://hsc-release.mtk.nao.ac.jp/das_quarry/manual.html
I'll be downloading these directly into the google drive folder. Then, when everything is complete, I'll just symlink them into the appropriate project folders, split by target and contaminant.
Create a coordinates list
End of explanation
filenames = sorted(quarry_input_dir.iterdir())
min_batch_number_to_pull = 1
max_batch_number_to_pull = 100
new_data_dir = targets_path.parent / "staging"
new_data_dir.mkdir(exist_ok=True)
for i, filename in enumerate(filenames):
if i < min_batch_number_to_pull:
continue
if i >= max_batch_number_to_pull:
break
print("Currently processing file: {}".format(os.path.basename(filename)), end="\r", flush=True)
os.system(("curl -k --netrc-file galaxy_images_training/curl_netrc "
"https://hsc-release.mtk.nao.ac.jp/das_quarry/cgi-bin/quarryImage "
"--form list=@{filename} "
"| tar -xvf -").format(filename=filename))
arch_dirs = list(pathlib.Path.cwd().glob("arch-*"))
assert(len(arch_dirs)==1)
arch_dir = arch_dirs[0]
with open(filename, "r") as f:
_ = f.readline() # skip header
line_number = 1 # 1 indexed, and then also with header
for line in f:
line_number += 1
HSC_id = int(line.split("#")[-1].strip())
HSC_dir = new_data_dir / str(HSC_id)
HSC_dir.mkdir(exist_ok=True)
image_filenames = list(arch_dir.glob(
str(line_number) + "-cutout-HSC-?-????-pdr1_deep.fits"
))
if len(image_filenames) == 0:
continue
elif len(image_filenames) >1:
raise RuntimeError("Too many files for line {} id {}".format(
line_number, HSC_id,
))
image_filename = image_filenames[0]
# rename with HSC id and move to within `new_data_dir`
image_filename.rename(
HSC_dir / image_filename.name.replace(
"{}-cutout".format(line_number),
"{}-cutout".format(HSC_id),
)
)
arch_dir.rmdir()
Explanation: Make the request via curl
1)
First you need to set up your authentication information. Add it to a file like galaxy_images_training/curl_netrc, which should look like:
machine hsc-release.mtk.nao.ac.jp login <your username> password <your password>
This allows you to script the curl calls, without being prompted for your password each time
2a)
The curl call (in (2b)) will spit out files into a somewhat unpredictably named directory, like arch-170928-231223. You should rename this to match the batch suffix. You really should do this right away, so you don't get confused. In general I add the rename onto the same line as the curl call:
curl ... | tar xvf - && mv arch-* quarry_files_a
This only works if it finds one arch- directory, but you really shouldn't have multiple arch directories at any given time; that's a recipe for getting your galaxies mixed up.
2b)
Here's the actual curl invocation:
curl --netrc-file galaxy_images_training/curl_netrc https://hsc-release.mtk.nao.ac.jp/das_quarry/cgi-bin/quarryImage --form list=@<coord list filename> | tar xvf -
End of explanation
staging_dir = google_dir.parent / "staging"
num_removed = 0
for staged_dir in staging_dir.iterdir():
if not staged_dir.is_dir(): continue
num_images = len({*staged_dir.glob("*.fits")})
if num_images>5:
raise ValueError("{} has {} fits files".format(staged_dir, num_images))
elif num_images < 5:
print("too few images in {} (n={}); removing".format(
staged_dir,
num_images,
))
num_removed += 1
send2trash.send2trash(str(staged_dir))
else:
staged_dir.rename(staged_dir.parent.parent / "galaxies" / staged_dir.name)
num_removed
Explanation: Remove incomplete dirs, then transfer to main google drive directory
End of explanation
pre_linked_ids = {int(path.name) for path in contaminants_path.iterdir() if path.is_dir()}
pre_linked_ids |= {int(path.name) for path in targets_path.iterdir() if path.is_dir()}
len(pre_linked_ids)
narrowband_ids = set(target_ids.HSC_id) | set(contaminant_ids.HSC_id)
len(narrowband_ids)
all_downloaded_ids = {int(path.name) for path in google_dir.iterdir()
if path.is_dir()}
len(all_downloaded_ids)
num_to_link = 0
already_linked = 0
missing = 0
for HSC_id in narrowband_ids:
if HSC_id in pre_linked_ids:
already_linked += 1
if HSC_id not in all_downloaded_ids:
missing += 1
if HSC_id in target_ids.HSC_id.values:
class_path = targets_path
elif HSC_id in contaminant_ids.HSC_id.values:
class_path = contaminants_path
else:
raise ValueError("HSC id {} in neither targets nor contaminants".format(HSC_id))
image_dir = google_dir / str(HSC_id)
new_dir = class_path / image_dir.name
if not new_dir.is_dir():
# new_dir.symlink_to(image_dir, target_is_directory=True)
pass
num_to_link += 1
print("just linked: ", num_to_link)
print("previously linked: ", already_linked)
print("missing: ", missing)
Explanation: Now link these new images to the project data directory
End of explanation
for path in targets_path.iterdir():
if not path.is_dir():
continue
HSC_id = int(path.name)
if HSC_id not in target_ids.HSC_id.values:
raise ValueError("HSC id {} should not be in target path".format(HSC_id))
for path in contaminants_path.iterdir():
if not path.is_dir():
continue
HSC_id = int(path.name)
if HSC_id not in contaminant_ids.HSC_id.values:
raise ValueError("HSC id {} should not be in contaminant path".format(HSC_id))
Explanation: Check that the directory contents are correct
End of explanation |
13,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
STA 208
Step1: Load the following medical dataset with 750 patients. The response variable is survival dates (Y), the predictors are 104 measurements measured at a specific time (numerical variables have been standardized).
Step2: The response variable is Y for 2.1-2.3 and Z for 2.4. | Python Code:
import numpy as np
import pandas as pd
# dataset path
data_dir = "."
Explanation: STA 208: Homework 2
This is based on the material in Chapters 3, 4.4 of 'Elements of Statistical Learning' (ESL), in addition to lectures 4-6. Chunzhe Zhang came up with the dataset and the analysis in the second section.
Instructions
We use a script that extracts your answers by looking for cells in between the cells containing the exercise statements (beginning with Exercise X.X). So you
MUST add cells in between the exercise statements and add answers within them and
MUST NOT modify the existing cells, particularly not the problem statement
To make markdown, please switch the cell type to markdown (from code) - you can hit 'm' when you are in command mode - and use the markdown language. For a brief tutorial see: https://daringfireball.net/projects/markdown/syntax
In the conceptual exercises you should provide an explanation, with math when necessary, for any answers. When answering with math you should use basic LaTeX, as in
$$E(Y|X=x) = \int_{\mathcal{Y}} f_{Y|X}(y|x) dy = \int_{\mathcal{Y}} \frac{f_{Y,X}(y,x)}{f_{X}(x)} dy$$
for displayed equations, and $R_{i,j} = 2^{-|i-j|}$ for inline equations. (To see the contents of this cell in markdown, double click on it or hit Enter in escape mode.) To see a list of latex math symbols see here: http://web.ift.uib.no/Teori/KURS/WRK/TeX/symALL.html
1. Conceptual Exercises
Exercise 1.1. (5 pts) Ex. 3.29 in ESL
Exercise 1.2 (5 pts) Ex. 3.30 in ESL
Exercise 1.3 (5 pts) $Y \in {0,1}$ follows an exponential family model with natural parameter $\eta$ if
$$P(Y=y) = \exp\left( y \eta - \psi(\eta) \right).$$
Show that when $\eta = x^\top \beta$ then $Y$ follows a logistic regression model.
2. Data Analysis
End of explanation
sample_data = pd.read_csv(data_dir+"/hw2.csv", delimiter=',')
sample_data.head()
sample_data.V1 = sample_data.V1.eq('Yes').mul(1)
Explanation: Load the following medical dataset with 750 patients. The response variable is survival dates (Y), the predictors are 104 measurements measured at a specific time (numerical variables have been standardized).
End of explanation
X = np.array(sample_data.iloc[:,range(2,104)])
y = np.array(sample_data.iloc[:,0])
z = np.array(sample_data.iloc[:,1])
Explanation: The response variable is Y for 2.1-2.3 and Z for 2.4.
End of explanation |
13,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: The first step in creating your own class is to use the class keyword, then the name of the class as shown in Figure 4. In this course the class parent will always be object
Step2: Creating an instance of a class Circle
Let’s create the object RedCircle of type Circle to do the following
Step3: We can use the dir command to get a list of the object's methods. Many of them are default Python methods.
Step4: We can look at the data attributes of the object
Step5: We can change the object's data attributes
Step6: We can draw the object by using the method drawCircle()
Step7: We can increase the radius of the circle by applying the method add_radius(). Let's increase the radius by 2 and then by 5
Step8: Let’s create a blue circle. As the default colour is blue, all we have to do is specify what the radius is
Step9: As before we can access the attributes of the instance of the class by using the dot notation
Step10: We can draw the object by using the method drawCircle()
Step11: Compare the x and y axis of the figure to the figure for RedCircle; they are different.
The Rectangle Class
Let's create a class rectangle with the attributes of height, width and colour. We will only add the method to draw the rectangle object
Step12: Let’s create the object SkinnyBlueRectangle of type Rectangle. Its width will be 2 and height will be 3, and the colour will be blue
Step13: As before we can access the attributes of the instance of the class by using the dot notation
Step14: We can draw the object
Step15: Let’s create the object “FatYellowRectangle” of type Rectangle
Step16: We can access the attributes of the instance of the class by using the dot notation
Step17: We can draw the object | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <a href="http://cocl.us/topNotebooksPython101Coursera"><img src = "https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png" width = 750, align = "center"></a>
<a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
<h1, align=center>PYTHON OBJECTS AND CLASSES</h1>
Welcome!
Objects in programming are like objects in real life. Like life, there are different classes of objects. In this notebook, we will create two classes called Circle and Rectangle. By the end of this notebook, you will have a better idea about :
-what a class is
-what an attribute is
-what a method is
Don’t worry if you don’t get it the first time, as much of the terminology is confusing. Don’t forget to do the practice tests in the notebook.
Introduction
Creating a Class
The first part of creating a class is giving it a name: In this notebook, we will create two classes, Circle and Rectangle. We need to determine all the data that make up that class, and we call that an attribute. Think about this step as creating a blue print that we will use to create objects. In figure 1 we see two classes, circle and rectangle. Each has their attributes, they are variables. The class circle has the attribute radius and colour, while the rectangle has the attribute height and width. Let’s use the visual examples of these shapes before we get to the code, as this will help you get accustomed to the vocabulary.
<a ><img src = "https://ibm.box.com/shared/static/h2w03relr84lb8ofto2zk0dp9naiykfg.png" width = 500, align = "center"></a>
<h4 align=center>
#### Figure 1: Classes circle and rectangle, and each has their own attributes. The class circle has the attribute radius and colour, the rectangle has the attribute height and width.
#### Instances of a Class: Objects and Attributes
An instance of an object is the realisation of a class, and in figure 2 we see three instances of the class circle. We give each object a name: red circle, yellow circle and green circle. Each object has different attributes, so let's focus on the attribute of colour for each object.
<a ><img src = "https://ibm.box.com/shared/static/bz20uxc78sbv8knixnl3a52z2u2r74zp.png" width = 500, align = "center"></a>
<h4 align=center>
Figure 2: Three instances of the class circle or three objects of type circle.
The colour attribute for the red circle is the colour red, for the green circle object the colour attribute is green, and for the yellow circle the colour attribute is yellow.
#### Methods
Methods give you a way to change or interact with the object; they are functions that interact with objects. For example, let’s say we would like to increase the radius by a specified amount of a circle. We can create a method called **add_radius(r)** that increases the radius by **r**. This is shown in figure 3, where after applying the method to the "orange circle object", the radius of the object increases accordingly. The “dot” notation means to apply the method to the object, which is essentially applying a function to the information in the object.
<a ><img src = "https://ibm.box.com/shared/static/53b39xh7snepk0my8z7t9n9wzres4drf.png" width = 500, align = "center"></a>
<h4 align=center>
Figure 3: Applying the method “add_radius” to the object orange circle object .
# Creating a Class
Now we are going to create a class circle, but first, we are going to import a library to draw the objects:
End of explanation
class Circle(object):
def __init__(self,radius=3,color='blue'):
self.radius=radius
self.color=color
def add_radius(self,r):
self.radius=self.radius+r
return(self.radius)
def drawCircle(self):
plt.gca().add_patch(plt.Circle((0, 0), radius=self.radius, fc=self.color))
plt.axis('scaled')
plt.show()
Explanation: The first step in creating your own class is to use the class keyword, then the name of the class as shown in Figure 4. In this course the class parent will always be object:
<a ><img src = "https://ibm.box.com/shared/static/q9394f3aip7lbu4k1yct5pczst5ec3sk.png" width = 400, align = "center"></a>
<h4 align=center>
Figure 4: Three instances of the class circle or three objects of type circle.
The next step is a special method called a constructor **__init__**, which is used to initialize the object. The input are data attributes. The term **self** contains all the attributes in the set. For example the **self.color** gives the value of the attribute colour and **self.radius** will give you the radius of the object. We also have the method **add_radius()** with the parameter **r**, the method adds the value of **r** to the attribute radius. To access the radius we use the sintax **self.radius**. The labeled syntax is summarized in Figure 5:
<a ><img src = "https://ibm.box.com/shared/static/25j0jezklf6snhh3ps61d0djzwx8kgwa.png" width = 600, align = "center"></a>
<h4 align=center>
Figure 5: Labeled syntax of the object circle.
The actual object is shown below. We include the method drawCircle to display the image of a circle. We set the default radius to 3 and the default colour to blue:
End of explanation
RedCircle=Circle(10,'red')
Explanation: Creating an instance of a class Circle
Let’s create the object RedCircle of type Circle to do the following:
End of explanation
dir(RedCircle)
Explanation: We can use the dir command to get a list of the object's methods. Many of them are default Python methods.
End of explanation
RedCircle.radius
RedCircle.color
Explanation: We can look at the data attributes of the object:
End of explanation
RedCircle.radius=1
RedCircle.radius
Explanation: We can change the object's data attributes:
End of explanation
RedCircle.drawCircle()
Explanation: We can draw the object by using the method drawCircle():
End of explanation
print('Radius of object:',RedCircle.radius)
RedCircle.add_radius(2)
print('Radius of object of after applying the method add_radius(2):',RedCircle.radius)
RedCircle.add_radius(5)
print('Radius of object of after applying the method add_radius(5):',RedCircle.radius)
Explanation: We can increase the radius of the circle by applying the method add_radius(). Let's increase the radius by 2 and then by 5:
End of explanation
BlueCircle=Circle(radius=100)
Explanation: Let’s create a blue circle. As the default colour is blue, all we have to do is specify what the radius is:
End of explanation
BlueCircle.radius
BlueCircle.color
Explanation: As before we can access the attributes of the instance of the class by using the dot notation:
End of explanation
BlueCircle.drawCircle()
Explanation: We can draw the object by using the method drawCircle():
End of explanation
class Rectangle(object):
def __init__(self,width=2,height =3,color='r'):
self.height=height
self.width=width
self.color=color
def drawRectangle(self):
import matplotlib.pyplot as plt
plt.gca().add_patch(plt.Rectangle((0, 0),self.width, self.height ,fc=self.color))
plt.axis('scaled')
plt.show()
Explanation: Compare the x and y axis of the figure to the figure for RedCircle; they are different.
The Rectangle Class
Let's create a class rectangle with the attributes of height, width and colour. We will only add the method to draw the rectangle object:
End of explanation
SkinnyBlueRectangle= Rectangle(2,10,'blue')
Explanation: Let’s create the object SkinnyBlueRectangle of type Rectangle. Its width will be 2 and height will be 3, and the colour will be blue:
End of explanation
SkinnyBlueRectangle.height
SkinnyBlueRectangle.width
SkinnyBlueRectangle.color
Explanation: As before we can access the attributes of the instance of the class by using the dot notation:
End of explanation
SkinnyBlueRectangle.drawRectangle()
Explanation: We can draw the object:
End of explanation
FatYellowRectangle = Rectangle(20,5,'yellow')
Explanation: Let’s create the object “FatYellowRectangle” of type Rectangle :
End of explanation
FatYellowRectangle.height
FatYellowRectangle.width
FatYellowRectangle.color
Explanation: We can access the attributes of the instance of the class by using the dot notation:
End of explanation
FatYellowRectangle.drawRectangle()
Explanation: We can draw the object:
End of explanation |
13,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing nemeth.json in python to integrate in pybrl
nemeth.json is a file which includes the Nemeth code in order to translate LaTeX files. I found it in the latex2nemeth project by Antonis Tsolomitis and Andreas Papasalοuros, which translates LaTeX files into Braille.
I needed to convert this file in order to integrate it into pybrl, which means
Step1: In the JSON file, there are three categories of symbols | Python Code:
# Import the dependencies
import six # Python 2 and 3 compatibility
import json # Load/Save JSON
import pybrl as brl # pybrl
# Load the JSON file
jdata = {}
with open("nemeth.json", 'r') as f:
jdata = json.load(f)
jdata.keys()
Explanation: Parsing nemeth.json in python to integrate in pybrl
nemeth.json is a file which includes the Nemeth code in order to translate LaTeX files. I found it in the latex2nemeth project by Antonis Tsolomitis and Andreas Papasalοuros, which translates LaTeX files into Braille.
I needed to convert this file in order to integrate it into pybrl, which means:
- Changing the values which are in unicode, into the raise-dot representation that is used in pybrl. For instance, the plus (+) sign ⠮ needs to be converted into "011101".
- The keys from the JSON file, need to be converted/used within pybrl later in order to do the math translation.
The latter won't be done now, but the conversion of the values is more crucial for now. As of the latest commit, pybrl can also translate unicode braille symbols into the raise-dot representation, which will be used in order to parse the JSON file.
The final result of this notebook is already saved in nemeth.dict.
The purpose of this notebook:
I want to illustrate how to manipulate such data using functions within pybrl, in order to integrate new languages or symbols. Hopefully, this will help understanding how data is used within the program.
End of explanation
# Convert the symbols
raise_dot = {}
for key,val in six.iteritems(jdata['mathSymbols']):
tmp_repr = brl.fromUnicodeSymbols(val)[0] # Get the raise-dot representation list. e.g. ['000101', '100000']
# Remember that whatever is represented by multiple braille cells, can be represented by concatenating the two representations.
# So, we just join the list:
raise_dot[key] = "".join(tmp_repr)
# Let's see the result:
raise_dot
# Save the data into nemeth.dict
with open("nemeth.dict", 'w') as f:
json.dump(raise_dot, f)
Explanation: In the JSON file, there are three categories of symbols:
- theoremSymbols are the numeric symbols 0-9
- letters are English and Greek characters and symbols used in standard text. These aren't needed because they are already included in pybrl and are used in different modules of the program.
- mathSymbols include the nemeth symbols that we need.
End of explanation |
13,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align
Step8: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are
Step10: <p style="color
Step11: run your solution on all test_images and make copies into the test_images directory).
Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos
Step12: Let's try the one with the solid white lane on the right first ...
<p style="color
Step14: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step16: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step18: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
### <p style="color | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image
Explanation: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
End of explanation
import math
def grayscale(img):
Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
you should call plt.imshow(gray, cmap='gray')
return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
Applies the Canny transform
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
Applies a Gaussian Noise kernel
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((*img.shape, 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
return cv2.addWeighted(initial_img, α, img, β, λ)
# Step 1: Detect segments on a single image
def detect_seg(frame):
#grayscale
gray = grayscale(frame)
x_end = gray.shape[1] #960
y_end = gray.shape[0] #540
# Gaussian blur
kernel_size = 5
blur_gray = gaussian_blur(gray,kernel_size)
# Edge contours
low_threshold = 50
high_threshold =150
edges = canny(blur_gray, low_threshold, high_threshold)
# Marked region
horizon_line = 320
vertices = np.array([[(55,y_end),(450,horizon_line),(490,horizon_line),(x_end,y_end)]], dtype = np.int32)
marked_region = region_of_interest(edges,vertices)
# Lines
line_edges = hough_lines(marked_region, rho = 2, theta = np.pi/180,\
threshold = 10, min_line_len =50, \
max_line_gap = 20)
frame = weighted_img(line_edges,frame)
return frame
# Result of Step 2
plt.imshow(detect_seg(image))
Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
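As a quick illustration of cv2.inRange for color selection (a sketch with hypothetical thresholds, not part of the pipeline below; cv2 and numpy are imported above):
def select_white(img, white_thresh=200):
    # keep near-white pixels; the threshold is illustrative and would need tuning
    lower = np.array([white_thresh, white_thresh, white_thresh])
    upper = np.array([255, 255, 255])
    mask = cv2.inRange(img, lower, upper)
    return cv2.bitwise_and(img, img, mask=mask)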
# Test segment lane detection on video
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
result = detect_seg(image)
return result
yellow_output_seg = 'yellow_seg.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output_seg, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(yellow_output_seg))
# Step2: Extrapolate lanes from the horizon to the base of frame
def extrapo(x1, y1, x2, y2, horizon_line, frame_height):
if x1 == x2:
if y1 < y2:
y1 = horizon_line
y2 = frame_height
else:
y2 = horizon_line
y1 = frame_height
if y1 < y2:
slope = (y2-y1)/(x2-x1)
x1 = ((horizon_line-y1)/slope)+ x1
y1 = horizon_line
x2 = ((frame_height-y2)/slope) + x2
y2 = frame_height
else:
slope = (y2-y1)/(x2-x1)
x2 = ((horizon_line-y2)/slope) + x2
y2 = horizon_line
x1 = ((frame_height - y1)/slope) + x1
y1 = frame_height
return x1, y1, x2, y2
#Step 3: Inital extend lane detection function
def detect(frame):
#grayscale
gray = grayscale(frame)
x_end = gray.shape[1] #960
y_end = gray.shape[0] #540
# Gaussian blur
kernel_size = 3
blur_gray = gaussian_blur(gray,kernel_size)
# Edge contours
low_threshold = 50
high_threshold =150
edges = canny(blur_gray, low_threshold, high_threshold)
# Marked region
horizon_line = 0.6*y_end
vertices = np.array([[(0,y_end),(x_end/2-20,horizon_line),(x_end/2+20,horizon_line),(x_end,y_end)]], dtype = np.int32)
marked_region = region_of_interest(edges,vertices)
# Lines
line_edges = cv2.HoughLinesP(marked_region, rho = 2, theta = np.pi/180,\
threshold = 10, minLineLength =45, \
maxLineGap = 20)
# Set up a slope threshold to filter useless lines
#min_slope = abs((y_end - horizon_line)*2/x_end)
min_theta = 0.45
#plt.imshow(line_edges)
if line_edges is not None:
left_bound = None #left-most of right line
right_bound = None #right_most of left line
dist1 = x_end / 2 # centre line of frame x = x_end
dist2 = x_end / 2
for line in line_edges:
for x1, y1, x2, y2 in line:
slope = (y2-y1)/(x2-x1)
theta = np.abs(np.arctan2((y2-y1),(x2-x1)))
if theta > min_theta:
if slope > 0: # right lane
dist = x1+(y_end-y1)/(y2-y1)*(x2-x1)-x_end/2 #baseline y= y_end, dist between centra frame & right lane
if dist < dist1:
right_bound = (x1, y1, x2, y2)
dist1 = dist
if slope < 0:
dist = x_end/2 - x1 - (y_end-y1)/(y2-y1)*(x2-x1)
if dist < dist2:
left_bound = (x1, y1, x2, y2)
dist2 = dist
line_img = np.zeros((*gray.shape,3), dtype=np.uint8)
if left_bound is not None:
left_bound = extrapo(left_bound[0], left_bound[1], left_bound[2], left_bound[3], horizon_line, y_end)
left_bound = list(map(int,left_bound))
cv2.line(line_img, (left_bound[0],left_bound[1]), (left_bound[2], left_bound[3]), [255,0,0], 8)
if right_bound is not None:
right_bound = extrapo(right_bound[0], right_bound[1], right_bound[2], right_bound[3], horizon_line, y_end)
right_bound = list(map(int, right_bound))
cv2.line(line_img, (right_bound[0],right_bound[1]), (right_bound[2], right_bound[3]), [255,0,0], 12)
frame = weighted_img(line_img,frame)
return frame
# Step 4: Test on sigle image & result
plt.imshow(detect(image))
import os
files = os.listdir("test_images/")
newpath = "./test_images_copy/"
if not os.path.exists(newpath):
os.makedirs(newpath)
print(newpath)
# Step 5: Run on all test_image and save results
for pic in files:
frame = mpimg.imread("test_images/"+pic)
lanes = detect(frame)
filename = pic[:-4]+'_copy'+'.jpg'
cv2.imwrite(newpath+filename,lanes)
Explanation: <p style="color:red;"> NOTE: First result " yellow_seg.mp4 " (wrt Critera II) - which is equivalent to the given video sample " raw-lines-example.mp4 "</p>
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image with lines are drawn on lanes)
result = detect(image)
return result
Explanation: run your solution on all test_images and make copies into the test_images directory).
Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
End of explanation
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
<p style="color:red;"> NOTE: Second result wrt Critera III - connect/extrapolate lane segments together;</p>
End of explanation
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(yellow_output))
Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(challenge_output))
Explanation: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
### <p style="color:red;">REFLECTIONS</p>
1. Pipeline
<p>Step1: Grayscale</p>
<p>Step2: Gauss filter</p>
<p>Step3: Canny detection</p>
<p>Step4: Set regions by a horizon line and a minimum slope wrt this horizon line</p>
<p>Step5: Divide lines into left/right groups according to the sign of the slope (positive/negative)</p>
<p>Step6: Get the right-most bound from the left-line group; get the left-most bound from the right-line group</p>
<p>Step7: Scale the line according to the bound in fixed regions</p>
2. How to improve
How to improve the robustness ?
Get the desired slope from candidate line edges, in particular for curved lanes
In my case, the lane is not very continuous because the bound of the lane changes a lot. A better way is to take a mean slope over the left/right line groups to keep smoothness from frame to frame. But my way of getting the bound is still useful, because in real driving we mostly care about the lane's inner bound.
Reduce the perspective effect of frame
The perspective effect means that in the video, lanes appear to converge at the horizon line. In fact, the horizon line is tricky to select.
Filter the illumination disturbance (ex. shadows in challenge video)
In the challenge video, it's obvious that the algorithm does not work well when there are shadows. It seems difficult to remove this disturbance just by changing the filter (Gaussian, median). Maybe a tracking method, or another filtering approach? I'm not sure.
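As a rough illustration of the mean-slope smoothing idea above (a sketch only; the helper and buffer names are hypothetical and this is not part of the submitted pipeline):
from collections import deque

slope_history = deque(maxlen=10)   # keep the last N accepted slopes

def smooth_slope(new_slope):
    # average the current slope with recent frames to reduce jitter
    slope_history.append(new_slope)
    return sum(slope_history) / len(slope_history)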
3. Conclusion
The marked-region based lane detection works well for straight lanes but is not very suitable for a curved lane. The region of interest is tricky to define. At the same time, the parameters of cv2.HoughLinesP require trial and error, mainly setting minLineLength to about 20~30% of the total line length and maxLineGap to between 5~10% of the expected total line length.
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
13,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step3: Naive forecasting
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step4: Trend and Seasonality
Step5: All right, this looks realistic enough for now. Let's try to forecast it. We will split it into two periods
Step6: Naive Forecast
Step7: Let's zoom in on the start of the validation period
Step8: You can see that the naive forecast lags 1 step behind the time series.
Now let's compute the mean absolute error between the forecasts and the predictions in the validation period | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
Just an arbitrary pattern, you can change it if you wish
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
Repeats the same pattern at each period
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
Explanation: Naive forecasting
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c02_naive_forecasting.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c02_naive_forecasting.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Setup
End of explanation
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
Explanation: Trend and Seasonality
End of explanation
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
Explanation: All right, this looks realistic enough for now. Let's try to forecast it. We will split it into two periods: the training period and the validation period (in many cases, you would also want to have a test period). The split will be at time step 1000.
End of explanation
naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, naive_forecast, label="Forecast")
Explanation: Naive Forecast
End of explanation
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150, label="Series")
plot_series(time_valid, naive_forecast, start=1, end=151, label="Forecast")
Explanation: Let's zoom in on the start of the validation period:
End of explanation
errors = naive_forecast - x_valid
abs_errors = np.abs(errors)
mae = abs_errors.mean()
mae
Explanation: You can see that the naive forecast lags 1 step behind the time series.
Now let's compute the mean absolute error between the forecasts and the predictions in the validation period:
End of explanation |
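As an aside, a similar one-liner (a sketch) gives the mean squared error, another common forecasting metric:
mse = np.square(naive_forecast - x_valid).mean()
mse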
13,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Python as a Calculator
Let's try some simple python commands
Numbers
The interpreter acts as a simple calculator
Step1: With Python, use ** operator to calculate powers.
Step2: Use equal sign(=) to assign a value to variable like math variable.
Step3: If a variable is not defined (assigned a value), trying to use it will give you an error
Step4: List
Python knows a number of compound data types, used to group together other values. The most versatile is the list, which can be written as a list of comma-separated values (items) between square brackets. Lists might contain items of different types, but usually the items all have the same type.
Step5: List can be indexed, with the first item having index 0.
Step6: In addition to indexing, slicing is also supported. While indexing is used to obtain individual item, slicing allows you to obtain sub-list
Step7: You can replace the item in the list and add new items at the end of the list.
Step8: List support the + operation.
Step9: Strings
Besides numbers, Python can also manipulate strings, which can be expressed in several ways. They can be enclosed in single quotes ('...') or double quotes ("...") with the same result. \ can be used to escape quotes
Step10: Strings also support indexing and slicing like lists. You can see a string as a list of characters.
Step11: Python Iteration
Step12: Bool Type
Like most languages, Python also has bool Type
Step13: Use bool operators to create bool type
Step14: Other functions may also create bool type.
Step15: Dict
Dict is another very important data structure in python
Compared to a list, it is an unordered set of objects.
Moreover, the items are accessed via keys and not via their position
Step16: Conditional Statement
Conditional statements are used to execute code fragments based on a condition.
The most commonly used constructs are the if-else and while statements
Step17: Function
Step18: Exercises 1
Please construct a function named productEven which takes a list of integers as input and returns the product of the even numbers in the list.
For example
Step19: Exercises 2
Write code which will find all numbers that are divisible by 7 but are not a multiple of 5,
between 2000 and 3200 (both included).
Step20: Exercises 3
Write a Python program to count how many times the number 4 appears in a given list.
Step21: Exercises 4
Write a Python program to compute the greatest common divisor (GCD) of two positive integers. | Python Code:
4
2 + 2
50 - 5*6
(50-5)*6
8/5
8//5 # Floor division discards the fractional part
8%5 # The % operator returns the remainder of the division
Explanation: Using Python as a Calculator
Let's try some simple python commands
Numbers
The interpreter acts as a simple calculator: you can type an expression at it and it will write the value. Expression syntax is straightforward: the operators +, -, * and / work just like in most other languages (for example, Pascal or C); parentheses (()) can be used for grouping.
End of explanation
5 ** 3 # 5 cubed (5 to the power of 3)
type(4)
type(1.3)
Explanation: With Python, use the ** operator to calculate powers.
End of explanation
width = 30
width
width = 30
height = 2
width * height
Explanation: Use the equal sign (=) to assign a value to a variable, just as in mathematics.
End of explanation
n # Try to access an undefined variable: this raises a NameError
print(5)
print(width)
Explanation: If a variable is not defined (assigned a value), trying to use it will give you an error:
End of explanation
empty_list = []
empty_list
type(empty_list)
squares = [1, 4, 9, 16, 25]
squares
Explanation: List
Python knows a number of compound data types, used to group together other values. The most versatile is the list, which can be written as a list of comma-separated values (items) between square brackets. Lists might contain items of different types, but usually the items all have the same type.
End of explanation
squares[0]
squares[1]
len(squares)
squares[len(squares) - 1]
squares[-1] # Last item
squares[-2] # Second-last item
Explanation: Lists can be indexed, with the first item having index 0.
End of explanation
squares = [1, 2, 3, 4, 5]
squares[1:4] # Get sub-list from the 1 to 4 with step size 1
squares[1:4:2] # Get sub-list from 1 to 4 with step size 2
squares[:4] # from the beginning of the list to index 4 with step size 1
squares[1:] # from 1 to the end of the list with step size 1
squares[::2] # from the beginning to the end with step size 2
Explanation: In addition to indexing, slicing is also supported. While indexing is used to obtain an individual item, slicing allows you to obtain a sub-list:
Format of slicing: begin:end:step_size
End of explanation
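As a small extra sketch of the begin:end:step_size rule just described (plain Python, no new assumptions), a negative step walks the list backwards:
squares[::-1] # a reversed copy of the list
squares[-3:] # the last three items
squares[4:1:-1] # from index 4 down to (but not including) index 1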
names = []
names.append('John')
names
names.append('Paul')
names
names[1] = 'Alice'
names
names2 = ['Ryan', 'Tang']
names2
names = names + names2
names
names.append(['Fang', 'HE'])
names
squares.append(11)
squares
squares[1:3] = [12, 13, 14]
squares
len(squares) # calculate the number of item in the list
Explanation: You can replace items in the list and add new items at the end of the list.
End of explanation
list1 = [1, 2]
list2 = [3, 4]
list3 = list1 + list2
list3
list1 = [1, 2, 3]
list2 = [x**2 for x in list1]
list2
Explanation: Lists support the + operation.
End of explanation
'Hello World!'
type('hello')
'doesn\'t'
s = 'Hello World'
s
Explanation: Strings
Besides numbers, Python can also manipulate strings, which can be expressed in several ways. They can be enclosed in single quotes ('...') or double quotes ("...") with the same result. \ can be used to escape quotes:
End of explanation
s[1:]
s[-1]
s1 = 'Hello '
s2 = 'World!'
s3 = s1 + s2
s3
s3.isdigit()
s3.isupper()
s3.lower()
Explanation: Strings also support indexing and slicing, just like lists. You can think of a string as a list of characters.
End of explanation
squares = [1, 4, 9, 16, 25]
for s in squares:
print(s)
for i in range(10):
print(i)
for i in range(1, 10):
print(i)
for i in range(1, 10, 2):
print(i)
N = 8
fact = 1
for i in range(1, N+1):
fact = fact * i
fact
# For any i, we list the integer less than i
for i in range(5):
print("i=", i)
l = []
for j in range(0,i):
l.append(j)
print(l)
Explanation: Python Iteration
End of explanation
True
False
type(True)
Explanation: Bool Type
Like most languages, Python also has a bool type: True and False.
End of explanation
3 > 1
3 < 1
3 == 1
3 == 3
3 != 2
b = 3 > 4
b
Explanation: Use comparison operators to create bool values
End of explanation
even_number = [2, 4, 6, 8, 10, 12]
even_number
6 in even_number
3 in even_number
Explanation: Other operations, such as the in membership test, also produce bool values.
End of explanation
empty = {}
empty
type(empty)
food = {'ham': 'yes',
'egg': 'yes',
'spam': 'no'}
food
food['ham']
food['spam'] = 'yes'
food
len(food)
# Accessing non Existing keys
food['apple']
# Merge dictionary
food2 = {'beef': 'yes',
'shoe': 'no'}
food.update(food2)
food
# Iterating over Dictionary
for key in food:
print(key)
print(food[key])
print('\n')
Explanation: Dict
Dict is another very important data structure in Python
Compared to a list, it is an unordered collection of objects
Moreover, items are accessed via keys rather than by their position
End of explanation
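A few more built-in dictionary operations are worth knowing (a small optional sketch reusing the food dict defined above):
'ham' in food # membership test works on the keys
food.get('banana', 'unknown') # lookup with a default instead of a KeyError
list(food.keys()) # all keys
list(food.items()) # (key, value) pairs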
N = 10
if N%2 == 0:
print('Even')
else:
print('Odd')
N = 9
if N%2 == 0:
print('Even')
else:
print('Odd')
N = 9
if N%2 == 0 and N > 10:
print('A Even number greater than 10')
else:
print('smaller than 10 or odd number')
N = 0
while N <= 10:
if N%2 == 0:
print(N, 'is Even.')
else:
print(N, ' is Odd.')
N = N+1
Explanation: Conditional Statement
Conditional statements are used to execute code fragments based on a condition.
The most commonly used constructs are the if-else and while statements
End of explanation
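When there are more than two cases, an elif chain extends if-else (a brief optional sketch in plain Python):
N = -3
if N > 0:
    print('positive')
elif N < 0:
    print('negative')
else:
    print('zero')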
# basic function
def add(x, y):
return x+y
add(2, 3)
# function with optional parameters
def add(x, y=5):
return x + y
add(3)
add(3, 9)
# Keyword Paramters
def add(x, y=5):
return x + y
add(3, y=10)
def isEven(N):
if N%2 == 0:
return True
else:
return False
isEven(10)
isEven(9)
Explanation: Function
End of explanation
def productEven(l):
ret = 1
for item in l:
if item % 2 == 0:
ret = ret * item
return ret
productEven([2, 3, 8, 7])
Explanation: Exercises 1
Please construct a function named productEven which takes a list of integers as input and returns the product of the even numbers in the list.
For example:
productEven([2, 3, 8, 7]) gives 16
End of explanation
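An equivalent, more compact solution is sketched below (optional; math.prod requires Python 3.8 or newer):
import math
def productEvenCompact(l):
    # multiply only the even items; math.prod of an empty iterable is 1
    return math.prod(item for item in l if item % 2 == 0)
productEvenCompact([2, 3, 8, 7]) # 16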
result_l = []
for i in range(2000, 3201):
if i%7 == 0 and i%5 != 0:
result_l.append(i)
result_l
Explanation: Exercises 2
Write code to find all numbers that are divisible by 7 but are not a multiple of 5,
between 2000 and 3200 (both included).
End of explanation
l = [4, 2, 5, 6, 4]
count = 0
for item in l:
if item == 4:
count = count + 1
count
Explanation: Exercises 3
Write a Python program to count the occurrences of the number 4 in a given list.
End of explanation
def gcd(a, b):
    # The GCD cannot exceed the smaller number, so search downwards from min(a, b)
    for i in range(min(a, b), 0, -1):
        if a % i == 0 and b % i == 0:
            return i
gcd(4, 6)
Explanation: Exercises 4
Write a Python program to compute the greatest common divisor (GCD) of two positive integers.
End of explanation |
13,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cross-Correlation - SciPy signal.correlate2d library
Steps for a template-matching simulation whose goal is to find the location of the right eye of the model (Lena) in the target image
Step1: Find the match using cross-correlation
compute the mean of the target image array and subtract it from the target image
compute the mean of the template and subtract it from the template
use SciPy's signal.correlate2d function to compute the cross-correlation between the target image and the template, passing them as parameters
Step2: Plotting the results
Display the match with a red marker at the x and y coordinates computed above | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from scipy import signal
img = mpimg.imread('../figures/lena_greyscale.png')
arr = np.asarray(img)
template = np.copy(arr[240:290, 240:290])
#plt.figure(figsize=(15,10))
plt.subplot(1,2,1)
plt.title('Template - olhos de Lena')
plt.imshow(template, cmap='gray')
#plt.axis(off)
plt.figure(figsize=(15,10))
plt.subplot(1,2,2)
plt.title('Imagem alvo')
plt.imshow(arr, cmap='gray')
plt.show()
Explanation: Cross-Correlation - SciPy signal.correlate2d library
Steps for a template-matching simulation whose goal is to find the location of the right eye of the model (Lena) in the target image:
Template and target image:
Read the target image in grayscale
Copy a region of interest of the target image, which will be the template
Both the template and the target image must be converted to arrays
End of explanation
arr = arr - arr.mean()
template -= template.mean()
#arr = arr + np.random.randn(*arr.shape) * 100 # add noise
corr = signal.correlate2d(arr, template, boundary='fill', mode='same')
y, x = np.unravel_index(np.argmax(corr), corr.shape) # Find the match: converts a flat index into a tuple of coordinate arrays
print(np.argmax(corr))
print(corr.shape)
print(y,x)
#print(template)
#print(corr)
#plt.imshow(template, cmap='gray')
Explanation: Find the match using cross-correlation
compute the mean of the target image array and subtract it from the target image
compute the mean of the template and subtract it from the template
Use SciPy's signal.correlate2d function to compute the cross-correlation between the target image and the template, passing as parameters: the target image, the template, boundary='fill' (which pads the border with a constant fill value, zero by default), and mode='same' (which makes the output the same size as the target image, centered with respect to the 'full' output).
Find the coordinates of the match from the indices of the maximum values along the x and y axes.
End of explanation
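Plain cross-correlation tends to be biased toward bright image regions; if scikit-image happens to be installed, normalized cross-correlation is a more robust optional cross-check (a sketch, not part of the original pipeline):
from skimage.feature import match_template
ncc = match_template(arr, template, pad_input=True) # normalized cross-correlation, same size as arr
y_ncc, x_ncc = np.unravel_index(np.argmax(ncc), ncc.shape)
print(y_ncc, x_ncc)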
fig, (ax_orig, ax_template, ax_corr, ax_result) = plt.subplots(4, 1, figsize=(6, 15))
ax_orig.imshow(arr, cmap='gray')
ax_orig.set_title('Original')
#ax_orig.set_axis_off()
ax_template.imshow(template, cmap='gray')
ax_template.set_title('Template')
ax_template.set_axis_off()
ax_corr.imshow(corr, cmap='gray')
ax_corr.set_title('Cross-correlation')
ax_corr.set_axis_off()
ax_result.imshow(img, cmap='gray')
ax_result.set_title('Resultado da correspondência')
ax_result.set_axis_off()
ax_result.plot(x, y, 'ro')
fig.show()
%matplotlib inline
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D
h1,g2=np.histogram(arr,bins=255)
h2,g2=np.histogram(corr,bins=255)
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.plot(h1)
plt.subplot(1,2,2)
plt.plot(h2)
plt.show()
plt.axes(projection='3d')
import timeit
t = timeit.Timer('char in text', setup='text = "sample string"; char = "g"')
t.timeit()
%timeit 1+1
Explanation: Plotting the results
Display the match with a red marker at the x and y coordinates computed above
End of explanation |
13,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I would want to get rid of all rows missing any of the data before running the sampler.
Step1: Do the Geweke test to see whether the trace has converged. Burn-in looks to be around 2000.
Step2: Make predictions | Python Code:
train = train[train['longitude'] > 1]
train = train[train['latitude'] < 0]
train = train[train['construction_year'] != 0]
train = train[train['gps_height'] != 0]
features = ['longitude','latitude']
trainLoc = train[features]
#hasLocIdx = train['longitude']>1
#trainLoc = trainLoc[hasLocIdx] #remove rows with empty location data
#trainLoc.head()
#hasLocIdx.head()
trainID = train['id']
trainLabelsLoc = trainLabels[trainLabels['id'].isin(trainID)] #only keep labels corresponding to rows with non-empty location data
nNeighbors = 60
clf = KNeighborsClassifier(n_neighbors=nNeighbors,weights='distance',algorithm='auto')
clf.fit(trainLoc[features], trainLabelsLoc['status_group'])
#preds = clf.predict(test[features])
features=['construction_year', 'gps_height']
trainFeatures = train[features]
trainFeatures['age'] = 2015 - trainFeatures['construction_year']
trainFeatures = trainFeatures.drop('construction_year', 1)
#trainFeatures.head()
kNNFunctional = clf.predict_proba(trainFeatures)[:,0]
#TODO: get more meaningful data out of kNN algorithm?
x = np.vstack((trainFeatures['age'],trainFeatures['gps_height'],kNNFunctional))
trainLabels = trainLabels[trainLabels['id'].isin(trainID)] #get rid of Labels for which there is no corresponding data
numFeatures = 3
numBeta = numFeatures + 1 #1 more for the constant
#TODO: Convert labels into numbers
trainLabelsVect = pd.get_dummies(trainLabels['status_group'])
trainLabelsVect['functionality'] = trainLabelsVect['functional'] + 0.5*trainLabelsVect['functional needs repair']
#B, sig2= mcmc(np.zeros(numBeta), 0.2, x, trainLabelsVect['functionality'], 100, 0, 0.5, 0.5)
B, sig2= mcmc(np.zeros(numBeta), 0.2, x, trainLabelsVect['functionality'], 10000, 0, 0.1, 0.1)
plt.plot(B[:,0])
plt.plot(B[:,1])
plt.plot(B[:,2])
plt.plot(B[:,3])
print np.mean(B[:,1])
plt.plot(sig2)
Explanation: I would want to get rid of all rows missing any of the data before running the sampler.
End of explanation
# autocorrelation with lag t
def rhot(x, t):
n = len(x)
return np.corrcoef(x[0:(n-t)], x[t:n])[0,1]
# Geweke function
def Geweke(trace, intervals, length):
# take two parts of the chain.
# subsample lenght
nsl=length
jump = int(0.9*len(trace)/(2*intervals))
first = 0.1*len(trace)
z =np.empty(intervals)
for k in np.arange(0, intervals):
# beg of each sub samples
bega=first+k*jump
begb = len(trace)/2 + k*jump
sub_trace_a = trace[bega:bega+nsl]
sub_trace_b = trace[begb:begb+nsl]
theta_a = np.mean(sub_trace_a)
theta_b = np.mean(sub_trace_b)
rho_a, rho_b = 1.0, 1.0
# only compute autocorrelation at lag 1-0.1*nsl.
for i in xrange(int(0.1*nsl)):
rho_a += 2*rhot(sub_trace_a, i+1)
rho_b += 2*rhot(sub_trace_b, i+1)
# estimate the variance roughly
var_a = np.var(sub_trace_a)*rho_a/length
var_b = np.var(sub_trace_b)*rho_b/length
z[k] = (theta_a-theta_b)/np.sqrt( var_a + var_b)
return z
burnin = 2000
gewekeB0 = Geweke(B[burnin:,0], 10, 5000)
gewekeB1 = Geweke(B[burnin:,1], 10, 5000)
gewekeB2 = Geweke(B[burnin:,2], 10, 5000)
gewekeB3 = Geweke(B[burnin:,3], 10, 5000)
gewekesig2 = Geweke(sig2[3000:], 10, 300)
plt.figure(1)
plt.subplot(211)
plt.plot(gewekeB0)
plt.plot(gewekeB1)
plt.plot(gewekeB2)
plt.plot(gewekeB3)
plt.ylim((-2,2))
plt.subplot(212)
plt.plot(gewekesig2)
plt.ylim((-5,10))
plt.show()
Explanation: Do the Geweke test to see whether the trace has converged. Burn-in looks to be around 2000.
End of explanation
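As an additional, much rougher sanity check (an optional NumPy-only sketch that reuses the burn-in of about 2000 samples estimated above), compare the means of two well-separated chunks of the chain; it ignores autocorrelation, so treat it only as a quick screen:
chain = B[2000:, 1]
first, last = chain[:len(chain)//4], chain[-(len(chain)//4):]
z_rough = (first.mean() - last.mean()) / np.sqrt(first.var()/len(first) + last.var()/len(last))
print(z_rough) # values well inside (-2, 2) are consistent with convergence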
# show posterior mean and variance
def post_ana(res):
mean = res.mean(axis = 1)
sd = np.zeros(4)
sd[0] = np.std(res[:, 0])
sd[1] = np.std(res[:, 1])
sd[2] = np.std(res[:, 2])
sd[3] = np.std(res[:, 3])
print "The posterior mean and standard deviation for beta0 are "+str(mean[0])+" and "+str(sd[0])+"."
print "The posterior mean and standard deviation for beta are "+str(mean[1])+" and "+str(sd[1])+"."
print "The posterior mean and standard deviation for sigma2 are "+str(mean[2])+" and "+str(sd[2])+"."
# compute the predictive intervals at each x
def predict(res, x, y, m):
n = len(x)
print "n: ", n
print "x.shape: ", (x.shape)
x = x.reshape((n,1))
print "x.shape: ", (x.shape)
count = 0
#print "res shape: ",res[:10]
print "res.shape: ", (res.shape)
#res = res.reshape((m,5))
#print "res.shape: ", (res.shape)
#Result = np.zeros((n, res.shape[1]*m))
Result = np.zeros((n,(res.shape[1]*m)))
print "Result: ", Result.shape
#print "res[2,0]: ", res[3,0]
for i in xrange(res.shape[1]):
Result[:,(i*m):(i*m+m)] = np.random.normal(scale = 0.001, size=m*n).reshape((n,m)) + np.repeat(res[0,i]+res[1,i]*x,m, axis=1)
#Result[3,(i*m):(i*m+m)] = np.random.normal(scale = 0.001, size=m*n).reshape((n,m)) + np.repeat(res[0,i]+res[1,i]*x,m, axis=1)
bounds = np.zeros((n,2))
for i in xrange(n):
bounds[i,:] = np.percentile(Result[i,:], [2.5,97.5])
if y[i] < bounds[i,1] and y[i] > bounds[i,0]:
count += 1
print "There are "+str(count) +" yis out of "+str(n) +" that falls in the predictive interval."
return bounds
b = x[0,:]
c = np.vstack(b)
print c.shape
#(2,3)
print B[:10,2]
#def mcmc(b_init, sig2, x, y, N, burnin, be, sig2e):
res_gaussian_flat = mcmc(np.zeros(numBeta), 0.2, x, trainLabelsVect['functionality'], 10000, 0, 0.1, 0.1)
#bounds_gaussian_flat = predict(B[:,2], c, trainLabelsVect['functionality'], 26563)
#np.zeros(numBeta), 0.2, x, ,
#post_ana(res_gaussian_flat)
#get rid of extra columns just to see if I can make the data fit the same format as the original code from HW7 and just get it working
elim = [3,4]
np.delete(res_gaussian_flat, elim)
bounds_gaussian_flat = predict(res_gaussian_flat, x[0,:], trainLabelsVect['functionality'], 100)
#bounds_gaussian_flat = predict(res_gaussian_flat, x, y, m, "gaussian")
Explanation: Make predictions:
End of explanation |
13,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XGBoost Training on AI Platform
This notebook uses the Census Income Data Set to demonstrate how to train a model on AI Platform.
How to bring your model to AI Platform
Getting your model ready for training can be done in 3 steps
Step1: The data
The Census Income Data Set that this sample
uses for training is provided by the UC Irvine Machine Learning
Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/census/data/
Step2: Part 2
Step3: Part 3
Step4: Submit the training job.
Step5: [Optional] StackDriver Logging
You can view the logs for your training job | Python Code:
%env PROJECT_ID <YOUR_PROJECT_ID>
%env BUCKET_ID <YOUR_BUCKET_ID>
%env REGION <REGION>
%env TRAINER_PACKAGE_PATH ./census_training
%env MAIN_TRAINER_MODULE census_training.train
%env JOB_DIR <gs://YOUR_BUCKET_ID/xgb_job_dir>
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5
! mkdir census_training
Explanation: XGBoost Training on AI Platform
This notebook uses the Census Income Data Set to demonstrate how to train a model on AI Platform.
How to bring your model to AI Platform
Getting your model ready for training can be done in 3 steps:
1. Create your python model file
1. Add code to download your data from Google Cloud Storage so that AI Platform can use it
1. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model
1. Prepare a package
1. Submit the training job
Prerequisites
Before you jump in, let’s cover some of the different tools you’ll be using to get online prediction up and running on AI Platform.
Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.
AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.
Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.
Part 0: Setup
Create a project on GCP
Create a Google Cloud Storage Bucket
Enable AI Platform Training and Prediction and Compute Engine APIs
Install Cloud SDK
[Optional] Install XGBoost
[Optional] Install scikit-learn
[Optional] Install pandas
[Optional] Install Google API Python Client
These variables will be needed for the following steps.
* TRAINER_PACKAGE_PATH <./census_training> - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path.
* MAIN_TRAINER_MODULE <census_training.train> - Tells AI Platform which file to execute. This is formatted as follows <folder_name.python_file_name>
* JOB_DIR <gs://$BUCKET_ID/xgb_job_dir> - The path to a Google Cloud Storage location to use for job output.
* RUNTIME_VERSION <1.9> - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
* PYTHON_VERSION <3.5> - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
Replace:
* PROJECT_ID <YOUR_PROJECT_ID> - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.
* BUCKET_ID <YOUR_BUCKET_ID> - with the bucket id you created above.
* JOB_DIR <gs://YOUR_BUCKET_ID/xgb_job_dir> - with the bucket id you created above.
* REGION <REGION> - select a region from here or use the default 'us-central1'. The region is where the model will be deployed.
End of explanation
%%writefile ./census_training/train.py
# [START setup]
import datetime
import os
import subprocess
from sklearn.preprocessing import LabelEncoder
import pandas as pd
from google.cloud import storage
import xgboost as xgb
# TODO: REPLACE the value below with your own GCS BUCKET_ID
BUCKET_ID = 'torryyang-xgb-models'
# [END setup]
# ---------------------------------------
# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).
# AI Platform will then be able to use the data when training your model.
# ---------------------------------------
# [START download-data]
census_data_filename = 'adult.data.csv'
# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
data_dir = 'ml-engine/census/data/'
# Download the data
blob = bucket.blob(''.join([data_dir, census_data_filename]))
blob.download_to_filename(census_data_filename)
# [END download-data]
# ---------------------------------------
# This is where your model code would go. Below is an example model using the census dataset.
# ---------------------------------------
# [START define-and-load-data]
# these are the column labels from the census data files
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# categorical columns contain data that need to be turned into numerical values before being used by XGBoost
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open(census_data_filename, 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# remove column we are trying to predict ('income-level') from features list
train_features = raw_training_data.drop('income-level', axis=1)
# create training labels list
train_labels = (raw_training_data['income-level'] == ' >50K')
# [END define-and-load-data]
# [START categorical-feature-conversion]
# Since the census data set has categorical features, we need to convert
# them to numerical values.
# convert data in categorical columns to numerical values
encoders = {col:LabelEncoder() for col in CATEGORICAL_COLUMNS}
for col in CATEGORICAL_COLUMNS:
train_features[col] = encoders[col].fit_transform(train_features[col])
# [END categorical-feature-conversion]
# [START load-into-dmatrix-and-train]
# load data into DMatrix object
dtrain = xgb.DMatrix(train_features, train_labels)
# train model
bst = xgb.train({}, dtrain, 20)
# [END load-into-dmatrix-and-train]
# ---------------------------------------
# 2. Export and save the model to GCS
# ---------------------------------------
# [START export-to-gcs]
# Export the model to a file
model = 'model.bst'
bst.save_model(model)
# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_ID)
blob = bucket.blob('{}/{}'.format(
datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
model))
blob.upload_from_filename(model)
# [END export-to-gcs]
Explanation: The data
The Census Income Data Set that this sample
uses for training is provided by the UC Irvine Machine Learning
Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/census/data/.
Training file is adult.data.csv
Evaluation file is adult.test.csv (not used in this notebook)
Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.
Disclaimer
This dataset is provided by a third party. Google provides no representation,
warranty, or other guarantees about the validity or any other aspects of this dataset.
Part 1: Create your python model file
First, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a XGBoost model. However, there are two key differences:
1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data.
1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions.
The code in this file loads the data into a pandas DataFrame and pre-processes the data with scikit-learn. This data is then loaded into a DMatrix and used to train a model. Lastly, the model is saved to a file that can be uploaded to AI Platform's prediction service.
REPLACE the line BUCKET_ID = 'torryyang-xgb-models' near the top of the script with your own GCS BUCKET_ID
Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.
End of explanation
%%writefile ./census_training/__init__.py
# Note that __init__.py can be an empty file.
Explanation: Part 2: Create Trainer Package
Before you can run your trainer application with AI Platform, your code and any dependencies must be placed in a Google Cloud Storage location that your Google Cloud Platform project can access. You can find more details in the AI Platform documentation on packaging a training application.
End of explanation
! gcloud config set project $PROJECT_ID
Explanation: Part 3: Submit Training Job
Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags:
job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: census_training_$(date +"%Y%m%d_%H%M%S")
job-dir - The path to a Google Cloud Storage location to use for job output.
package-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated.
module-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name.
region - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'.
runtime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
python-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
scale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use.
Note: Check to make sure gcloud is set to the current PROJECT_ID
End of explanation
! gcloud ml-engine jobs submit training census_training_$(date +"%Y%m%d_%H%M%S") \
--job-dir $JOB_DIR \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--scale-tier BASIC
Explanation: Submit the training job.
End of explanation
! gsutil ls gs://$BUCKET_ID/census_*
Explanation: [Optional] StackDriver Logging
You can view the logs for your training job:
1. Go to https://console.cloud.google.com/
1. Select "Logging" in left-hand pane
1. Select "Cloud ML Job" resource from the drop-down
1. In filter by prefix, use the value of $JOB_NAME to view the logs
[Optional] Verify Model File in GCS
View the contents of the destination model folder to verify that model file has indeed been uploaded to GCS.
Note: The model can take a few minutes to train and show up in GCS.
End of explanation |
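Once the census_* folder shows up, the exported model can also be sanity-checked locally (an optional sketch; it assumes the google-cloud-storage and xgboost packages are installed in this notebook environment, and the model path below is a placeholder you would copy from the gsutil output above):
import os
import xgboost as xgb
from google.cloud import storage
model_path = 'census_YYYYMMDD_HHMMSS/model.bst' # placeholder: use the real folder name printed above
blob = storage.Client().bucket(os.environ['BUCKET_ID']).blob(model_path)
blob.download_to_filename('model.bst')
bst = xgb.Booster()
bst.load_model('model.bst')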
13,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 5. Interactive Data Analysis
This notebook introduces carrying out interactive data analysis of data in BigQuery using a Jupyter Notebook managed by Vertex AI Workbench.
This cell, for example, is a markdown cell, which is why you are seeing text. The cell that follows is a Python code cell. The output of that cell is whatever is printed out from it.
Step1: Relative path
I created this notebook in the 05_bqnotebook folder of the book's git repo, so you might see a path that ends in that. However, the path will start with /home/jupyter, which is mapped to a local folder if you are running this in a container.
Step2: What's installed?
Step3: Installing dependencies
Regular Python dependencies can be installed using pip
Step4: Jupyter magic
Step5: The %%bigquery cell magic will return SQL results in a Pandas DataFrame
Step7: Calls to BigQuery
We can also directly query BigQuery with the Python library
Step8: Let's draw a Probability Distribution Function (PDF) of different arrival delays. In a Notebook we can assign the output of a cell magic query to a variable, in this case df
Step9: Plotting distributions
Step10: Oddball values
Step11: Filtering Data on Occurence Frequency
Step12: Arrival delay conditioned on departure delay
Step13: Creating training/evaluation dataset | Python Code:
a = 3
b = a + 5
print("a={} b={}".format(a,b))
Explanation: Ch 5. Interactive Data Analysis
This notebook introduces carrying out interactive data analysis of data in BigQuery using a Jupyter Notebook managed by Vertex AI Workbench.
This cell, for example, is a markdown cell, which is why you are seeing text. The cell that follows is a Python code cell. The output of that cell is whatever is printed out from it.
End of explanation
!pwd
Explanation: Relative path
I created this notebook in the 05_bqnotebook folder of the book's git repo, so you might see a path that ends in that. However, the path will start with /home/jupyter, which is mapped to a local folder if you are running this in a container.
End of explanation
%pip freeze
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from google.cloud import bigquery
Explanation: What's installed?
End of explanation
%pip install pytz
Explanation: Installing dependencies
Regular Python dependencies can be installed using pip
End of explanation
%%html
This cell will print out a <b> HTML </b> string.
Explanation: Jupyter magic
End of explanation
%%bigquery
SELECT
COUNTIF(arr_delay >= 15)/COUNT(arr_delay) AS frac_delayed
FROM dsongcp.flights_tzcorr
Explanation: The %%bigquery cell magic will return SQL results in a Pandas DataFrame:
End of explanation
bq = bigquery.Client()
sql =
SELECT
COUNTIF(arr_delay >= 15)/COUNT(arr_delay) AS frac_delayed
FROM dsongcp.flights_tzcorr
bq.query(sql).to_dataframe()
Explanation: Calls to BigQuery
We can also directly query BigQuery with the Python library:
End of explanation
%%bigquery df
SELECT ARR_DELAY, DEP_DELAY
FROM dsongcp.flights_tzcorr
WHERE DEP_DELAY >= 10
type(df)
df.describe()
sns.set_style("whitegrid")
# sns.set(font_scale = 1.5)
ax = sns.violinplot(data=df, x='ARR_DELAY', inner='box', orient='h')
ax.axes.set_xlim(-50, 300);
Explanation: Let's draw a Probability Distribution Function (PDF) of different arrival delays. In a Notebook we can assign the output of a cell magic query to a variable, in this case df:
End of explanation
%%bigquery df
SELECT ARR_DELAY, DEP_DELAY
FROM dsongcp.flights_tzcorr
df.describe()
df['ontime'] = df['DEP_DELAY'] < 10
df[df['ARR_DELAY'] > 0].head()
sns.set_style("whitegrid")
ax = sns.violinplot(data=df, x='ARR_DELAY', y='ontime', inner='box', orient='h')
ax.set_xlim(-50, 200);
ax = sns.violinplot(data=df, x='ARR_DELAY', y='ontime',
inner='box', orient='h', gridsize=1000)
ax.set_xlim(-50, 200)
Explanation: Plotting distributions
End of explanation
%%bigquery depdelay
SELECT
DEP_DELAY,
AVG(ARR_DELAY) AS arrival_delay,
COUNT(ARR_DELAY) AS numflights
FROM
dsongcp.flights_tzcorr
GROUP BY
DEP_DELAY
ORDER BY
DEP_DELAY
len(depdelay)
depdelay[:5]
depdelay[55:60]
Explanation: Oddball values
End of explanation
%%bigquery df
DECLARE total_flights INT64;
SET total_flights = (
SELECT COUNT(*) FROM dsongcp.flights_tzcorr
);
CREATE TEMPORARY FUNCTION linear_fit(NUM_TOTAL INT64, THRESH INT64)
RETURNS STRUCT<thresh INT64, num_removed INT64, lm FLOAT64>
AS ((
SELECT AS STRUCT
THRESH,
(NUM_TOTAL - SUM(numflights)) AS num_removed,
ROUND(AVG(arrival_delay * numflights) / AVG(dep_delay * numflights), 2) AS lm
FROM
(
SELECT
DEP_DELAY,
AVG(ARR_DELAY) AS arrival_delay,
STDDEV(ARR_DELAY) AS stddev_arrival_delay,
COUNT(ARR_DELAY) AS numflights
FROM
dsongcp.flights_tzcorr
GROUP BY
DEP_DELAY
)
WHERE numflights > THRESH
))
;
SELECT linear_fit(total_flights, 1000) stats
UNION ALL SELECT linear_fit(total_flights, 500)
UNION ALL SELECT linear_fit(total_flights, 370)
UNION ALL SELECT linear_fit(total_flights, 300)
UNION ALL SELECT linear_fit(total_flights, 200)
UNION ALL SELECT linear_fit(total_flights, 100)
UNION ALL SELECT linear_fit(total_flights, 22)
UNION ALL SELECT linear_fit(total_flights, 10)
UNION ALL SELECT linear_fit(total_flights, 5)
ORDER BY stats.thresh DESC
df['stats'].map(lambda x: (x['thresh'], x['num_removed'], x['lm']))
Explanation: Filtering Data on Occurrence Frequency
End of explanation
%%bigquery depdelay
SELECT
DEP_DELAY,
AVG(ARR_DELAY) AS arrival_delay,
STDDEV(ARR_DELAY) AS stddev_arrival_delay,
COUNT(ARR_DELAY) AS numflights
FROM
dsongcp.flights_tzcorr
GROUP BY
DEP_DELAY
HAVING numflights > 370
ORDER BY DEP_DELAY
depdelay[:5]
ax = depdelay.plot(kind='line', x='DEP_DELAY',
y='arrival_delay', yerr='stddev_arrival_delay')
Z_30 = 0.52
depdelay['arr_delay_30'] = (Z_30 * depdelay['stddev_arrival_delay']) \
+ depdelay['arrival_delay']
ax = plt.axes()
depdelay.plot(kind='line', x='DEP_DELAY', y='arr_delay_30',
ax=ax, ylim=(0,30), xlim=(0,30), legend=False)
ax.set_xlabel('Departure Delay (minutes)')
ax.set_ylabel('> 30% prob of this\n Arrival Delay (minutes)');
x = np.arange(0, 30)
y = np.ones_like(x) * 15
ax.plot(x, y, 'r.');
y = np.arange(0, 30)
x = np.ones_like(y) * 13
ax.plot(x, y, 'g.');
%%bigquery depdelay
SELECT
DEP_DELAY,
APPROX_QUANTILES(ARR_DELAY, 101)[OFFSET(70)] AS arrival_delay,
COUNT(ARR_DELAY) AS numflights
FROM
dsongcp.flights_tzcorr
GROUP BY
DEP_DELAY
HAVING numflights > 370
ORDER BY DEP_DELAY
ax = plt.axes()
depdelay.plot(kind='line', x='DEP_DELAY', y='arrival_delay',
ax=ax, ylim=(0,30), xlim=(0,30), legend=False)
ax.set_xlabel('Departure Delay (minutes)')
ax.set_ylabel('> 30% prob of this\n Arrival Delay (minutes)');
x = np.arange(0, 30)
y = np.ones_like(x) * 15
ax.plot(x, y, 'r.');
y = np.arange(0, 30)
x = np.ones_like(y) * 16
ax.plot(x, y, 'g.');
Explanation: Arrival delay conditioned on departure delay
End of explanation
%%bigquery
SELECT
FL_DATE,
IF(ABS(MOD(FARM_FINGERPRINT(CAST(FL_DATE AS STRING)), 100)) < 70,
'True', 'False') AS is_train_day
FROM (
SELECT
DISTINCT(FL_DATE) AS FL_DATE
FROM
dsongcp.flights_tzcorr)
ORDER BY
FL_DATE
LIMIT 5
%%bigquery
CREATE OR REPLACE TABLE dsongcp.trainday AS
SELECT
FL_DATE,
IF(ABS(MOD(FARM_FINGERPRINT(CAST(FL_DATE AS STRING)), 100)) < 70,
'True', 'False') AS is_train_day
FROM (
SELECT
DISTINCT(FL_DATE) AS FL_DATE
FROM
dsongcp.flights_tzcorr)
ORDER BY
FL_DATE
%%bigquery depdelay
SELECT
DEP_DELAY,
APPROX_QUANTILES(ARR_DELAY, 101)[OFFSET(70)] AS arrival_delay,
COUNT(ARR_DELAY) AS numflights
FROM
dsongcp.flights_tzcorr
JOIN dsongcp.trainday USING(FL_DATE)
WHERE is_train_day = 'True'
GROUP BY
DEP_DELAY
HAVING numflights > 370
ORDER BY DEP_DELAY
ax = plt.axes()
depdelay.plot(kind='line', x='DEP_DELAY', y='arrival_delay',
ax=ax, ylim=(0,30), xlim=(0,30), legend=False)
ax.set_xlabel('Departure Delay (minutes)')
ax.set_ylabel('> 30% prob of this\n Arrival Delay (minutes)');
x = np.arange(0, 30)
y = np.ones_like(x) * 15
ax.plot(x, y, 'r.');
y = np.arange(0, 30)
x = np.ones_like(y) * 16
ax.plot(x, y, 'g.');
%%bigquery df_eval
SELECT
SUM(IF(DEP_DELAY < 16
AND arr_delay < 15, 1, 0)) AS correct_nocancel,
SUM(IF(DEP_DELAY < 16
AND arr_delay >= 15, 1, 0)) AS wrong_nocancel,
SUM(IF(DEP_DELAY >= 16
AND arr_delay < 15, 1, 0)) AS wrong_cancel,
SUM(IF(DEP_DELAY >= 16
AND arr_delay >= 15, 1, 0)) AS correct_cancel
FROM (
SELECT
DEP_DELAY,
ARR_DELAY
FROM
dsongcp.flights_tzcorr
JOIN dsongcp.trainday USING(FL_DATE)
WHERE is_train_day = 'False'
)
print(df_eval['correct_nocancel'] /
(df_eval['correct_nocancel'] + df_eval['wrong_nocancel']))
print(df_eval['correct_cancel'] /
(df_eval['correct_cancel'] + df_eval['wrong_cancel']))
df_eval.head()
%%bigquery df_eval
SELECT
SUM(IF(DEP_DELAY = 15
AND arr_delay < 15, 1, 0)) AS correct_nocancel,
SUM(IF(DEP_DELAY = 15
AND arr_delay >= 15, 1, 0)) AS wrong_nocancel,
SUM(IF(DEP_DELAY = 16
AND arr_delay < 15, 1, 0)) AS wrong_cancel,
SUM(IF(DEP_DELAY = 16
AND arr_delay >= 15, 1, 0)) AS correct_cancel
FROM (
SELECT
DEP_DELAY,
ARR_DELAY
FROM
dsongcp.flights_tzcorr
JOIN dsongcp.trainday USING(FL_DATE)
WHERE is_train_day = 'False'
)
df_eval.head()
print(df_eval['correct_nocancel'] / (df_eval['correct_nocancel'] + df_eval['wrong_nocancel']))
print(df_eval['correct_cancel'] / (df_eval['correct_cancel'] + df_eval['wrong_cancel']))
Explanation: Creating training/evaluation dataset
End of explanation |
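As a small optional follow-up (a sketch using only pandas arithmetic on the df_eval counts above), the four cells of the contingency table can also be summarized as the overall accuracy of the cancel/no-cancel decision rule:
total = (df_eval['correct_nocancel'] + df_eval['wrong_nocancel'] +
         df_eval['wrong_cancel'] + df_eval['correct_cancel'])
accuracy = (df_eval['correct_nocancel'] + df_eval['correct_cancel']) / total
print(accuracy)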
13,578 | Given the following text description, write Python code to implement the functionality described.
Description:
Cumulative product of digits of all numbers in the given range
Function to get product of digits ; Function to find the product of digits of all natural numbers in range L to R ; Iterate between L to R ; Driver Code
| Python Code:
def getProduct(n):
    # Product of the decimal digits of n
    product = 1
    while n != 0:
        product = product * (n % 10)
        n = n // 10
    return product
def productinRange(l, r):
    # Any range spanning 10 or more consecutive numbers contains a digit 0,
    # so the cumulative product is 0
    if r - l > 9:
        return 0
    p = 1
    for i in range(l, r + 1):
        p = p * getProduct(i)
    return p
l = 11
r = 15
print(productinRange(l, r), end=' ')
l = 1
r = 15
print(productinRange(l, r))
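On Python 3.8+ the digit product can also be written more compactly (an optional sketch; math.prod is not available on older interpreters):
import math
def getProductCompact(n):
    # same digit product, via the decimal string representation
    return math.prod(int(d) for d in str(n))
print(getProductCompact(15)) # 5, matching getProduct(15)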
|
13,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: tf.data
Step2: Basic mechanics
<a id="basic-mechanics"/>
To create an input pipeline, you must start with a data source. For example,
to construct a Dataset from data in memory, you can use
tf.data.Dataset.from_tensors() or tf.data.Dataset.from_tensor_slices().
Alternatively, if your input data is stored in a file in the recommended
TFRecord format, you can use tf.data.TFRecordDataset().
Once you have a Dataset object, you can transform it into a new Dataset by
chaining method calls on the tf.data.Dataset object. For example, you can
apply per-element transformations such as Dataset.map, and multi-element
transformations such as Dataset.batch. Refer to the documentation for
tf.data.Dataset for a complete list of transformations.
The Dataset object is a Python iterable. This makes it possible to consume its
elements using a for loop
Step3: Or by explicitly creating a Python iterator using iter and consuming its
elements using next
Step4: Alternatively, dataset elements can be consumed using the reduce
transformation, which reduces all elements to produce a single result. The
following example illustrates how to use the reduce transformation to compute
the sum of a dataset of integers.
Step5: <!-- TODO(jsimsa)
Step6: The Dataset transformations support datasets of any structure. When using the
Dataset.map, and Dataset.filter transformations,
which apply a function to each element, the element structure determines the
arguments of the function
Step7: Reading input data
Consuming NumPy arrays
Refer to the Loading NumPy arrays tutorial for more examples.
If all of your input data fits in memory, the simplest way to create a Dataset
from them is to convert them to tf.Tensor objects and use
Dataset.from_tensor_slices.
Step8: Note
Step9: The Dataset.from_generator constructor converts the python generator to a fully functional tf.data.Dataset.
The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional args argument, which is passed as the callable's arguments.
The output_types argument is required because tf.data builds a tf.Graph internally, and graph edges require a tf.dtype.
Step10: The output_shapes argument is not required but is highly recommended as many TensorFlow operations do not support tensors with an unknown rank. If the length of a particular axis is unknown or variable, set it as None in the output_shapes.
It's also important to note that the output_shapes and output_types follow the same nesting rules as other dataset methods.
Here is an example generator that demonstrates both aspects
Step11: The first output is an int32 the second is a float32.
The first item is a scalar, shape (), and the second is a vector of unknown length, shape (None,)
Step12: Now it can be used like a regular tf.data.Dataset. Note that when batching a dataset with a variable shape, you need to use Dataset.padded_batch.
Step13: For a more realistic example, try wrapping preprocessing.image.ImageDataGenerator as a tf.data.Dataset.
First download the data
Step14: Create the image.ImageDataGenerator
Step15: Consuming TFRecord data
Refer to the Loading TFRecords tutorial for an end-to-end example.
The tf.data API supports a variety of file formats so that you can process
large datasets that do not fit in memory. For example, the TFRecord file format
is a simple record-oriented binary format that many TensorFlow applications use
for training data. The tf.data.TFRecordDataset class enables you to
stream over the contents of one or more TFRecord files as part of an input
pipeline.
Here is an example using the test file from the French Street Name Signs (FSNS).
Step16: The filenames argument to the TFRecordDataset initializer can either be a
string, a list of strings, or a tf.Tensor of strings. Therefore if you have
two sets of files for training and validation purposes, you can create a factory
method that produces the dataset, taking filenames as an input argument
Step17: Many TensorFlow projects use serialized tf.train.Example records in their TFRecord files. These need to be decoded before they can be inspected
Step18: Consuming text data
Refer to the Load text tutorial for an end-to-end example.
Many datasets are distributed as one or more text files. The
tf.data.TextLineDataset provides an easy way to extract lines from one or more
text files. Given one or more filenames, a TextLineDataset will produce one
string-valued element per line of those files.
Step19: Here are the first few lines of the first file
Step20: To alternate lines between files use Dataset.interleave. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation
Step21: By default, a TextLineDataset yields every line of each file, which may
not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the Dataset.skip() or
Dataset.filter transformations. Here, you skip the first line, then filter to
find only survivors.
Step22: Consuming CSV data
Refer to the Loading CSV Files and Loading Pandas DataFrames tutorials for more examples.
The CSV file format is a popular format for storing tabular data in plain text.
For example
Step23: If your data fits in memory the same Dataset.from_tensor_slices method works on dictionaries, allowing this data to be easily imported
Step24: A more scalable approach is to load from disk as necessary.
The tf.data module provides methods to extract records from one or more CSV files that comply with RFC 4180.
The tf.data.experimental.make_csv_dataset function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.
Step25: You can use the select_columns argument if you only need a subset of columns.
Step26: There is also a lower-level experimental.CsvDataset class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column.
Step27: If some columns are empty, this low-level interface allows you to provide default values instead of column types.
Step28: By default, a CsvDataset yields every column of every line of the file,
which may not be desirable, for example if the file starts with a header line
that should be ignored, or if some columns are not required in the input.
These lines and fields can be removed with the header and select_cols
arguments respectively.
Step29: Consuming sets of files
There are many datasets distributed as a set of files, where each file is an example.
Step30: Note
Step31: The files in each class directory are examples
Step32: Read the data using the tf.io.read_file function and extract the label from the path, returning (image, label) pairs
Step33: <!--
TODO(mrry)
Step34: While tf.data tries to propagate shape information, the default settings of Dataset.batch result in an unknown batch size because the last batch may not be full. Note the Nones in the shape
Step35: Use the drop_remainder argument to ignore that last batch, and get full shape propagation
Step36: Batching tensors with padding
The above recipe works for tensors that all have the same size. However, many
models (including sequence models) work with input data that can have varying size
(for example, sequences of different lengths). To handle this case, the
Dataset.padded_batch transformation enables you to batch tensors of
different shapes by specifying one or more dimensions in which they may be
padded.
Step37: The Dataset.padded_batch transformation allows you to set different padding
for each dimension of each component, and it may be variable-length (signified
by None in the example above) or constant-length. It is also possible to
override the padding value, which defaults to 0.
<!--
TODO(mrry)
Step38: Applying the Dataset.repeat() transformation with no arguments will repeat
the input indefinitely.
The Dataset.repeat transformation concatenates its
arguments without signaling the end of one epoch and the beginning of the next
epoch. Because of this a Dataset.batch applied after Dataset.repeat will yield batches that straddle epoch boundaries
Step39: If you need clear epoch separation, put Dataset.batch before the repeat
Step40: If you would like to perform a custom computation (for example, to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch
Step41: Randomly shuffling input data
The Dataset.shuffle() transformation maintains a fixed-size
buffer and chooses the next element uniformly at random from that buffer.
Note
Step42: Since the buffer_size is 100, and the batch size is 20, the first batch contains no elements with an index over 120.
Step43: As with Dataset.batch the order relative to Dataset.repeat matters.
Dataset.shuffle doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next
Step44: But a repeat before a shuffle mixes the epoch boundaries together
Step45: Preprocessing data
The Dataset.map(f) transformation produces a new dataset by applying a given
function f to each element of the input dataset. It is based on the
map() function
that is commonly applied to lists (and other structures) in functional
programming languages. The function f takes the tf.Tensor objects that
represent a single element in the input, and returns the tf.Tensor objects
that will represent a single element in the new dataset. Its implementation uses
standard TensorFlow operations to transform one element into another.
This section covers common examples of how to use Dataset.map().
Decoding image data and resizing it
<!-- TODO(markdaoust)
Step46: Write a function that manipulates the dataset elements.
Step47: Test that it works.
Step48: Map it over the dataset.
Step49: Applying arbitrary Python logic
For performance reasons, use TensorFlow operations for
preprocessing your data whenever possible. However, it is sometimes useful to
call external Python libraries when parsing your input data. You can use the tf.py_function operation in a Dataset.map transformation.
For example, if you want to apply a random rotation, the tf.image module only has tf.image.rot90, which is not very useful for image augmentation.
Note
Step50: To use this function with Dataset.map the same caveats apply as with Dataset.from_generator, you need to describe the return shapes and types when you apply the function
Step51: Parsing tf.Example protocol buffer messages
Many input pipelines extract tf.train.Example protocol buffer messages from a
TFRecord format. Each tf.train.Example record contains one or more "features",
and the input pipeline typically converts these features into tensors.
Step52: You can work with tf.train.Example protos outside of a tf.data.Dataset to understand the data
Step53: <a id="time_series_windowing"></a>
Time series windowing
For an end-to-end time series example see
Step54: Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data
Step55: Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other
Step56: To predict a whole window instead of a fixed offset you can split the batches into two parts
Step57: To allow some overlap between the features of one batch and the labels of another, use Dataset.zip
Step58: Using window
While using Dataset.batch works, there are situations where you may need finer control. The Dataset.window method gives you complete control, but requires some care
Step59: The Dataset.flat_map method can take a dataset of datasets and flatten it into a single dataset
Step60: In nearly all cases, you will want to Dataset.batch the dataset first
Step61: Now, you can see that the shift argument controls how much each window moves over.
Putting this together you might write this function
Step62: Then it's easy to extract labels, as before
Step63: Resampling
When working with a dataset that is very class-imbalanced, you may want to resample the dataset. tf.data provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.
Note
Step64: Now, check the distribution of classes, it is highly skewed
Step65: A common approach to training with an imbalanced dataset is to balance it. tf.data includes a few methods which enable this workflow
Step66: To use tf.data.Dataset.sample_from_datasets pass the datasets, and the weight for each
Step67: Now the dataset produces examples of each class with a 50/50 probability
Step68: Rejection resampling
One problem with the above Dataset.sample_from_datasets approach is that
it needs a separate tf.data.Dataset per class. You could use Dataset.filter
to create those two datasets, but that results in all the data being loaded twice.
The tf.data.Dataset.rejection_resample method can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.
The rejection_resample method takes a class_func argument. This class_func is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.
The goal here is to balance the label distribution, and the elements of creditcard_ds are already (features, label) pairs. So the class_func just needs to return those labels
Step69: The resampling method deals with individual examples, so in this case you must unbatch the dataset before applying that method.
The method needs a target distribution, and optionally an initial distribution estimate as inputs.
Step70: The rejection_resample method returns (class, example) pairs where the class is the output of the class_func. In this case, the example was already a (feature, label) pair, so use map to drop the extra copy of the labels
Step71: Now the dataset produces examples of each class with a 50/50 probability
Step72: Iterator Checkpointing
Tensorflow supports taking checkpoints so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as Dataset.shuffle and Dataset.prefetch require buffering elements within the iterator.
To include your iterator in a checkpoint, pass the iterator to the tf.train.Checkpoint constructor.
Step73: Note
Step74: Passing a dataset of (feature, label) pairs is all that's needed for Model.fit and Model.evaluate
Step75: If you pass an infinite dataset, for example by calling Dataset.repeat, you just need to also pass the steps_per_epoch argument
Step76: For evaluation you can pass the number of evaluation steps
Step77: For long datasets, set the number of steps to evaluate
Step78: The labels are not required when calling Model.predict.
Step79: But the labels are ignored if you do pass a dataset containing them | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import tensorflow as tf
import pathlib
import os
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
np.set_printoptions(precision=4)
Explanation: tf.data: Build TensorFlow input pipelines
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The tf.data API enables you to build complex input pipelines from simple,
reusable pieces. For example, the pipeline for an image model might aggregate
data from files in a distributed file system, apply random perturbations to each
image, and merge randomly selected images into a batch for training. The
pipeline for a text model might involve extracting symbols from raw text data,
converting them to embedding identifiers with a lookup table, and batching
together sequences of different lengths. The tf.data API makes it possible to
handle large amounts of data, read from different data formats, and perform
complex transformations.
The tf.data API introduces a tf.data.Dataset abstraction that represents a
sequence of elements, in which each element consists of one or more components.
For example, in an image pipeline, an element might be a single training
example, with a pair of tensor components representing the image and its label.
There are two distinct ways to create a dataset:
A data source constructs a Dataset from data stored in memory or in
one or more files.
A data transformation constructs a dataset from one or more
tf.data.Dataset objects.
End of explanation
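To make the source-then-transformation pattern described above concrete, here is a tiny optional sketch (it relies only on the tf and np imports from the cell above) that chains a source with two transformations:
tiny = tf.data.Dataset.from_tensor_slices(np.arange(6))
tiny = tiny.map(lambda x: x * 2).batch(3)
for batch in tiny:
    print(batch.numpy())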
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
dataset
for elem in dataset:
print(elem.numpy())
Explanation: Basic mechanics
<a id="basic-mechanics"/>
To create an input pipeline, you must start with a data source. For example,
to construct a Dataset from data in memory, you can use
tf.data.Dataset.from_tensors() or tf.data.Dataset.from_tensor_slices().
Alternatively, if your input data is stored in a file in the recommended
TFRecord format, you can use tf.data.TFRecordDataset().
Once you have a Dataset object, you can transform it into a new Dataset by
chaining method calls on the tf.data.Dataset object. For example, you can
apply per-element transformations such as Dataset.map, and multi-element
transformations such as Dataset.batch. Refer to the documentation for
tf.data.Dataset for a complete list of transformations.
The Dataset object is a Python iterable. This makes it possible to consume its
elements using a for loop:
End of explanation
it = iter(dataset)
print(next(it).numpy())
Explanation: Or by explicitly creating a Python iterator using iter and consuming its
elements using next:
End of explanation
print(dataset.reduce(0, lambda state, value: state + value).numpy())
Explanation: Alternatively, dataset elements can be consumed using the reduce
transformation, which reduces all elements to produce a single result. The
following example illustrates how to use the reduce transformation to compute
the sum of a dataset of integers.
End of explanation
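As an extra illustrative sketch (not part of the original guide), reduce can also count elements by ignoring each value and incrementing the state:
print(dataset.reduce(0, lambda state, _: state + 1).numpy())  # 6 elements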
dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))
dataset1.element_spec
dataset2 = tf.data.Dataset.from_tensor_slices(
(tf.random.uniform([4]),
tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))
dataset2.element_spec
dataset3 = tf.data.Dataset.zip((dataset1, dataset2))
dataset3.element_spec
# Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))
dataset4.element_spec
# Use value_type to see the type of value represented by the element spec
dataset4.element_spec.value_type
Explanation: <!-- TODO(jsimsa): Talk about `tf.function` support. -->
<a id="dataset_structure"></a>
Dataset structure
A dataset produces a sequence of elements, where each element is
the same (nested) structure of components. Individual components
of the structure can be of any type representable by
tf.TypeSpec, including tf.Tensor, tf.sparse.SparseTensor,
tf.RaggedTensor, tf.TensorArray, or tf.data.Dataset.
The Python constructs that can be used to express the (nested)
structure of elements include tuple, dict, NamedTuple, and
OrderedDict. In particular, list is not a valid construct for
expressing the structure of dataset elements. This is because
early tf.data users felt strongly about list inputs (for example, when passed
to tf.data.Dataset.from_tensors) being automatically packed as
tensors and list outputs (for example, return values of user-defined
functions) being coerced into a tuple. As a consequence, if you
would like a list input to be treated as a structure, you need
to convert it into tuple and if you would like a list output
to be a single component, then you need to explicitly pack it
using tf.stack.
The Dataset.element_spec property allows you to inspect the type
of each element component. The property returns a nested structure
of tf.TypeSpec objects, matching the structure of the element,
which may be a single component, a tuple of components, or a nested
tuple of components. For example:
End of explanation
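A small assumed sketch (the variable name is just for illustration): elements can also be dictionaries, and element_spec mirrors that nested structure:
dataset_dict = tf.data.Dataset.from_tensor_slices(
    {"a": tf.random.uniform([4]),
     "b": tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)})
dataset_dict.element_spec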
dataset1 = tf.data.Dataset.from_tensor_slices(
tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))
dataset1
for z in dataset1:
print(z.numpy())
dataset2 = tf.data.Dataset.from_tensor_slices(
(tf.random.uniform([4]),
tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))
dataset2
dataset3 = tf.data.Dataset.zip((dataset1, dataset2))
dataset3
for a, (b,c) in dataset3:
print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))
Explanation: The Dataset transformations support datasets of any structure. When using the
Dataset.map and Dataset.filter transformations,
which apply a function to each element, the element structure determines the
arguments of the function:
End of explanation
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset
Explanation: Reading input data
Consuming NumPy arrays
Refer to the Loading NumPy arrays tutorial for more examples.
If all of your input data fits in memory, the simplest way to create a Dataset
from them is to convert them to tf.Tensor objects and use
Dataset.from_tensor_slices.
End of explanation
def count(stop):
i = 0
while i<stop:
yield i
i += 1
for n in count(5):
print(n)
Explanation: Note: The above code snippet will embed the features and labels arrays
in your TensorFlow graph as tf.constant() operations. This works well for a
small dataset, but wastes memory---because the contents of the array will be
copied multiple times---and can run into the 2GB limit for the tf.GraphDef
protocol buffer.
Consuming Python generators
Another common data source that can easily be ingested as a tf.data.Dataset is the python generator.
Caution: While this is a convenient approach it has limited portability and scalability. It must run in the same python process that created the generator, and is still subject to the Python GIL.
End of explanation
ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), )
for count_batch in ds_counter.repeat().batch(10).take(10):
print(count_batch.numpy())
Explanation: The Dataset.from_generator constructor converts the python generator to a fully functional tf.data.Dataset.
The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional args argument, which is passed as the callable's arguments.
The output_types argument is required because tf.data builds a tf.Graph internally, and graph edges require a tf.dtype.
End of explanation
def gen_series():
i = 0
while True:
size = np.random.randint(0, 10)
yield i, np.random.normal(size=(size,))
i += 1
for i, series in gen_series():
print(i, ":", str(series))
if i > 5:
break
Explanation: The output_shapes argument is not required but is highly recommended as many TensorFlow operations do not support tensors with an unknown rank. If the length of a particular axis is unknown or variable, set it as None in the output_shapes.
It's also important to note that the output_shapes and output_types follow the same nesting rules as other dataset methods.
Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
End of explanation
ds_series = tf.data.Dataset.from_generator(
gen_series,
output_types=(tf.int32, tf.float32),
output_shapes=((), (None,)))
ds_series
Explanation: The first output is an int32 the second is a float32.
The first item is a scalar, shape (), and the second is a vector of unknown length, shape (None,)
End of explanation
ds_series_batch = ds_series.shuffle(20).padded_batch(10)
ids, sequence_batch = next(iter(ds_series_batch))
print(ids.numpy())
print()
print(sequence_batch.numpy())
Explanation: Now it can be used like a regular tf.data.Dataset. Note that when batching a dataset with a variable shape, you need to use Dataset.padded_batch.
End of explanation
flowers = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
Explanation: For a more realistic example, try wrapping preprocessing.image.ImageDataGenerator as a tf.data.Dataset.
First download the data:
End of explanation
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)
images, labels = next(img_gen.flow_from_directory(flowers))
print(images.dtype, images.shape)
print(labels.dtype, labels.shape)
ds = tf.data.Dataset.from_generator(
lambda: img_gen.flow_from_directory(flowers),
output_types=(tf.float32, tf.float32),
output_shapes=([32,256,256,3], [32,5])
)
ds.element_spec
for images, labels in ds.take(1):
print('images.shape: ', images.shape)
print('labels.shape: ', labels.shape)
Explanation: Create the image.ImageDataGenerator
End of explanation
# Creates a dataset that reads all of the examples from the FSNS test file.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001")
Explanation: Consuming TFRecord data
Refer to the Loading TFRecords tutorial for an end-to-end example.
The tf.data API supports a variety of file formats so that you can process
large datasets that do not fit in memory. For example, the TFRecord file format
is a simple record-oriented binary format that many TensorFlow applications use
for training data. The tf.data.TFRecordDataset class enables you to
stream over the contents of one or more TFRecord files as part of an input
pipeline.
Here is an example using the test file from the French Street Name Signs (FSNS).
End of explanation
dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])
dataset
Explanation: The filenames argument to the TFRecordDataset initializer can either be a
string, a list of strings, or a tf.Tensor of strings. Therefore if you have
two sets of files for training and validation purposes, you can create a factory
method that produces the dataset, taking filenames as an input argument:
End of explanation
raw_example = next(iter(dataset))
parsed = tf.train.Example.FromString(raw_example.numpy())
parsed.features.feature['image/text']
Explanation: Many TensorFlow projects use serialized tf.train.Example records in their TFRecord files. These need to be decoded before they can be inspected:
End of explanation
directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
file_names = ['cowper.txt', 'derby.txt', 'butler.txt']
file_paths = [
tf.keras.utils.get_file(file_name, directory_url + file_name)
for file_name in file_names
]
dataset = tf.data.TextLineDataset(file_paths)
Explanation: Consuming text data
Refer to the Load text tutorial for an end-to-end example.
Many datasets are distributed as one or more text files. The
tf.data.TextLineDataset provides an easy way to extract lines from one or more
text files. Given one or more filenames, a TextLineDataset will produce one
string-valued element per line of those files.
End of explanation
for line in dataset.take(5):
print(line.numpy())
Explanation: Here are the first few lines of the first file:
End of explanation
files_ds = tf.data.Dataset.from_tensor_slices(file_paths)
lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)
for i, line in enumerate(lines_ds.take(9)):
if i % 3 == 0:
print()
print(line.numpy())
Explanation: To alternate lines between files use Dataset.interleave. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation:
End of explanation
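A further assumed example (the variable name below is illustrative): the block_length argument controls how many consecutive lines are taken from each file before moving on to the next:
blocked_lines_ds = files_ds.interleave(
    tf.data.TextLineDataset, cycle_length=3, block_length=2)
for i, line in enumerate(blocked_lines_ds.take(6)):
  if i % 2 == 0:
    print()
  print(line.numpy())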
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)
for line in titanic_lines.take(10):
print(line.numpy())
def survived(line):
return tf.not_equal(tf.strings.substr(line, 0, 1), "0")
survivors = titanic_lines.skip(1).filter(survived)
for line in survivors.take(10):
print(line.numpy())
Explanation: By default, a TextLineDataset yields every line of each file, which may
not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the Dataset.skip() or
Dataset.filter transformations. Here, you skip the first line, then filter to
find only survivors.
End of explanation
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
df = pd.read_csv(titanic_file)
df.head()
Explanation: Consuming CSV data
Refer to the Loading CSV Files and Loading Pandas DataFrames tutorials for more examples.
The CSV file format is a popular format for storing tabular data in plain text.
For example:
End of explanation
titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))
for feature_batch in titanic_slices.take(1):
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
Explanation: If your data fits in memory the same Dataset.from_tensor_slices method works on dictionaries, allowing this data to be easily imported:
End of explanation
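A hedged extension of the same idea (names below are illustrative): popping the label column from the DataFrame yields (features, label) pairs:
titanic_features = df.copy()
titanic_labels = titanic_features.pop('survived')
titanic_pairs = tf.data.Dataset.from_tensor_slices(
    (dict(titanic_features), titanic_labels))
for features, label in titanic_pairs.take(1):
  print(label.numpy())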
titanic_batches = tf.data.experimental.make_csv_dataset(
titanic_file, batch_size=4,
label_name="survived")
for feature_batch, label_batch in titanic_batches.take(1):
print("'survived': {}".format(label_batch))
print("features:")
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
Explanation: A more scalable approach is to load from disk as necessary.
The tf.data module provides methods to extract records from one or more CSV files that comply with RFC 4180.
The tf.data.experimental.make_csv_dataset function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.
End of explanation
titanic_batches = tf.data.experimental.make_csv_dataset(
titanic_file, batch_size=4,
label_name="survived", select_columns=['class', 'fare', 'survived'])
for feature_batch, label_batch in titanic_batches.take(1):
print("'survived': {}".format(label_batch))
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
Explanation: You can use the select_columns argument if you only need a subset of columns.
End of explanation
titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]
dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True)
for line in dataset.take(10):
print([item.numpy() for item in line])
Explanation: There is also a lower-level experimental.CsvDataset class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column.
End of explanation
%%writefile missing.csv
1,2,3,4
,2,3,4
1,,3,4
1,2,,4
1,2,3,
,,,
# Creates a dataset that reads all of the records from the CSV file above, which
# has four integer columns that may have missing values; 999 is used as the default.
record_defaults = [999,999,999,999]
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults)
dataset = dataset.map(lambda *items: tf.stack(items))
dataset
for line in dataset:
print(line.numpy())
Explanation: If some columns are empty, this low-level interface allows you to provide default values instead of column types.
End of explanation
# Creates a dataset that reads all of the records from the CSV file above,
# extracting only the second and fourth columns (indices 1 and 3).
record_defaults = [999, 999] # Only provide defaults for the selected columns
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3])
dataset = dataset.map(lambda *items: tf.stack(items))
dataset
for line in dataset:
print(line.numpy())
Explanation: By default, a CsvDataset yields every column of every line of the file,
which may not be desirable, for example if the file starts with a header line
that should be ignored, or if some columns are not required in the input.
These lines and fields can be removed with the header and select_cols
arguments respectively.
End of explanation
flowers_root = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
flowers_root = pathlib.Path(flowers_root)
Explanation: Consuming sets of files
There are many datasets distributed as a set of files, where each file is an example.
End of explanation
for item in flowers_root.glob("*"):
print(item.name)
Explanation: Note: these images are licensed CC-BY, see LICENSE.txt for details.
The root directory contains a directory for each class:
End of explanation
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
for f in list_ds.take(5):
print(f.numpy())
Explanation: The files in each class directory are examples:
End of explanation
def process_path(file_path):
label = tf.strings.split(file_path, os.sep)[-2]
return tf.io.read_file(file_path), label
labeled_ds = list_ds.map(process_path)
for image_raw, label_text in labeled_ds.take(1):
print(repr(image_raw.numpy()[:100]))
print()
print(label_text.numpy())
Explanation: Read the data using the tf.io.read_file function and extract the label from the path, returning (image, label) pairs:
End of explanation
inc_dataset = tf.data.Dataset.range(100)
dec_dataset = tf.data.Dataset.range(0, -100, -1)
dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset))
batched_dataset = dataset.batch(4)
for batch in batched_dataset.take(4):
print([arr.numpy() for arr in batch])
Explanation: <!--
TODO(mrry): Add this section.
### Handling text data with unusual sizes
-->
Batching dataset elements
Simple batching
The simplest form of batching stacks n consecutive elements of a dataset into
a single element. The Dataset.batch() transformation does exactly this, with
the same constraints as the tf.stack() operator, applied to each component
of the elements: i.e. for each component i, all elements must have a tensor
of the exact same shape.
End of explanation
batched_dataset
Explanation: While tf.data tries to propagate shape information, the default settings of Dataset.batch result in an unknown batch size because the last batch may not be full. Note the Nones in the shape:
End of explanation
batched_dataset = dataset.batch(7, drop_remainder=True)
batched_dataset
Explanation: Use the drop_remainder argument to ignore that last batch, and get full shape propagation:
End of explanation
dataset = tf.data.Dataset.range(100)
dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))
dataset = dataset.padded_batch(4, padded_shapes=(None,))
for batch in dataset.take(2):
print(batch.numpy())
print()
Explanation: Batching tensors with padding
The above recipe works for tensors that all have the same size. However, many
models (including sequence models) work with input data that can have varying size
(for example, sequences of different lengths). To handle this case, the
Dataset.padded_batch transformation enables you to batch tensors of
different shapes by specifying one or more dimensions in which they may be
padded.
End of explanation
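A further assumed sketch: the fill value used for padding (0 by default) can be overridden with the padding_values argument:
padded_ds = (tf.data.Dataset.range(6)
             .map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))
             .padded_batch(3, padded_shapes=(None,),
                           padding_values=tf.constant(-1, dtype=tf.int64)))
for batch in padded_ds.take(1):
  print(batch.numpy())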
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)
def plot_batch_sizes(ds):
batch_sizes = [batch.shape[0] for batch in ds]
plt.bar(range(len(batch_sizes)), batch_sizes)
plt.xlabel('Batch number')
plt.ylabel('Batch size')
Explanation: The Dataset.padded_batch transformation allows you to set different padding
for each dimension of each component, and it may be variable-length (signified
by None in the example above) or constant-length. It is also possible to
override the padding value, which defaults to 0.
<!--
TODO(mrry): Add this section.
### Dense ragged -> tf.SparseTensor
-->
Training workflows
Processing multiple epochs
The tf.data API offers two main ways to process multiple epochs of the same
data.
The simplest way to iterate over a dataset in multiple epochs is to use the
Dataset.repeat() transformation. First, create a dataset of titanic data:
End of explanation
titanic_batches = titanic_lines.repeat(3).batch(128)
plot_batch_sizes(titanic_batches)
Explanation: Applying the Dataset.repeat() transformation with no arguments will repeat
the input indefinitely.
The Dataset.repeat transformation concatenates its
arguments without signaling the end of one epoch and the beginning of the next
epoch. Because of this a Dataset.batch applied after Dataset.repeat will yield batches that straddle epoch boundaries:
End of explanation
titanic_batches = titanic_lines.batch(128).repeat(3)
plot_batch_sizes(titanic_batches)
Explanation: If you need clear epoch separation, put Dataset.batch before the repeat:
End of explanation
epochs = 3
dataset = titanic_lines.batch(128)
for epoch in range(epochs):
for batch in dataset:
print(batch.shape)
print("End of epoch: ", epoch)
Explanation: If you would like to perform a custom computation (for example, to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch:
End of explanation
lines = tf.data.TextLineDataset(titanic_file)
counter = tf.data.experimental.Counter()
dataset = tf.data.Dataset.zip((counter, lines))
dataset = dataset.shuffle(buffer_size=100)
dataset = dataset.batch(20)
dataset
Explanation: Randomly shuffling input data
The Dataset.shuffle() transformation maintains a fixed-size
buffer and chooses the next element uniformly at random from that buffer.
Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using Dataset.interleave across files if this becomes a problem.
Add an index to the dataset so you can see the effect:
End of explanation
n,line_batch = next(iter(dataset))
print(n.numpy())
Explanation: Since the buffer_size is 100, and the batch size is 20, the first batch contains no elements with an index over 120.
End of explanation
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)
print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(60).take(5):
print(n.numpy())
shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]
plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.ylabel("Mean item ID")
plt.legend()
Explanation: As with Dataset.batch the order relative to Dataset.repeat matters.
Dataset.shuffle doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next:
End of explanation
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)
print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(55).take(15):
print(n.numpy())
repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]
plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.plot(repeat_shuffle, label="repeat().shuffle()")
plt.ylabel("Mean item ID")
plt.legend()
Explanation: But a repeat before a shuffle mixes the epoch boundaries together:
End of explanation
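An assumed aside: shuffle reshuffles on each iteration by default; passing reshuffle_each_iteration=False (with a seed) reuses the same order when the dataset is repeated:
fixed_order = (tf.data.Dataset.range(5)
               .shuffle(5, seed=42, reshuffle_each_iteration=False)
               .repeat(2))
print([n.numpy() for n in fixed_order])  # the same 5-element order, twice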
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
Explanation: Preprocessing data
The Dataset.map(f) transformation produces a new dataset by applying a given
function f to each element of the input dataset. It is based on the
map() function
that is commonly applied to lists (and other structures) in functional
programming languages. The function f takes the tf.Tensor objects that
represent a single element in the input, and returns the tf.Tensor objects
that will represent a single element in the new dataset. Its implementation uses
standard TensorFlow operations to transform one element into another.
This section covers common examples of how to use Dataset.map().
Decoding image data and resizing it
<!-- TODO(markdaoust): link to image augmentation when it exists -->
When training a neural network on real-world image data, it is often necessary
to convert images of different sizes to a common size, so that they may be
batched into a fixed size.
Rebuild the flower filenames dataset:
End of explanation
# Reads an image from a file, decodes it into a dense tensor, and resizes it
# to a fixed shape.
def parse_image(filename):
parts = tf.strings.split(filename, os.sep)
label = parts[-2]
image = tf.io.read_file(filename)
image = tf.io.decode_jpeg(image)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, [128, 128])
return image, label
Explanation: Write a function that manipulates the dataset elements.
End of explanation
file_path = next(iter(list_ds))
image, label = parse_image(file_path)
def show(image, label):
plt.figure()
plt.imshow(image)
plt.title(label.numpy().decode('utf-8'))
plt.axis('off')
show(image, label)
Explanation: Test that it works.
End of explanation
images_ds = list_ds.map(parse_image)
for image, label in images_ds.take(2):
show(image, label)
Explanation: Map it over the dataset.
End of explanation
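An assumed performance aside: Dataset.map can apply the mapping function to several elements in parallel; tf.data.AUTOTUNE lets the runtime choose the degree of parallelism:
images_ds_parallel = list_ds.map(parse_image, num_parallel_calls=tf.data.AUTOTUNE)
images_ds_parallel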
import scipy.ndimage as ndimage
def random_rotate_image(image):
image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False)
return image
image, label = next(iter(images_ds))
image = random_rotate_image(image)
show(image, label)
Explanation: Applying arbitrary Python logic
For performance reasons, use TensorFlow operations for
preprocessing your data whenever possible. However, it is sometimes useful to
call external Python libraries when parsing your input data. You can use the tf.py_function operation in a Dataset.map transformation.
For example, if you want to apply a random rotation, the tf.image module only has tf.image.rot90, which is not very useful for image augmentation.
Note: tensorflow_addons has a TensorFlow compatible rotate in tensorflow_addons.image.rotate.
To demonstrate tf.py_function, try using the scipy.ndimage.rotate function instead:
End of explanation
def tf_random_rotate_image(image, label):
im_shape = image.shape
[image,] = tf.py_function(random_rotate_image, [image], [tf.float32])
image.set_shape(im_shape)
return image, label
rot_ds = images_ds.map(tf_random_rotate_image)
for image, label in rot_ds.take(2):
show(image, label)
Explanation: To use this function with Dataset.map the same caveats apply as with Dataset.from_generator, you need to describe the return shapes and types when you apply the function:
End of explanation
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001")
dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])
dataset
Explanation: Parsing tf.Example protocol buffer messages
Many input pipelines extract tf.train.Example protocol buffer messages from a
TFRecord format. Each tf.train.Example record contains one or more "features",
and the input pipeline typically converts these features into tensors.
End of explanation
raw_example = next(iter(dataset))
parsed = tf.train.Example.FromString(raw_example.numpy())
feature = parsed.features.feature
raw_img = feature['image/encoded'].bytes_list.value[0]
img = tf.image.decode_png(raw_img)
plt.imshow(img)
plt.axis('off')
_ = plt.title(feature["image/text"].bytes_list.value[0])
raw_example = next(iter(dataset))
def tf_parse(eg):
example = tf.io.parse_example(
eg[tf.newaxis], {
'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string),
'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)
})
return example['image/encoded'][0], example['image/text'][0]
img, txt = tf_parse(raw_example)
print(txt.numpy())
print(repr(img.numpy()[:20]), "...")
decoded = dataset.map(tf_parse)
decoded
image_batch, text_batch = next(iter(decoded.batch(10)))
image_batch.shape
Explanation: You can work with tf.train.Example protos outside of a tf.data.Dataset to understand the data:
End of explanation
range_ds = tf.data.Dataset.range(100000)
Explanation: <a id="time_series_windowing"></a>
Time series windowing
For an end-to-end time series example see: Time series forecasting.
Time series data is often organized with the time axis intact.
Use a simple Dataset.range to demonstrate:
End of explanation
batches = range_ds.batch(10, drop_remainder=True)
for batch in batches.take(5):
print(batch.numpy())
Explanation: Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data:
Using batch
End of explanation
def dense_1_step(batch):
# Shift features and labels one step relative to each other.
return batch[:-1], batch[1:]
predict_dense_1_step = batches.map(dense_1_step)
for features, label in predict_dense_1_step.take(3):
print(features.numpy(), " => ", label.numpy())
Explanation: Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other:
End of explanation
batches = range_ds.batch(15, drop_remainder=True)
def label_next_5_steps(batch):
return (batch[:-5], # Inputs: All except the last 5 steps
batch[-5:]) # Labels: The last 5 steps
predict_5_steps = batches.map(label_next_5_steps)
for features, label in predict_5_steps.take(3):
print(features.numpy(), " => ", label.numpy())
Explanation: To predict a whole window instead of a fixed offset you can split the batches into two parts:
End of explanation
feature_length = 10
label_length = 3
features = range_ds.batch(feature_length, drop_remainder=True)
labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length])
predicted_steps = tf.data.Dataset.zip((features, labels))
for features, label in predicted_steps.take(5):
print(features.numpy(), " => ", label.numpy())
Explanation: To allow some overlap between the features of one batch and the labels of another, use Dataset.zip:
End of explanation
window_size = 5
windows = range_ds.window(window_size, shift=1)
for sub_ds in windows.take(5):
print(sub_ds)
Explanation: Using window
While using Dataset.batch works, there are situations where you may need finer control. The Dataset.window method gives you complete control, but requires some care: it returns a Dataset of Datasets. Go to the Dataset structure section for details.
End of explanation
for x in windows.flat_map(lambda x: x).take(30):
print(x.numpy(), end=' ')
Explanation: The Dataset.flat_map method can take a dataset of datasets and flatten it into a single dataset:
End of explanation
def sub_to_batch(sub):
return sub.batch(window_size, drop_remainder=True)
for example in windows.flat_map(sub_to_batch).take(5):
print(example.numpy())
Explanation: In nearly all cases, you will want to Dataset.batch the dataset first:
End of explanation
def make_window_dataset(ds, window_size=5, shift=1, stride=1):
windows = ds.window(window_size, shift=shift, stride=stride)
def sub_to_batch(sub):
return sub.batch(window_size, drop_remainder=True)
windows = windows.flat_map(sub_to_batch)
return windows
ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3)
for example in ds.take(10):
print(example.numpy())
Explanation: Now, you can see that the shift argument controls how much each window moves over.
Putting this together you might write this function:
End of explanation
dense_labels_ds = ds.map(dense_1_step)
for inputs,labels in dense_labels_ds.take(3):
print(inputs.numpy(), "=>", labels.numpy())
Explanation: Then it's easy to extract labels, as before:
End of explanation
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip',
fname='creditcard.zip',
extract=True)
csv_path = zip_path.replace('.zip', '.csv')
creditcard_ds = tf.data.experimental.make_csv_dataset(
csv_path, batch_size=1024, label_name="Class",
# Set the column types: 30 floats and an int.
column_defaults=[float()]*30+[int()])
Explanation: Resampling
When working with a dataset that is very class-imbalanced, you may want to resample the dataset. tf.data provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.
Note: Go to Classification on imbalanced data for a full tutorial.
End of explanation
def count(counts, batch):
features, labels = batch
class_1 = labels == 1
class_1 = tf.cast(class_1, tf.int32)
class_0 = labels == 0
class_0 = tf.cast(class_0, tf.int32)
counts['class_0'] += tf.reduce_sum(class_0)
counts['class_1'] += tf.reduce_sum(class_1)
return counts
counts = creditcard_ds.take(10).reduce(
initial_state={'class_0': 0, 'class_1': 0},
reduce_func = count)
counts = np.array([counts['class_0'].numpy(),
counts['class_1'].numpy()]).astype(np.float32)
fractions = counts/counts.sum()
print(fractions)
Explanation: Now, check the distribution of classes, it is highly skewed:
End of explanation
negative_ds = (
creditcard_ds
.unbatch()
.filter(lambda features, label: label==0)
.repeat())
positive_ds = (
creditcard_ds
.unbatch()
.filter(lambda features, label: label==1)
.repeat())
for features, label in positive_ds.batch(10).take(1):
print(label.numpy())
Explanation: A common approach to training with an imbalanced dataset is to balance it. tf.data includes a few methods which enable this workflow:
Datasets sampling
One approach to resampling a dataset is to use sample_from_datasets. This is more applicable when you have a separate tf.data.Dataset for each class.
Here, just use filter to generate them from the credit card fraud data:
End of explanation
balanced_ds = tf.data.Dataset.sample_from_datasets(
[negative_ds, positive_ds], [0.5, 0.5]).batch(10)
Explanation: To use tf.data.Dataset.sample_from_datasets pass the datasets, and the weight for each:
End of explanation
for features, labels in balanced_ds.take(10):
print(labels.numpy())
Explanation: Now the dataset produces examples of each class with a 50/50 probability:
End of explanation
def class_func(features, label):
return label
Explanation: Rejection resampling
One problem with the above Dataset.sample_from_datasets approach is that
it needs a separate tf.data.Dataset per class. You could use Dataset.filter
to create those two datasets, but that results in all the data being loaded twice.
The tf.data.Dataset.rejection_resample method can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.
The rejection_resample method takes a class_func argument. This class_func is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.
The goal here is to balance the label distribution, and the elements of creditcard_ds are already (features, label) pairs. So the class_func just needs to return those labels:
End of explanation
resample_ds = (
creditcard_ds
.unbatch()
.rejection_resample(class_func, target_dist=[0.5,0.5],
initial_dist=fractions)
.batch(10))
Explanation: The resampling method deals with individual examples, so in this case you must unbatch the dataset before applying that method.
The method needs a target distribution, and optionally an initial distribution estimate as inputs.
End of explanation
balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)
Explanation: The rejection_resample method returns (class, example) pairs where the class is the output of the class_func. In this case, the example was already a (feature, label) pair, so use map to drop the extra copy of the labels:
End of explanation
for features, labels in balanced_ds.take(10):
print(labels.numpy())
Explanation: Now the dataset produces examples of each class with a 50/50 probability:
End of explanation
range_ds = tf.data.Dataset.range(20)
iterator = iter(range_ds)
ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3)
print([next(iterator).numpy() for _ in range(5)])
save_path = manager.save()
print([next(iterator).numpy() for _ in range(5)])
ckpt.restore(manager.latest_checkpoint)
print([next(iterator).numpy() for _ in range(5)])
Explanation: Iterator Checkpointing
TensorFlow supports taking checkpoints so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as Dataset.shuffle and Dataset.prefetch require buffering elements within the iterator.
To include your iterator in a checkpoint, pass the iterator to the tf.train.Checkpoint constructor.
End of explanation
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Note: It is not possible to checkpoint an iterator which relies on an external state, such as a tf.py_function. Attempting to do so will raise an exception complaining about the external state.
Using tf.data with tf.keras
The tf.keras API simplifies many aspects of creating and executing machine
learning models. Its Model.fit and Model.evaluate and Model.predict APIs support datasets as inputs. Here is a quick dataset and model setup:
End of explanation
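One assumed refinement of this setup (the variable name is illustrative): pipelines fed to Keras are commonly ended with Dataset.prefetch so that data preparation overlaps with training; the resulting dataset could be passed to the fit and evaluate calls below:
prefetched_train_ds = fmnist_train_ds.prefetch(tf.data.AUTOTUNE)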
model.fit(fmnist_train_ds, epochs=2)
Explanation: Passing a dataset of (feature, label) pairs is all that's needed for Model.fit and Model.evaluate:
End of explanation
model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)
Explanation: If you pass an infinite dataset, for example by calling Dataset.repeat, you just need to also pass the steps_per_epoch argument:
End of explanation
loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss)
print("Accuracy :", accuracy)
Explanation: For evaluation you can pass the number of evaluation steps:
End of explanation
loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)
print("Loss :", loss)
print("Accuracy :", accuracy)
Explanation: For long datasets, set the number of steps to evaluate:
End of explanation
predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)
result = model.predict(predict_ds, steps = 10)
print(result.shape)
Explanation: The labels are not required when calling Model.predict.
End of explanation
result = model.predict(fmnist_train_ds, steps = 10)
print(result.shape)
Explanation: But the labels are ignored if you do pass a dataset containing them:
End of explanation |
13,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Approximate inference for STS models with non-Gaussian observations
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Synthetic Data
First we'll generate some synthetic count data
Step3: Model
We'll specify a simple model with a randomly walking linear trend
Step4: Instead of operating on the observed time series, this model will operate on the series of Poisson rate parameters that govern the observations.
Since Poisson rates must be positive, we'll use a bijector to transform the
real-valued STS model into a distribution over positive values. The Softplus
transformation $y = \log(1 + \exp(x))$ is a natural choice, since it is nearly linear for positive values, but other choices such as Exp (which transforms the normal random walk into a lognormal random walk) are also possible.
Step5: To use approximate inference for a non-Gaussian observation model,
we'll encode the STS model as a TFP JointDistribution. The random variables in this joint distribution are the parameters of the STS model, the time series of latent Poisson rates, and the observed counts.
Step6: Preparation for inference
We want to infer the unobserved quantities in the model, given the observed counts. First, we condition the joint log density on the observed counts.
Step7: We'll also need a constraining bijector to ensure that inference respects the constraints on the STS model's parameters (for example, scales must be positive).
Step8: Inference with HMC
We'll use HMC (specifically, NUTS) to sample from the joint posterior over model parameters and latent rates.
This will be significantly slower than fitting a standard STS model with HMC, since in addition to the model's (relatively small number of) parameters we also have to infer the entire series of Poisson rates. So we'll run for a relatively small number of steps; for applications where inference quality is critical it might make sense to increase these values or to run multiple chains.
Step9: First we specify a sampler, and then use sample_chain to run that sampling
kernel to produce samples.
Step10: We can sanity-check the inference by examining the parameter traces. In this case they appear to have explored multiple explanations for the data, which is good, although more samples would be helpful to judge how well the chain is mixing.
Step11: Now for the payoff
Step13: Forecasting
To forecast the observed counts, we'll use the standard STS tools to build a forecast distribution over the latent rates (in unconstrained space, again since STS is designed to model real-valued data), then pass the sampled forecasts through a Poisson observation model
Step14: VI inference
Variational inference can be problematic when inferring a full time series, like our approximate counts (as opposed to just
the parameters of a time series, as in standard STS models). The standard assumption that variables have independent posteriors is quite wrong, since each timestep is correlated with its neighbors, which can lead to underestimating uncertainty. For this reason, HMC may be a better choice for approximate inference over full time series. However, VI can be quite a bit faster, and may be useful for model prototyping or in cases where its performance can be empirically shown to be 'good enough'.
To fit our model with VI, we simply build and optimize a surrogate posterior | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import time
import matplotlib.pyplot as plt
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability import bijectors as tfb
from tensorflow_probability import distributions as tfd
tf.enable_v2_behavior()
Explanation: Approximate inference for STS models with non-Gaussian observations
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/STS_approximate_inference_for_models_with_non_Gaussian_observations"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This notebook demonstrates the use of TFP approximate inference tools to incorporate a (non-Gaussian) observation model when fitting and forecasting with structural time series (STS) models. In this example, we'll use a Poisson observation model to work with discrete count data.
End of explanation
num_timesteps = 30
observed_counts = np.round(3 + np.random.lognormal(np.log(np.linspace(
num_timesteps, 5, num=num_timesteps)), 0.20, size=num_timesteps))
observed_counts = observed_counts.astype(np.float32)
plt.plot(observed_counts)
Explanation: Synthetic Data
First we'll generate some synthetic count data:
End of explanation
def build_model(approximate_unconstrained_rates):
trend = tfp.sts.LocalLinearTrend(
observed_time_series=approximate_unconstrained_rates)
return tfp.sts.Sum([trend],
observed_time_series=approximate_unconstrained_rates)
Explanation: Model
We'll specify a simple model with a randomly walking linear trend:
End of explanation
positive_bijector = tfb.Softplus() # Or tfb.Exp()
# Approximate the unconstrained Poisson rate just to set heuristic priors.
# We could avoid this by passing explicit priors on all model params.
approximate_unconstrained_rates = positive_bijector.inverse(
tf.convert_to_tensor(observed_counts) + 0.01)
sts_model = build_model(approximate_unconstrained_rates)
Explanation: Instead of operating on the observed time series, this model will operate on the series of Poisson rate parameters that govern the observations.
Since Poisson rates must be positive, we'll use a bijector to transform the
real-valued STS model into a distribution over positive values. The Softplus
transformation $y = \log(1 + \exp(x))$ is a natural choice, since it is nearly linear for positive values, but other choices such as Exp (which transforms the normal random walk into a lognormal random walk) are also possible.
End of explanation
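A quick assumed sanity check that the bijector maps unconstrained reals to positive rates and inverts cleanly:
x = tf.constant([-1., 0., 2.])
rates = positive_bijector.forward(x)
print(rates.numpy())                             # all positive
print(positive_bijector.inverse(rates).numpy())  # recovers x up to float error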
def sts_with_poisson_likelihood_model():
# Encode the parameters of the STS model as random variables.
param_vals = []
for param in sts_model.parameters:
param_val = yield param.prior
param_vals.append(param_val)
# Use the STS model to encode the log- (or inverse-softplus)
# rate of a Poisson.
unconstrained_rate = yield sts_model.make_state_space_model(
num_timesteps, param_vals)
rate = positive_bijector.forward(unconstrained_rate[..., 0])
observed_counts = yield tfd.Poisson(rate, name='observed_counts')
model = tfd.JointDistributionCoroutineAutoBatched(sts_with_poisson_likelihood_model)
Explanation: To use approximate inference for a non-Gaussian observation model,
we'll encode the STS model as a TFP JointDistribution. The random variables in this joint distribution are the parameters of the STS model, the time series of latent Poisson rates, and the observed counts.
End of explanation
pinned_model = model.experimental_pin(observed_counts=observed_counts)
Explanation: Preparation for inference
We want to infer the unobserved quantities in the model, given the observed counts. First, we condition the joint log density on the observed counts.
End of explanation
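As an assumed sanity check, the pinned model can propose a sample of the remaining unobserved variables and score it under the unnormalized joint density:
test_point = pinned_model.sample_unpinned()
print(pinned_model.unnormalized_log_prob(test_point))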
constraining_bijector = pinned_model.experimental_default_event_space_bijector()
Explanation: We'll also need a constraining bijector to ensure that inference respects the constraints on the STS model's parameters (for example, scales must be positive).
End of explanation
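For intuition (an assumed sketch), the bijector can report the unconstrained event shapes it expects for each model variable:
print(pinned_model.event_shape)
print(constraining_bijector.inverse_event_shape(pinned_model.event_shape))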
#@title Sampler configuration
# Allow external control of sampling to reduce test runtimes.
num_results = 500 # @param { isTemplate: true}
num_results = int(num_results)
num_burnin_steps = 100 # @param { isTemplate: true}
num_burnin_steps = int(num_burnin_steps)
Explanation: Inference with HMC
We'll use HMC (specifically, NUTS) to sample from the joint posterior over model parameters and latent rates.
This will be significantly slower than fitting a standard STS model with HMC, since in addition to the model's (relatively small number of) parameters we also have to infer the entire series of Poisson rates. So we'll run for a relatively small number of steps; for applications where inference quality is critical it might make sense to increase these values or to run multiple chains.
End of explanation
sampler = tfp.mcmc.TransformedTransitionKernel(
tfp.mcmc.NoUTurnSampler(
target_log_prob_fn=pinned_model.unnormalized_log_prob,
step_size=0.1),
bijector=constraining_bijector)
adaptive_sampler = tfp.mcmc.DualAveragingStepSizeAdaptation(
inner_kernel=sampler,
num_adaptation_steps=int(0.8 * num_burnin_steps),
target_accept_prob=0.75)
initial_state = constraining_bijector.forward(
type(pinned_model.event_shape)(
*(tf.random.normal(part_shape)
for part_shape in constraining_bijector.inverse_event_shape(
pinned_model.event_shape))))
# Speed up sampling by tracing with `tf.function`.
@tf.function(autograph=False, jit_compile=True)
def do_sampling():
return tfp.mcmc.sample_chain(
kernel=adaptive_sampler,
current_state=initial_state,
num_results=num_results,
num_burnin_steps=num_burnin_steps,
trace_fn=None)
t0 = time.time()
samples = do_sampling()
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
Explanation: First we specify a sampler, and then use sample_chain to run that sampling
kernel to produce samples.
End of explanation
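An additional assumed diagnostic: the effective sample size of each parameter's chain gives a rough sense of how well it mixed:
for param, param_draws in zip(sts_model.parameters, samples[:-1]):
  ess = tfp.mcmc.effective_sample_size(param_draws)
  print(param.name, ess.numpy())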
f = plt.figure(figsize=(12, 4))
for i, param in enumerate(sts_model.parameters):
ax = f.add_subplot(1, len(sts_model.parameters), i + 1)
ax.plot(samples[i])
ax.set_title("{} samples".format(param.name))
Explanation: We can sanity-check the inference by examining the parameter traces. In this case they appear to have explored multiple explanations for the data, which is good, although more samples would be helpful to judge how well the chain is mixing.
End of explanation
param_samples = samples[:-1]
unconstrained_rate_samples = samples[-1][..., 0]
rate_samples = positive_bijector.forward(unconstrained_rate_samples)
plt.figure(figsize=(10, 4))
mean_lower, mean_upper = np.percentile(rate_samples, [10, 90], axis=0)
pred_lower, pred_upper = np.percentile(np.random.poisson(rate_samples),
[10, 90], axis=0)
_ = plt.plot(observed_counts, color="blue", ls='--', marker='o', label='observed', alpha=0.7)
_ = plt.plot(np.mean(rate_samples, axis=0), label='rate', color="green", ls='dashed', lw=2, alpha=0.7)
_ = plt.fill_between(np.arange(0, 30), mean_lower, mean_upper, color='green', alpha=0.2)
_ = plt.fill_between(np.arange(0, 30), pred_lower, pred_upper, color='grey', label='counts', alpha=0.2)
plt.xlabel("Day")
plt.ylabel("Daily Sample Size")
plt.title("Posterior Mean")
plt.legend()
Explanation: Now for the payoff: let's see the posterior over Poisson rates! We'll also plot the 80% predictive interval over observed counts, and can check that this interval appears to contain about 80% of the counts we actually observed.
End of explanation
def sample_forecasted_counts(sts_model, posterior_latent_rates,
posterior_params, num_steps_forecast,
num_sampled_forecasts):
# Forecast the future latent unconstrained rates, given the inferred latent
# unconstrained rates and parameters.
unconstrained_rates_forecast_dist = tfp.sts.forecast(sts_model,
observed_time_series=unconstrained_rate_samples,
parameter_samples=posterior_params,
num_steps_forecast=num_steps_forecast)
# Transform the forecast to positive-valued Poisson rates.
rates_forecast_dist = tfd.TransformedDistribution(
unconstrained_rates_forecast_dist,
positive_bijector)
# Sample from the forecast model following the chain rule:
# P(counts) = P(counts | latent_rates)P(latent_rates)
sampled_latent_rates = rates_forecast_dist.sample(num_sampled_forecasts)
sampled_forecast_counts = tfd.Poisson(rate=sampled_latent_rates).sample()
return sampled_forecast_counts, sampled_latent_rates
forecast_samples, rate_samples = sample_forecasted_counts(
sts_model,
posterior_latent_rates=unconstrained_rate_samples,
posterior_params=param_samples,
# Days to forecast:
num_steps_forecast=30,
num_sampled_forecasts=100)
forecast_samples = np.squeeze(forecast_samples)
def plot_forecast_helper(data, forecast_samples, CI=90):
Plot the observed time series alongside the forecast.
plt.figure(figsize=(10, 4))
forecast_median = np.median(forecast_samples, axis=0)
num_steps = len(data)
num_steps_forecast = forecast_median.shape[-1]
plt.plot(np.arange(num_steps), data, lw=2, color='blue', linestyle='--', marker='o',
label='Observed Data', alpha=0.7)
forecast_steps = np.arange(num_steps, num_steps+num_steps_forecast)
CI_interval = [(100 - CI)/2, 100 - (100 - CI)/2]
lower, upper = np.percentile(forecast_samples, CI_interval, axis=0)
plt.plot(forecast_steps, forecast_median, lw=2, ls='--', marker='o', color='orange',
label=str(CI) + '% Forecast Interval', alpha=0.7)
plt.fill_between(forecast_steps,
lower,
upper, color='orange', alpha=0.2)
plt.xlim([0, num_steps+num_steps_forecast])
ymin, ymax = min(np.min(forecast_samples), np.min(data)), max(np.max(forecast_samples), np.max(data))
yrange = ymax-ymin
plt.title("{}".format('Observed time series with ' + str(num_steps_forecast) + ' Day Forecast'))
plt.xlabel('Day')
plt.ylabel('Daily Sample Size')
plt.legend()
plot_forecast_helper(observed_counts, forecast_samples, CI=80)
Explanation: Forecasting
To forecast the observed counts, we'll use the standard STS tools to build a forecast distribution over the latent rates (in unconstrained space, again since STS is designed to model real-valued data), then pass the sampled forecasts through a Poisson observation model:
End of explanation
surrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior(
event_shape=pinned_model.event_shape,
bijector=constraining_bijector)
# Allow external control of optimization to reduce test runtimes.
num_variational_steps = 1000 # @param { isTemplate: true}
num_variational_steps = int(num_variational_steps)
t0 = time.time()
losses = tfp.vi.fit_surrogate_posterior(pinned_model.unnormalized_log_prob,
surrogate_posterior,
optimizer=tf.optimizers.Adam(0.1),
num_steps=num_variational_steps)
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
plt.plot(losses)
plt.title("Variational loss")
_ = plt.xlabel("Steps")
posterior_samples = surrogate_posterior.sample(50)
param_samples = posterior_samples[:-1]
unconstrained_rate_samples = posterior_samples[-1][..., 0]
rate_samples = positive_bijector.forward(unconstrained_rate_samples)
plt.figure(figsize=(10, 4))
mean_lower, mean_upper = np.percentile(rate_samples, [10, 90], axis=0)
pred_lower, pred_upper = np.percentile(
np.random.poisson(rate_samples), [10, 90], axis=0)
_ = plt.plot(observed_counts, color='blue', ls='--', marker='o',
label='observed', alpha=0.7)
_ = plt.plot(np.mean(rate_samples, axis=0), label='rate', color='green',
ls='dashed', lw=2, alpha=0.7)
_ = plt.fill_between(
np.arange(0, 30), mean_lower, mean_upper, color='green', alpha=0.2)
_ = plt.fill_between(np.arange(0, 30), pred_lower, pred_upper, color='grey',
label='counts', alpha=0.2)
plt.xlabel('Day')
plt.ylabel('Daily Sample Size')
plt.title('Posterior Mean')
plt.legend()
forecast_samples, rate_samples = sample_forecasted_counts(
sts_model,
posterior_latent_rates=unconstrained_rate_samples,
posterior_params=param_samples,
# Days to forecast:
num_steps_forecast=30,
num_sampled_forecasts=100)
forecast_samples = np.squeeze(forecast_samples)
plot_forecast_helper(observed_counts, forecast_samples, CI=80)
Explanation: VI inference
Variational inference can be problematic when inferring a full time series, like our approximate counts (as opposed to just
the parameters of a time series, as in standard STS models). The standard assumption that variables have independent posteriors is quite wrong, since each timestep is correlated with its neighbors, which can lead to underestimating uncertainty. For this reason, HMC may be a better choice for approximate inference over full time series. However, VI can be quite a bit faster, and may be useful for model prototyping or in cases where its performance can be empirically shown to be 'good enough'.
To fit our model with VI, we simply build and optimize a surrogate posterior:
End of explanation |
13,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Painting OM10 Lens Systems with Synthetic Colors
Jenny Kim, Michael Baumer, Phil Marshall
The OM10 mock lensed quasar catalog qso_mock.fits contains estimates of the lens galaxy $i$-band magnitudes, based on a simple Faber-Jackson scaling. With the lenspop library we can compute synthetic magnitudes in any filter. In this notebook we will look at the differences in the distributions of the 1) redshift, 2) $i$-band magnitude, 3) $g-r$ magnitude, 4) $r-i$ magnitude, and 5) $i-z$ magnitude when
Step1: We first have to load the om10 catalog and select 200 random lenses.
Then, using the paint method, we color those lenses synthetically.
Step2: 1 Comparing the Lens Galaxy Synthetic Photometry with SDSS LRGs
We will overlap the colored lenses with the SDSS LRGs on the cornerplot. The SDSS LRG data is saved in ../data/SDSS_LRGs.txt. The file has the following columns
Step3: Before putting all of the real and mock lenses directly onto the cornerplot, we need to weight the mocked lenses with respect to the parent(LRG) population. We approximate the redshifts of the parent population to be Gaussian distributed with the population mean and standard deviation.
The weights were calculated using the following formula
Step4: Then, we draw the cornerplot with the synthetically colored OM10 lenses and save to the argument fig1. Then, we will overlap this fig with the original data. Ideally, the distributions of the data should look similar, and the blue contours and the red contours should have significant overlap.
Step5: Discussion
The synthetic magnitude comparisons between OM10 lens galaxies and similarly massive galaxies found as SDSS LRGs show that the lenspop synthetic lens galaxy colors are accurate.
Specifically, the mean $i$-band magnitude of the 200 selected lenses was 19.25, with a standard deviation of 1.50. The SDSS LRGs have a mean $i$-band magnitude of 18.26, with a standard deviation of 0.49.
Using the central limit theorem, justified by the fact that there are more than 30 data points, we can assume that both the OM10 lens and LRG magnitude distributions are approximately normal. From this assumption, we can calculate the sigma-level difference.
The difference between the two mean magnitudes is 0.99, and the sum of the squared standard deviations is 2.49. Thus, the combined standard deviation is $2.49^{0.5} = 1.57$. Since $\frac{0.99}{1.57} = 0.63$, the two distributions agree at the 0.63 sigma level.
2 Comparing the Lens Galaxy Synthetic Photometry with SL2S Lens Galaxies
In two papers, Sonnenfeld et al (2014) and Sonnenfeld et al (2015) provide photometric measurements of 56 SL2S galaxy-scale lenses that were taken with the Canada-France-Hawaii Telescope (CFHT). We can compare the distributions of magnitude, redshift and color between the OM10 lens galaxies (as painted above) and the SL2S lens galaxies.
CFHT and SDSS magnitudes differ by only $\sim 0.05$ mag; for the sake of simplicity, we ignore these small differences in this notebook.
We scraped the lens galaxy photometry and redshifts from the above papers into three datafiles. All the files contain LensName, which we used to match the data of the same lens systems in different datafiles. The second cell below merges all the information needed.
Step6: We only have around 35 real lenses to use to weight the mock lenses. While this is not a large sample, we can still plot the synthetically colored lenses after reweighting the mock lenses to match the real lens redshift distribution. Ideally, the distributions of the data should be similar, and the blue contours and the red contours should have significant overlap.
Step7: Discussion
The synthetic magnitude comparisons between OM10 lens galaxies and similarly massive galaxies found in CFHT data show that the lenspop synthetic lens galaxy colors are accurate.
Specifically, the mean $i$-band magnitude of the 200 selected lenses was 19.25, with a standard deviation of 1.50. The CFHT data have a mean $i$-band magnitude of 19.11, with a standard deviation of 0.87.
Using the central limit theorem, justified by the fact that there are more than 30 data points, we can assume that both the OM10 lens and CFHT magnitude distributions are approximately normal. From this assumption, we can calculate the sigma-level difference.
The difference between the two mean magnitudes is 0.14, and the sum of the squared standard deviations (variances) is 3.00. Thus, the combined standard deviation is $3.00^{0.5} = 1.73$. Since $\frac{0.14}{1.73} = 0.08$, the two distributions agree at the 0.08 sigma level.
3 Comparing the Lensed Quasar Synthetic Photometry with SDSS Quasars
The last comparison that we make is with the lensed sources. This file contains magnitudes and redshifts for 10,000 SDSS quasars. We will compare the synthetically colored OM10 lensed quasars with this data. The process is the same as above.
Step8: The synthetic quasar colors in OM10 show less scatter than the real data, so we can add noise by doing the following | Python Code:
import matplotlib.pyplot as plt
import seaborn as sns
import os, matplotlib, numpy as np
import om10, corner
from om10 import plotting
from __future__ import division, print_function
from astropy.table import Table
from astropy.io import ascii
import pandas as pd
sns.set()
%load_ext autoreload
%autoreload 2
%matplotlib inline
Explanation: Painting OM10 Lens Systems with Synthetic Colors
Jenny Kim, Michael Baumer, Phil Marshall
The OM10 mock lensed quasar catalog qso_mock.fits contains estimates of the lens galaxy $i$-band magnitudes, based on a simple Faber-Jackson scaling. With the lenspop library we can compute synthetic magnitudes in any filter. In this notebook we will look at the differences in the distributions of the 1) redshift, 2) $i$-band magnitude, 3) $g-r$ magnitude, 4) $r-i$ magnitude, and 5) $i-z$ magnitude when:
comparing the synthetic magnitudes of the colored OM10 lens galaxies with those of SDSS LRGs
comparing the synthetic magnitudes of colored OM10 lens galaxies with those of 56 candidate galaxy-scale lenses that were imaged as part of the Canada-France-Hawaii Telescope (CFHT) Legacy Survey
comparing the synthetic magnitudes of colored OM10 lensed quasars with those of known SDSS Quasars
Requirements
om10, qso_mock.fits, and om10's dependencies:
End of explanation
db = om10.DB()
db.select_random(Nlens=200)
db.paint(synthetic=True)
Explanation: We first have to load the om10 catalog and select 200 random lenses.
Then, using the paint method, we color those lenses synthetically.
End of explanation
table = np.loadtxt('../data/SDSS_LRGs.txt')
zReal = table[:,2]
iReal = table[:,6]
grReal = table[:,4]-table[:,5]
riReal = table[:,5]-table[:,6]
izReal = table[:,6]-table[:,7]
Explanation: 1 Comparing the Lens Galaxy Synthetic Photometry with SDSS LRGs
We will overlap the colored lenses with the SDSS LRGs on the cornerplot. The SDSS LRG data is saved in ../data/SDSS_LRGs.txt. The file has the following columns: ra dec z mag_u mag_g mag_r mag_i mag_z.
End of explanation
db.gaussian_reweight(np.mean(zReal), np.std(zReal))
# calculating the color index for g-r, r-i, and i-z
gr = db.sample['g_SDSS_lens'] - db.sample['r_SDSS_lens']
ri = db.sample['r_SDSS_lens'] - db.sample['i_SDSS_lens']
iz = db.sample['i_SDSS_lens'] - db.sample['z_SDSS_lens']
Explanation: Before putting all of the real and mock lenses directly onto the cornerplot, we need to weight the mocked lenses with respect to the parent(LRG) population. We approximate the redshifts of the parent population to be Gaussian distributed with the population mean and standard deviation.
The weights were calculated using the following formula:
$ w_k = \frac { P(z_k) }{ Q(z_k) } $
where $P(z_k)$ is the value of the histogram of redshifts in the parent distribution at the $k^{th}$ redshift, and $Q(z_k)$ is the value of the histogram of the mock, colored lenses' redshifts at the $k^{th}$ redshift. These weights were used in a rejection-sampling step, to make the distribution more like its Gaussian parent distribution. This reweighting algorithm is implemented in the method db.gaussian_reweight().
End of explanation
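The db.gaussian_reweight() call above encapsulates this; purely as a rough sketch of the idea (not the actual om10 implementation), the weights could be computed by approximating $Q(z)$ with a histogram of the mock redshifts and $P(z)$ with the Gaussian parent distribution:
def sketch_gaussian_reweight(z_mock, z_mean, z_std, nbins=20):
    # Illustrative only: histogram estimate of Q(z) for the mock redshifts
    hist, edges = np.histogram(z_mock, bins=nbins, density=True)
    idx = np.clip(np.digitize(z_mock, edges) - 1, 0, nbins - 1)
    Q = hist[idx]
    # Gaussian parent distribution P(z)
    P = np.exp(-0.5 * ((z_mock - z_mean) / z_std) ** 2) / (z_std * np.sqrt(2.0 * np.pi))
    w = np.zeros_like(P)
    w[Q > 0] = P[Q > 0] / Q[Q > 0]
    return w / w.max()   # normalized weights, suitable for a rejection-sampling step
A rejection-sampling pass would then keep each mock lens with probability proportional to its weight, which is the behavior described above.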
matplotlib.rc('text', usetex=False)
# save the synthetically colored OM10 lenses' cornerplot to fig1
names = ['i_SDSS_lens', 'ZLENS', 'gr', 'ri', 'iz']
om10data = Table({'i_SDSS_lens':db.sample['i_SDSS_lens'], 'ZLENS':db.sample['ZLENS'], 'gr':gr, 'ri':ri, 'iz':iz}, names=names)
om10features, labels = om10.extract_features(om10data, names)
fig1 = corner.corner(om10features, labels=labels, color='blue', weights=db.sample['weight'], smooth=1.3, hist_kwargs=dict(normed=True, alpha=1))
# overlay fig1 with the real distribution
realdata = Table({'i_SDSS_lens': iReal, 'ZLENS': zReal, 'gr': grReal, 'ri': riReal, 'iz': izReal}, names=names)
realfeatures, labels = plotting.extract_features(realdata, names)
corner.corner(realfeatures, labels=labels, color='red', smooth=1.0, fig=fig1, hist_kwargs=dict(normed=True))
Explanation: Then, we draw the cornerplot with the synthetically colored OM10 lenses and save to the argument fig1. Then, we will overlap this fig with the original data. Ideally, the distributions of the data should look similar, and the blue contours and the red contours should have significant overlap.
End of explanation
III_a = np.genfromtxt('../data/SonnenfeldTable1.txt', dtype=str, usecols = (0, 4, 5, 6, 7, 8), invalid_raise=False, missing_values='xxx', usemask=False)
III_b = np.genfromtxt('../data/SonnenfeldTable2.txt', dtype=str, usecols = (0, 1), invalid_raise=False, missing_values='xxx')
IV = np.genfromtxt('../data/SonnenfeldTable3.txt', dtype=str, usecols = (0, 11), invalid_raise=False, missing_values='xxx')
name = np.array([])
redshiftReal = np.array([])
iReal = np.array([])
grReal = np.array([])
riReal = np.array([])
izReal = np.array([])
for (lensName, aIndex) in zip(III_a[:,0], range(len(III_a))):
if lensName in III_b[:,0]:
bIndex = np.argwhere(III_b[:,0]==lensName)[0][0]
if lensName in IV[:,0]:
VIndex = np.argwhere(IV[:,0]==lensName)[0][0]
# the redshift entry is sometimes missing or malformed, so only keep systems whose redshift parses cleanly
if(IV[VIndex][1].isdigit()):
name = np.append(name, lensName)
redshiftReal = np.append(redshiftReal, float(III_b[bIndex][1]))
iReal = np.append(iReal, float(III_a[aIndex][4]))
grReal = np.append(grReal, float(III_a[aIndex][2]) - float(III_a[aIndex][3]))
riReal = np.append(riReal, float(III_a[aIndex][3]) - float(III_a[aIndex][4]))
izReal = np.append(izReal, float(III_a[aIndex][4]) - float(III_a[aIndex][5]))
#convert every numpy array to list - if not, OM10.plot_sample throws an error
redshift = np.array(redshiftReal.tolist())
iReal = np.array(iReal.tolist())
grReal = np.array(grReal.tolist())
riReal = np.array(riReal.tolist())
izReal = np.array(izReal.tolist())
from astropy.table import Table
data = Table({'MAGI': iReal, 'ZLENS': redshiftReal, 'GR': grReal, 'RI': riReal, 'IZ': izReal}, names=['MAGI', 'ZLENS', 'GR', 'RI', 'IZ'])
Explanation: Discussion
The synthetic magnitude comparisons between OM10 lens galaxies and similarly massive galaxies found as SDSS LRGs show that the lenspop synthetic lens galaxy colors are accurate.
Specifically, the mean $i$-band magnitude of the 200 selected lenses was 19.25, with a standard deviation of 1.50. The SDSS LRGs have a mean $i$-band magnitude of 18.26, with a standard deviation of 0.49.
Using the central limit theorem, justified by the fact that there are more than 30 data points, we can assume that both the OM10 lens and LRG magnitude distributions are approximately normal. From this assumption, we can calculate the sigma-level difference.
The difference between the two mean magnitudes is 0.99, and the sum of the squared standard deviations is 2.49. Thus, the combined standard deviation is $2.49^{0.5} = 1.57$. Since $\frac{0.99}{1.57} = 0.63$, the two distributions agree at the 0.63 sigma level.
2 Comparing the Lens Galaxy Synthetic Photometry with SL2S Lens Galaxies
In two papers, Sonnenfeld et al (2014) and Sonnenfeld et al (2015) provide photometric measurements of 56 SL2S galaxy-scale lenses that were taken with the Canada-France-Hawaii Telescope (CFHT). We can compare the distributions of magnitude, redshift and color between the OM10 lens galaxies (as painted above) and the SL2S lens galaxies.
CFHT and SDSS magnitudes differ by only $\sim 0.05$ mag; for the sake of simplicity, we ignore these small differences in this notebook.
We scraped the lens galaxy photometry and redshifts from the above papers into three datafiles. All the files contain LensName, which we used to match the data of the same lens systems in different datafiles. The second cell below merges all the information needed.
End of explanation
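To make the sigma-level arithmetic of the discussion above explicit, the quoted numbers can be checked with a few lines (the values are the ones stated in the text, not recomputed from the catalogs):
# values quoted in the discussion above (OM10 lenses vs. SDSS LRGs)
mean_om10, std_om10 = 19.25, 1.50
mean_lrg, std_lrg = 18.26, 0.49
combined_std = np.sqrt(std_om10**2 + std_lrg**2)        # ~1.57
sigma_level = abs(mean_om10 - mean_lrg) / combined_std  # ~0.63
print("agreement at %.2f sigma" % sigma_level)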
%%capture
db.gaussian_reweight(np.mean(redshift), np.std(redshift))
matplotlib.rc('text', usetex=False)
gr = db.sample['g_SDSS_lens'] - db.sample['r_SDSS_lens']
ri = db.sample['r_SDSS_lens'] - db.sample['i_SDSS_lens']
iz = db.sample['i_SDSS_lens'] - db.sample['z_SDSS_lens']
data = Table({'i_SDSS_lens': db.sample['i_SDSS_lens'], 'ZLENS': db.sample['ZLENS'], 'gr': gr, 'ri': ri, 'iz': iz}, names=names)
features, labels = plotting.extract_features(data, names)
fig1 = corner.corner(features, labels=labels, color='blue', weights=db.sample['weight'], smooth=1.3, hist_kwargs=dict(normed=True, alpha=1))
# overlay the real SL2S lens photometry (red) on the mock cornerplot, mirroring Section 1
realdata = Table({'i_SDSS_lens': iReal, 'ZLENS': redshiftReal, 'gr': grReal, 'ri': riReal, 'iz': izReal}, names=names)
realfeatures, labels = plotting.extract_features(realdata, names)
corner.corner(realfeatures, labels=labels, color='red', smooth=1.0, fig=fig1, hist_kwargs=dict(normed=True))
Explanation: We only have around 35 real lenses to use to weight the mock lenses. While this is not a large sample, we can still plot the synthetically colored lenses after reweighting the mock lenses to match the real lens redshift distribution. Ideally, the distributions of the data should be similar, and the blue contours and the red contours should have significant overlap.
End of explanation
# Read the data
pd.set_option('display.max_columns', None)
qsos = pd.read_csv("https://raw.githubusercontent.com/KIPAC/StatisticalMethods/master/examples/SDSScatalog/data/qso10000.csv",index_col=0)
# Clean out extreme colors and bad magnitudes:
qsos = qsos[(qsos["dered_r"] > -9999) & (qsos["g_r_color"] > -10) & (qsos["g_r_color"] < 10) & (qsos["mag_i"]<23.6)]
qso = qsos.as_matrix()
matplotlib.rc('text', usetex=False)
zReal = qsos["spec_z"].as_matrix()
iReal = qsos["mag_i"].as_matrix()
grReal = qsos["g_r_color"].as_matrix()
riReal = qsos["r_i_color"].as_matrix()
izReal = qsos["i_z_color"].as_matrix()
db.gaussian_reweight(np.mean(zReal), np.std(zReal))
Explanation: Discussion
The synthetic magnitude comparisons between OM10 lens galaxies and similarly massive galaxies found in CFHT data show that the lenspop synthetic lens galaxy colors are accurate.
Specifically, the mean $i$-band magnitude of the 200 selected lenses was 19.25, with a standard deviation of 1.50. The CFHT data have a mean $i$-band magnitude of 19.11, with a standard deviation of 0.87.
Using the central limit theorem, justified by the fact that there are more than 30 data points, we can assume that both the OM10 lens and CFHT magnitude distributions are approximately normal. From this assumption, we can calculate the sigma-level difference.
The difference between the two mean magnitudes is 0.14, and the sum of the squared standard deviations (variances) is 3.00. Thus, the combined standard deviation is $3.00^{0.5} = 1.73$. Since $\frac{0.14}{1.73} = 0.08$, the two distributions agree at the 0.08 sigma level.
3 Comparing the Lensed Quasar Synthetic Photometry with SDSS Quasars
The last comparison that we make is with the lensed sources. This file contains magnitudes and redshifts for 10,000 SDSS quasars. We will compare the synthetically colored OM10 lensed quasars with this data. The process is the same as above.
End of explanation
db.noissify_quasars(np.std(iReal))
%%capture
gr = db.sample['g_SDSS_quasar'] - db.sample['r_SDSS_quasar']
ri = db.sample['r_SDSS_quasar'] - db.sample['i_SDSS_quasar']
iz = db.sample['i_SDSS_quasar'] - db.sample['z_SDSS_quasar']
names = ['i_SDSS_quasar', 'ZSRC', 'gr', 'ri', 'iz']
data = Table({'i_SDSS_quasar':db.sample['i_SDSS_quasar'], 'ZSRC':db.sample['ZSRC'], 'gr':gr, 'ri':ri, 'iz':iz}, names=names)
features, labels = plotting.extract_features(data, names)
fig1 = corner.corner(features, labels=labels, color='blue', weights=db.sample['weight'], smooth=1.3, hist_kwargs=dict(normed=True, alpha=1))
matplotlib.rc('text', usetex=False)
data = Table({'i_SDSS_quasar': iReal, 'ZSRC': zReal, 'gr': grReal, 'ri': riReal, 'iz': izReal}, names=names)
features, labels = plotting.extract_features(data, names)
corner.corner(features, labels=labels, color='red', smooth=1.3, hist_kwargs=dict(normed=True), fig=fig1)
Explanation: The synthetic quasar colors in OM10 show less scatter than the real data, so we can add noise by doing the following:
End of explanation |
13,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read a forward operator and display sensitivity maps
Forward solutions can be read using read_forward_solution in Python.
Step1: Show gain matrix a.k.a. leadfield matrix with sensitivity map
Step2: Show sensitivity of each sensor type to dipoles in the source space | Python Code:
# Author: Alexandre Gramfort <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
fwd = mne.read_forward_solution(fname, surf_ori=True)
leadfield = fwd['sol']['data']
print("Leadfield size : %d x %d" % leadfield.shape)
Explanation: Read a forward operator and display sensitivity maps
Forward solutions can be read using read_forward_solution in Python.
End of explanation
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)
picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)
fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)
fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)
for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):
im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',
cmap='RdBu_r')
ax.set_title(ch_type.upper())
ax.set_xlabel('sources')
ax.set_ylabel('sensors')
plt.colorbar(im, ax=ax, cmap='RdBu_r')
Explanation: Show gain matrix a.k.a. leadfield matrix with sensitivity map
End of explanation
grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')
mag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
plt.figure()
plt.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],
bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],
color=['c', 'b', 'k'])
plt.title('Normal orientation sensitivity')
plt.xlabel('sensitivity')
plt.ylabel('count')
plt.legend()
# Cautious smoothing to see actual dipoles
grad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[0, 50, 100]))
# Note. The source space uses min-dist and therefore discards most
# superficial dipoles. This is why parts of the gyri are not covered.
Explanation: Show sensitivity of each sensor type to dipoles in the source space
End of explanation |
13,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is one vector per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or make vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resemble the vector for “Toronto Maple Leafs”.
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings)
Step1: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM
Step2: Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users
Step3: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)
Step4: Training
Word2Vec accepts several parameters that affect both training speed and quality.
min_count
min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them
Step5: size
size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto.
Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
Step6: workers
workers, the last of the major parameters (full list here) is for training parallelization, to speed up training
Step7: The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
Evaluating
Word2Vec training is an unsupervised task, there’s no good way to objectively evaluate the result. Evaluation depends on your end application.
Google has released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.
For example a syntactic analogy of comparative type is bad
Step8: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context. At the same time 'clothes' and 'closet' are less similar because they are related but not interchangeable.
Step9: Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods
Step10: which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats
Step11: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it’s not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box
Step12: You can get the probability distribution for the center word given the context words as input
Step13: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis | Python Code:
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
Explanation: Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is one vector per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or make vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resemble the vector for “Toronto Maple Leafs”.
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):
End of explanation
# create some toy data to use with the following example
import smart_open, os
if not os.path.exists('./data/'):
os.makedirs('./data/')
filenames = ['./data/f1.txt', './data/f2.txt']
for i, fname in enumerate(filenames):
with smart_open.smart_open(fname, 'w') as fout:
for line in sentences[i]:
fout.write(line + '\n')
class MySentences(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
for line in open(os.path.join(self.dirname, fname)):
yield line.split()
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)
print(model)
print(model.wv.vocab)
Explanation: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence…
For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:
End of explanation
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training
new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator
new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter)
# can be a non-repeatable, 1-pass generator
print(new_model)
print(model.wv.vocab)
Explanation: Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator. In general it runs iter+1 passes. By the way, the default value is iter=5 to comply with Google's word2vec in C language.
1. The first pass collects words and their frequencies to build an internal dictionary tree structure.
2. The second pass trains the neural model.
These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way:
End of explanation
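As a small, hedged sketch of the kind of preprocessing mentioned above (the exact cleaning steps are up to you and are not part of gensim itself), the iterator could lowercase tokens and drop purely numeric ones:
class MyCleanedSentences(object):
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            for line in open(os.path.join(self.dirname, fname)):
                # lowercase every token and drop tokens that are purely numeric
                yield [word.lower() for word in line.split() if not word.isdigit()]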
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'
class MyText(object):
def __iter__(self):
for line in open(lee_train_file):
# assume there's one document per line, tokens separated by whitespace
yield line.lower().split()
sentences = MyText()
print(sentences)
Explanation: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):
End of explanation
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)
Explanation: Training
Word2Vec accepts several parameters that affect both training speed and quality.
min_count
min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them:
End of explanation
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)
Explanation: size
size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto.
Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
End of explanation
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
Explanation: workers
workers, the last of the major parameters (full list here) is for training parallelization, to speed up training:
End of explanation
model.accuracy('./datasets/questions-words.txt')
Explanation: The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
Evaluating
Word2Vec training is an unsupervised task, there’s no good way to objectively evaluate the result. Evaluation depends on your end application.
Google has released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.
For example, a syntactic analogy of the comparative type is bad:worse;good:?. There are a total of 9 types of syntactic comparison in the dataset, such as plural nouns and nouns of opposite meaning.
The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?).
Gensim supports the same evaluation set, in exactly the same format:
End of explanation
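The ~229MB memory estimate quoted in the Memory section above is easy to reproduce; as a quick sanity check (plain arithmetic, nothing gensim-specific):
vocab_size = 100000
vector_size = 200
bytes_per_float = 4   # single precision
num_matrices = 3      # matrices held in RAM, as described above
approx_mb = vocab_size * vector_size * bytes_per_float * num_matrices / 1024.0**2
print("approx. %.0f MB" % approx_mb)   # ~229 MB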
model.evaluate_word_pairs(test_data_dir + 'wordsim353.tsv')
Explanation: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context. At the same time 'clothes' and 'closet' are less similar because they are related but not interchangeable.
End of explanation
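As a short usage sketch: restrict_vocab is simply passed to accuracy(), and evaluate_word_pairs() accepts your own tab-separated file of word pairs and similarity scores laid out like wordsim353.tsv (the custom file name below is hypothetical):
# restrict the analogy evaluation to the 30,000 most frequent words
model.accuracy('./datasets/questions-words.txt', restrict_vocab=30000)

# evaluate on a custom word-pair file (hypothetical path; same format as WS-353:
# word1<TAB>word2<TAB>human similarity score)
model.evaluate_word_pairs('./datasets/my_domain_wordpairs.tsv')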
from tempfile import mkstemp
fs, temp_path = mkstemp("gensim_temp") # creates a temp file
model.save(temp_path) # save the model
new_model = gensim.models.Word2Vec.load(temp_path) # open the model
Explanation: Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods:
End of explanation
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter)
# cleaning up temp
os.close(fs)
os.remove(temp_path)
Explanation: which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# using gzipped/bz2 input works too, no need to unzip:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
Online training / Resuming training
Advanced users can load a model and continue training it with more sentences and new vocabulary words:
End of explanation
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)
model.doesnt_match("input is lunch he sentence cat".split())
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))
Explanation: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it’s not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box:
End of explanation
print(model.predict_output_word(['emergency', 'beacon', 'received']))
Explanation: You can get the probability distribution for the center word given the context words as input:
End of explanation
model['tree'] # raw NumPy vector of a word
Explanation: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis:
End of explanation |
13,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Custom layers
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https
Step2: Layers
Step3: The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer),
Conv2D, LSTM, BatchNormalization, Dropout, and many others.
Step4: Implementing custom layers
The best way to implement your own layer is extending the tf.keras.Layer class and implementing
Step5: Note that you don't have to wait until build is called to create your variables, you can also create them in __init__.
Overall code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a github issue or, even better, sending us a pull request!
Models
Step6: Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
tfe = tf.contrib.eager
tf.enable_eager_execution()
Explanation: Custom layers
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" /><span>Run in Google Colab</span></a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /><span>View source on GitHub</span></a></td></table>
We recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.
End of explanation
# In the tf.keras.layers package, layers are objects. To construct a layer,
# simply construct the object. Most layers take as a first argument the number
# of output dimensions / channels.
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used, but it can be provided if you want to
# specify it manually, which is useful in some complex models.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
Explanation: Layers: common sets of useful operations
Most of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables.
Many machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as well as easy ways for you to write your own application-specific layers either from scratch or as the composition of existing layers.
TensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.
End of explanation
# To use a layer, simply call it.
layer(tf.zeros([10, 5]))
# Layers have many useful methods. For example, you can inspect all variables
# in a layer by calling layer.variables. In this case a fully-connected layer
# will have variables for weights and biases.
layer.variables
# The variables are also accessible through nice accessors
layer.kernel, layer.bias
Explanation: The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer),
Conv2D, LSTM, BatchNormalization, Dropout, and many others.
End of explanation
class MyDenseLayer(tf.keras.layers.Layer):
def __init__(self, num_outputs):
super(MyDenseLayer, self).__init__()
self.num_outputs = num_outputs
def build(self, input_shape):
self.kernel = self.add_variable("kernel",
shape=[input_shape[-1].value,
self.num_outputs])
def call(self, input):
return tf.matmul(input, self.kernel)
layer = MyDenseLayer(10)
print(layer(tf.zeros([10, 5])))
print(layer.variables)
Explanation: Implementing custom layers
The best way to implement your own layer is extending the tf.keras.Layer class and implementing:
* __init__ , where you can do all input-independent initialization
* build, where you know the shapes of the input tensors and can do the rest of the initialization
* call, where you do the forward computation
Note that you don't have to wait until build is called to create your variables, you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will operate on. On the other hand, creating variables in __init__ would mean that shapes required to create the variables will need to be explicitly specified.
End of explanation
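For contrast, here is a minimal sketch (not from the original tutorial) of the same dense layer with its variable created in __init__ instead of build; the input dimension then has to be supplied explicitly, which is exactly the trade-off described above:
class MyDenseLayerEagerShape(tf.keras.layers.Layer):
  def __init__(self, num_outputs, input_dim):
    super(MyDenseLayerEagerShape, self).__init__()
    # Creating the variable here means the caller must already know input_dim.
    self.kernel = self.add_variable("kernel",
                                    shape=[input_dim, num_outputs])

  def call(self, input):
    return tf.matmul(input, self.kernel)

layer = MyDenseLayerEagerShape(10, 5)
print(layer(tf.zeros([10, 5])))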
class ResnetIdentityBlock(tf.keras.Model):
def __init__(self, kernel_size, filters):
super(ResnetIdentityBlock, self).__init__(name='')
filters1, filters2, filters3 = filters
self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
self.bn2a = tf.keras.layers.BatchNormalization()
self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
self.bn2b = tf.keras.layers.BatchNormalization()
self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
self.bn2c = tf.keras.layers.BatchNormalization()
def call(self, input_tensor, training=False):
x = self.conv2a(input_tensor)
x = self.bn2a(x, training=training)
x = tf.nn.relu(x)
x = self.conv2b(x)
x = self.bn2b(x, training=training)
x = tf.nn.relu(x)
x = self.conv2c(x)
x = self.bn2c(x, training=training)
x += input_tensor
return tf.nn.relu(x)
block = ResnetIdentityBlock(1, [1, 2, 3])
print(block(tf.zeros([1, 2, 3, 3])))
print([x.name for x in block.variables])
Explanation: Note that you don't have to wait until build is called to create your variables, you can also create them in __init__.
Overall code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a github issue or, even better, sending us a pull request!
Models: composing layers
Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a resnet is a composition of convolutions, batch normalizations, and a shortcut.
The main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model.
End of explanation
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(2, 1,
padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(3, (1, 1)),
tf.keras.layers.BatchNormalization()])
my_seq(tf.zeros([1, 2, 3, 3]))
Explanation: Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential
End of explanation |
13,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Laminar Pipe Flow</h1>
In this first tutorial, you will simulate a laminar pipe flow using the cad file <FONT FACE="courier" style="color
Step1: The wall shear stress is defined | Python Code:
D = 2.5e-2 # m
nu = 15.e-6 #m^2/s
rho = 1.2 #kg/m^3
mu = rho * nu  # dynamic viscosity: mu = rho * nu, since the kinematic viscosity is nu = mu / rho
R = D/2
Re = 500.
Ub = Re * nu / D
print("Bulk velocity= %2.2f m/s" %Ub)
import numpy as np
n = 30
r = np.linspace(0,R,n)
U = 2 * Ub * (1 - np.power(r,2)/R**2)
import matplotlib.pyplot as plt
fontlabel = {'fontsize': 14}  # axis-label font settings (this dict was not defined in the original snippet)
plt.plot(r, U, linewidth=2)
plt.xlabel(r"$r$ (m)", fontdict = fontlabel)
plt.ylabel(r"$U(r)$ (m/s)", fontdict = fontlabel)
plt.xlim(0,R)
plt.show()
Explanation: <h1>Laminar Pipe Flow</h1>
In this first tutorial, you will simulate a laminar pipe flow using the cad file <FONT FACE="courier" style="color:blue">CAD>PipeD0.025L2.5VOF.iges</FONT>. The pipe inner diameter is 2.5 cm and the pipe is 250 cm long.
<img src="CAD/Pipe.png">
The Junior course of Fluid Mechanics has taught you that:
<ul>
<li> The entrance length for laminar flow is given by
$$
\frac{L_e}{D}=0.05Re=0.05\frac{U_{b}D}{\nu}
$$
</li>
<li> The pressure gradient is constant in the fully developed region ($x > L_e$)</li>
<li> The velocity profile in the fully developed region is governed by the reduced streamwise momentum equation
$$
0=-\frac{dP}{dx}+\frac{\mu}{r}\frac{d}{dr}r\frac{dU}{dr}
$$
</li>
<li>The solution of this equation is
$$
U(r) = \frac{R^2}{4\mu}\left(-\frac{dP}{dx}\right)\left(1-\frac{r^2}{R^2}\right)=2U_b\left(1-\frac{r^2}{R^2}\right)
$$
</li>
<li>Hence,
$$
U_b=\frac{R^2}{8\mu}\left(-\frac{dP}{dx}\right)
$$
<li> Finally, the wall shear stress is proportional to the pressure drop:
$$
-\frac{dP}{dx}=\frac{2\tau_w}{R}
$$
</ul>
<p class='alert alert-danger'>
Using <a href="https://www.simscale.com">https://www.simscale.com</a>, simulate a laminar flow. Verify and Validate (V & V) your simulation.
</p>
End of explanation
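Before comparing with the simulation, it is worth evaluating the quantities these formulas promise; the sketch below simply plugs the numbers defined earlier in this notebook into the entrance-length and pressure-gradient relations:
L_pipe = 2.5                       # m, pipe length from the problem statement
Le = 0.05 * Re * D                 # laminar entrance length
print("Entrance length = %1.3f m (pipe length is %1.1f m)" % (Le, L_pipe))

mdpdx_check = 8 * mu * Ub / R**2   # fully developed pressure gradient, -dP/dx
print("-dP/dx = %1.4f Pa/m" % mdpdx_check)
print("Pressure drop over the fully developed region = %1.4f Pa" % (mdpdx_check * (L_pipe - Le)))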
tauw = - mu * (U[n-1] - U[n-2])/(r[n-1] - r[n-2])
print("wall shear stress = %1.4f Pa" %tauw)
mdpdx = 8 * mu *Ub / R**2
tauw_e = R * mdpdx / 2
print("Exact wall shear stress = %1.4f Pa" %tauw_e)
Explanation: The wall shear stress is defined:
$$
\tau_w = -\mu \left. \frac{dU}{dr} \right|_{r=R}
$$
End of explanation |
13,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 01
Step1: This scenario, as well as all other scenarios in Flow, is parametrized by the following arguments
Step2: 2.2 VehicleParams
The VehicleParams class stores state information on all vehicles in the network. This class is used to identify the dynamical behavior of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from various get methods within this class.
The initial configuration of this class describes the number of vehicles in the network at the start of every simulation, as well as the properties of these vehicles. We begin by creating an empty VehicleParams object.
Step3: Once this object is created, vehicles may be introduced using the add method. This method specifies the types and quantities of vehicles at the start of a simulation rollout. For a description of the various arguments associated with the add method, we refer the reader to the following documentation (VehicleParams.add).
When adding vehicles, their dynamical behaviors may be specified either by the simulator (default), or by user-generated models. For longitudinal (acceleration) dynamics, several prominent car-following models are implemented in Flow. For this example, the acceleration behavior of all vehicles will be defined by the Intelligent Driver Model (IDM) [2].
Step4: Another controller we define is for the vehicle's routing behavior. For closed networks where the route for any vehicle is repeated, the ContinuousRouter controller is used to perpetually reroute all vehicles to the initial set route.
Step5: Finally, we add 22 vehicles of type "human" with the above acceleration and routing behavior into the Vehicles class.
Step6: 2.3 NetParams
NetParams are network-specific parameters used to define the shape and properties of a network. Unlike most other parameters, NetParams may vary drastically depending on the specific network configuration, and accordingly most of its parameters are stored in additional_params. In order to determine which additional_params variables may be needed for a specific scenario, we refer to the ADDITIONAL_NET_PARAMS variable located in the scenario file.
Step7: Importing the ADDITIONAL_NET_PARAMS dict from the ring road scenario, we see that the required parameters are
Step8: 2.4 InitialConfig
InitialConfig specifies parameters that affect the positioning of vehicles in the network at the start of a simulation. These parameters can be used to limit the edges and number of lanes vehicles originally occupy, and provide a means of adding randomness to the starting positions of vehicles. In order to introduce a small initial disturbance to the system of vehicles in the network, we set the perturbation term in InitialConfig to 1m.
Step9: 2.5 TrafficLightParams
TrafficLightParams are used to describe the positions and types of traffic lights in the network. These inputs are outside the scope of this tutorial, and instead are covered in exercise06_traffic_lights.ipynb. For our example, we create an empty TrafficLightParams object, thereby ensuring that none are placed on any nodes.
Step10: 3. Setting up an Environment
Several environments in Flow exist to train autonomous agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. These environments are often scenario or task specific; however, some can be deployed on an ambiguous set of scenarios as well. One such environment, AccelEnv, may be used to train a variable number of autonomous vehicles in a fully observable network with a static total number of vehicles.
Step11: Although we will not be training any autonomous agents in this exercise, the use of an environment allows us to view the cumulative reward that simulation rollouts receive in the absence of autonomy.
Environments in Flow are parametrized by three components
Step12: 3.2 EnvParams
EnvParams specify environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. Much like NetParams, the attributes associated with this parameter are mostly environment specific, and can be found in the environment's ADDITIONAL_ENV_PARAMS dictionary.
Step13: Importing the ADDITIONAL_ENV_PARAMS variable, we see that it consists of only one entry, "target_velocity", which is used when computing the reward function associated with the environment. We use this default value when generating the EnvParams object.
Step14: 4. Setting up and Running the Experiment
Once the inputs to the scenario and environment classes are ready, we are ready to set up an Experiment object.
Step15: These objects may be used to simulate rollouts in the absence of reinforcement learning agents, as well as acquire behaviors and rewards that may be used as a baseline with which to compare the performance of the learning agent. In this case, we choose to run our experiment for one rollout consisting of 3000 steps (300 s).
Note
Step16: As we can see from the above simulation, the initial perturbations in the network propagate and intensify, eventually leading to the formation of stop-and-go waves after approximately 180s.
5. Visualizing Post-Simulation
Once the simulation is done, a .xml file will be generated in the location of the specified emission_path in SumoParams (assuming this parameter has been specified) under the name of the scenario. In our case, this is
Step17: The .xml file contains various vehicle-specific parameters at every time step. This information is transferred to a .csv file if the convert_to_csv parameter in exp.run() is set to True. This file looks as follows | Python Code:
from flow.scenarios.loop import LoopScenario
Explanation: Tutorial 01: Running Sumo Simulations
This tutorial walks through the process of running non-RL traffic simulations in Flow. Simulations of this form act as non-autonomous baselines and depict the behavior of human dynamics on a network. Similar simulations may also be used to evaluate the performance of hand-designed controllers on a network. This tutorial focuses primarily on the former use case, while an example of the latter may be found in exercise07_controllers.ipynb.
In this exercise, we simulate an initially perturbed single-lane ring road. We witness in simulation that, as time advances, the initial perturbations do not dissipate, but instead propagate and expand until vehicles are forced to periodically stop and accelerate. For more information on this behavior, we refer the reader to the following article [1].
1. Components of a Simulation
All simulations, both in the presence and absence of RL, require two components: a scenario, and an environment. Scenarios describe the features of the transportation network used in simulation. This includes the positions and properties of nodes and edges constituting the lanes and junctions, as well as properties of the vehicles, traffic lights, inflows, etc. in the network. Environments, on the other hand, initialize, reset, and advance simulations, and act as the primary interface between the reinforcement learning algorithm and the scenario. Moreover, custom environments may be used to modify the dynamical features of a scenario.
2. Setting up a Scenario
Flow contains a plethora of pre-designed scenarios used to replicate highways, intersections, and merges in both closed and open settings. All these scenarios are located in flow/scenarios. In order to recreate a ring road network, we begin by importing the scenario LoopScenario.
End of explanation
name = "ring_example"
Explanation: This scenario, as well as all other scenarios in Flow, is parametrized by the following arguments:
* name
* vehicles
* net_params
* initial_config
* traffic_lights
These parameters allow a single scenario to be recycled for a multitude of different network settings. For example, LoopScenario may be used to create ring roads of variable length with a variable number of lanes and vehicles.
2.1 Name
The name argument is a string variable depicting the name of the scenario. This has no effect on the type of network created.
End of explanation
from flow.core.params import VehicleParams
vehicles = VehicleParams()
Explanation: 2.2 VehicleParams
The VehicleParams class stores state information on all vehicles in the network. This class is used to identify the dynamical behavior of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from various get methods within this class.
The initial configuration of this class describes the number of vehicles in the network at the start of every simulation, as well as the properties of these vehicles. We begin by creating an empty VehicleParams object.
End of explanation
from flow.controllers.car_following_models import IDMController
Explanation: Once this object is created, vehicles may be introduced using the add method. This method specifies the types and quantities of vehicles at the start of a simulation rollout. For a description of the various arguments associated with the add method, we refer the reader to the following documentation (VehicleParams.add).
When adding vehicles, their dynamical behaviors may be specified either by the simulator (default), or by user-generated models. For longitudinal (acceleration) dynamics, several prominent car-following models are implemented in Flow. For this example, the acceleration behavior of all vehicles will be defined by the Intelligent Driver Model (IDM) [2].
End of explanation
from flow.controllers.routing_controllers import ContinuousRouter
Explanation: Another controller we define is for the vehicle's routing behavior. For closed networks where the route for any vehicle is repeated, the ContinuousRouter controller is used to perpetually reroute all vehicles to the initial set route.
End of explanation
vehicles.add("human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
Explanation: Finally, we add 22 vehicles of type "human" with the above acceleration and routing behavior into the Vehicles class.
End of explanation
from flow.scenarios.loop import ADDITIONAL_NET_PARAMS
print(ADDITIONAL_NET_PARAMS)
Explanation: 2.3 NetParams
NetParams are network-specific parameters used to define the shape and properties of a network. Unlike most other parameters, NetParams may vary drastically depending on the specific network configuration, and accordingly most of its parameters are stored in additional_params. In order to determine which additional_params variables may be needed for a specific scenario, we refer to the ADDITIONAL_NET_PARAMS variable located in the scenario file.
End of explanation
from flow.core.params import NetParams
net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)
Explanation: Importing the ADDITIONAL_NET_PARAMS dict from the ring road scenario, we see that the required parameters are:
length: length of the ring road
lanes: number of lanes
speed: speed limit for all edges
resolution: resolution of the curves on the ring. Setting this value to 1 converts the ring to a diamond.
At times, other inputs may be needed from NetParams to recreate proper network features/behavior. These requirements can be found in the scenario's documentation. For the ring road, no attributes are needed aside from the additional_params terms. Furthermore, for this exercise, we use the scenario's default parameters when creating the NetParams object.
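As a quick sketch of how those defaults could be overridden (the particular values below are arbitrary and only for illustration):
custom_params = ADDITIONAL_NET_PARAMS.copy()
custom_params["length"] = 2 * ADDITIONAL_NET_PARAMS["length"]  # a longer ring
custom_params["lanes"] = 2                                      # two lanes instead of one
net_params_custom = NetParams(additional_params=custom_params)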
End of explanation
from flow.core.params import InitialConfig
initial_config = InitialConfig(spacing="uniform", perturbation=1)
Explanation: 2.4 InitialConfig
InitialConfig specifies parameters that affect the positioning of vehicles in the network at the start of a simulation. These parameters can be used to limit the edges and number of lanes vehicles originally occupy, and provide a means of adding randomness to the starting positions of vehicles. In order to introduce a small initial disturbance to the system of vehicles in the network, we set the perturbation term in InitialConfig to 1m.
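For instance, to start from perfectly uniform spacing with no disturbance at all, the perturbation term can simply be set to zero (a minimal sketch using only the arguments shown in this exercise):
quiet_start = InitialConfig(spacing="uniform", perturbation=0)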
End of explanation
from flow.core.params import TrafficLightParams
traffic_lights = TrafficLightParams()
Explanation: 2.5 TrafficLightParams
TrafficLightParams are used to describe the positions and types of traffic lights in the network. These inputs are outside the scope of this tutorial, and instead are covered in exercise06_traffic_lights.ipynb. For our example, we create an empty TrafficLightParams object, thereby ensuring that none are placed on any nodes.
End of explanation
from flow.envs.loop.loop_accel import AccelEnv
Explanation: 3. Setting up an Environment
Several environments in Flow exist to train autonomous agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. These environments are often scenario or task specific; however, some can be deployed on a broad set of scenarios as well. One such environment, AccelEnv, may be used to train a variable number of autonomous vehicles in a fully observable network with a static total number of vehicles.
End of explanation
from flow.core.params import SumoParams
sumo_params = SumoParams(sim_step=0.1, render=True, emission_path='data')
Explanation: Although we will not be training any autonomous agents in this exercise, the use of an environment allows us to view the cumulative reward simulation rollouts receive in the absence of autonomy.
Environments in Flow are parametrized by three components:
* EnvParams
* SumoParams
* Scenario
3.1 SumoParams
SumoParams specifies simulation-specific variables. These variables include the length of a simulation step (in seconds) and whether to render the GUI when running the experiment. For this example, we consider a simulation step length of 0.1s and activate the GUI.
Another useful parameter is emission_path, which is used to specify the path where the emissions output will be generated. They contain a lot of information about the simulation, for instance the position and speed of each car at each time step. If you do not specify any emission path, the emission file will not be generated. More on this in Section 5.
End of explanation
from flow.envs.loop.loop_accel import ADDITIONAL_ENV_PARAMS
print(ADDITIONAL_ENV_PARAMS)
Explanation: 3.2 EnvParams
EnvParams specify environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. Much like NetParams, the attributes associated with this parameter are mostly environment specific, and can be found in the environment's ADDITIONAL_ENV_PARAMS dictionary.
End of explanation
from flow.core.params import EnvParams
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
Explanation: Importing the ADDITIONAL_ENV_PARAMS variable, we see that it consists of only one entry, "target_velocity", which is used when computing the reward function associated with the environment. We use this default value when generating the EnvParams object.
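If a different target were desired, the default could be overridden when building EnvParams; a sketch with an arbitrary illustrative value:
custom_env_params = EnvParams(
    additional_params={**ADDITIONAL_ENV_PARAMS, "target_velocity": 30})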
End of explanation
from flow.core.experiment import Experiment
Explanation: 4. Setting up and Running the Experiment
Once the inputs to the scenario and environment classes are ready, we can set up an Experiment object.
End of explanation
# create the scenario object
scenario = LoopScenario(name="ring_example",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config,
traffic_lights=traffic_lights)
# create the environment object
env = AccelEnv(env_params, sumo_params, scenario)
# create the experiment object
exp = Experiment(env)
# run the experiment for a set number of rollouts / time steps
_ = exp.run(1, 3000, convert_to_csv=True)
Explanation: These objects may be used to simulate rollouts in the absence of reinforcement learning agents, as well as acquire behaviors and rewards that may be used as a baseline with which to compare the performance of the learning agent. In this case, we choose to run our experiment for one rollout consisting of 3000 steps (300 s).
Note: When executing the below code, remember to click on the <img style="display:inline;" src="img/play_button.png"> Play button after the GUI is rendered.
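As a variation, statistics can be averaged over several rollouts rather than one; to skip the GUI entirely for such batch runs, rebuild the environment with render=False in SumoParams. A sketch (left commented out so it is not re-executed here):
# _ = exp.run(2, 3000, convert_to_csv=True)  # e.g. two rollouts of 3000 steps each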
End of explanation
import os
emission_location = os.path.join(exp.env.sim_params.emission_path, exp.env.scenario.name)
print(emission_location + '-emission.xml')
Explanation: As we can see from the above simulation, the initial perturbations in the network propagate and intensify, eventually leading to the formation of stop-and-go waves after approximately 180s.
5. Visualizing Post-Simulation
Once the simulation is done, a .xml file will be generated in the location of the specified emission_path in SumoParams (assuming this parameter has been specified) under the name of the scenario. In our case, this is:
End of explanation
import pandas as pd
pd.read_csv(emission_location + '-emission.csv')
Explanation: The .xml file contains various vehicle-specific parameters at every time step. This information is transferred to a .csv file if the convert_to_csv parameter in exp.run() is set to True. This file looks as follows:
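As one example of working with that dataframe, the per-time-step mean speed can be extracted with pandas. The column names used below ("time", "speed") are assumptions about the emission format; check the dataframe's columns on your own output first:
emission_df = pd.read_csv(emission_location + '-emission.csv')
if {"time", "speed"}.issubset(emission_df.columns):
    mean_speed = emission_df.groupby("time")["speed"].mean()
    print(mean_speed.tail())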
End of explanation |
13,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting the SPY's Future Closing Price with a Multi-Model Forecast
Creating many machine learning models to predict future price movements from Redis.
How?
Uses pricing metrics (hlocv)
Streamline development and deployment of machine learning forecasts by storing large, pre-trained models living in Redis
Custom rolled dataset (takes about 7 hours per 1 ticker)
Technical indicators
Why?
Took too long to manually rebuild the dataset, and build + tune new models
Improve model accuracy by tracking success (situational/seasonal risks)
Wanted simple, consistent delivery of results
Service layer for abstracting model implementation
Multi-tenant, distributed machine learning cloud
Team needed Jupyter integration
Data security - so it had to run on-premise and cloud
Now it takes 30 minutes to build the dataset and 5 minutes to make new predictions
Sample SPY Multi-Model Forecast
Setup the Environment
Load the shared core, methods, and environment before starting processing
Step1: Configure the job
Step2: Start Forecasting
Step3: Wait for the job to finish
Step4: Get Forecast Accuracies
Step5: Get the Analysis Images
Step6: Get the Recent Machine Learning Jobs
Step7: Redis Machine Learning Manifest
Jobs use a manifest to prevent concurrent jobs in-flight and models from colliding between users and historical machine learning jobs
A manifest contains | Python Code:
from __future__ import print_function
import sys, os, requests, json, datetime
# Load the environment and login the user
from src.common.load_redten_ipython_env import user_token, user_login, csv_file, run_job, core, api_urls, ppj, rt_url, rt_user, rt_pass, rt_email, lg, good, boom, anmt, mark, ppj, uni_key, rest_login_as_user, rest_full_login, wait_for_job_to_finish, wait_on_job, get_job_analysis, get_job_results, get_analysis_manifest, get_job_cache_manifest, build_prediction_results, build_forecast_results, get_job_cache_manifest, search_ml_jobs, show_logs, show_errors, ipyImage, ipyHTML, ipyDisplay, pd, np
Explanation: Predicting the SPY's Future Closing Price with a Multi-Model Forecast
Creating many machine learning models to predict future price movements from Redis.
How?
Uses pricing metrics (hlocv)
Streamline development and deployment of machine learning forecasts by storing large, pre-trained models living in Redis
Custom rolled dataset (takes about 7 hours per 1 ticker)
Technical indicators
Why?
Took too long to manually rebuild the dataset, and build + tune new models
Improve model accuracy by tracking success (situational/seasonal risks)
Wanted simple, consistent delivery of results
Service layer for abstracting model implementation
Multi-tenant, distributed machine learning cloud
Team needed Jupyter integration
Data security - so it had to run on-premise and cloud
Now it takes 30 minutes to build the dataset and 5 minutes to make new predictions
Sample SPY Multi-Model Forecast
Setup the Environment
Load the shared core, methods, and environment before starting processing
End of explanation
# dataset name is the ticker
ds_name = "SPY"
# Label and description for job
title = str(ds_name) + " Forecast v5 - " + str(uni_key())
desc = "Forecast simulation - " + str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
# Whats the algorithm model you want to use?
algo_name = "xgb-regressor"
# If your dataset is stored in redis, you can pass in the location
# to the dataset like: <redis endpoint>:<key>
rloc = ""
# If your dataset is stored in S3, you can pass in the location
# to the dataset like: <bucket>:<key>
sloc = ""
# During training what ratio of tests-vs-training do you want to use?
# Trade off smarts vs accuracy...how smart are we going?
test_ratio = 0.1
# Customize dataset samples used during the analysis using json dsl
sample_filter_rules = {}
# What column do you want to predict values?
target_column_name = "FClose"
# What columns can the algorithms use for training and learning?
feature_column_names = [ "FHigh", "FLow", "FOpen", "FClose", "FVolume" ]
# values in the Target Column
target_column_values = [ "GoodBuys", "BadBuys", "Not Finished" ]
# How many units ahead do you want to forecast?
units_ahead_set = [ 5, 10, 15, 20, 25, 30 ]
units_ahead_type = "Days"
# Prune non-int/float columns as needed:
ignore_features = [
"Ticker",
"Date",
"FDate",
"FPrice",
"DcsnDate",
"Decision"
]
# Set up the XGB parameter
# https://github.com/dmlc/xgboost/blob/master/doc/parameter.md
train_xgb = {
"learning_rate" : 0.20,
"num_estimators" : 50,
"sub_sample" : 0.20,
"col_sample_by_tree" : 0.90,
"col_sample_by_level" : 1.0,
"objective" : "reg:linear",
"max_depth" : 3,
"max_delta_step" : 0,
"min_child_weight" : 1,
"reg_alpha" : 0,
"reg_lambda" : 1,
"base_score" : 0.6,
"gamma" : 0,
"seed" : 42,
"silent" : True
}
# Predict new price points during the day
predict_row = {
"High" : 250.82,
"Low" : 245.54,
"Open" : 247.77,
"Close" : 246.24,
"Volume" : 77670266
}
Explanation: Configure the job
End of explanation
job_id = None # on success, this will store the actively running job's id
csv_file = ""
post_data = {
"predict_this_data" : predict_row,
"title" : title,
"desc" : desc,
"ds_name" : ds_name,
"target_column_name" : target_column_name,
"feature_column_names" : feature_column_names,
"ignore_features" : ignore_features,
"csv_file" : csv_file,
"rloc" : rloc,
"sloc" : sloc,
"algo_name" : algo_name,
"test_ratio" : test_ratio,
"target_column_values" : target_column_values,
"label_column_name" : target_column_name,
"prediction_type" : "Forecast",
"ml_type" : "Playbook-UnitsAhead",
"train" : train_xgb,
"tracking_type" : "",
"units_ahead_set" : units_ahead_set,
"units_ahead_type" : units_ahead_type,
"forecast_type" : "ETFPriceForecasting",
"sample_filters" : sample_filter_rules,
"predict_units_back" : 90, # how many days back should the final chart go?
"send_to_email" : [ "[email protected]" ] # comma separated list
}
anmt("Running job: " + str(title))
auth_headers = {
"Content-type": "application/json",
"Authorization" : "JWT " + str(user_token)
}
job_response = run_job(post_data=post_data, headers=auth_headers)
if job_response["status"] != "valid":
boom("Forecast job failed with error=" + str(job_response["status"]))
else:
if "id" not in job_response["data"]:
boom("Failed to create new forecast job")
else:
job_id = job_response["data"]["id"]
job_status = job_response["data"]["status"]
lg("Started Forecast job=" + str(job_id) + " with current status=" + str(job_status))
# end of if job was valid or not
Explanation: Start Forecasting
End of explanation
job_data = {}
job_report = {}
# Should hook this up to a randomized image loader...
ipyDisplay(ipyImage(url="https://media.giphy.com/media/l397998l2DT0ogare/giphy.gif"))
job_res = {}
if job_id == None:
boom("Failed to start a new job")
else:
job_res = wait_on_job(job_id)
if job_res["status"] != "SUCCESS":
boom("Job=" + str(job_id) + " failed with status=" + str(job_res["status"]) + " err=" + str(job_res["error"]))
else:
job_data = job_res["record"]
anmt("Job Report:")
lg(ppj(job_data), 5)
# end of waiting
Explanation: Wait for the job to finish
End of explanation
job_report = {}
if job_id == None:
boom("Failed to start a new job")
else:
# Get the analysis, but do not auto-show the plots
job_report = get_job_analysis(job_id, show_plots=False)
if len(job_report) == 0:
boom("Job=" + str(job_id) + " failed")
else:
lg("")
# if the job failed
# end of get job analysis
# Build the forecast accuracy dictionary from the analysis
# and show the forecast dataframes
acc_results = build_forecast_results(job_report)
for col in acc_results:
col_node = acc_results[col]
predictions_df = col_node["predictions_df"]
date_predictions_df = col_node["date_predictions_df"]
train_predictions_df = col_node["train_predictions_df"]
lg("--------------------------------------------------")
# for all columns in the accuracy dictionary:
# successful predictions above 90%...how's that error rate though?
if col_node["accuracy"] > 0.90:
good("Column=" + str(col) + " accuracy=" + str(col_node["accuracy"]) + " mse=" + str(col_node["mse"]) + " num_predictions=" + str(len(col_node["date_predictions_df"].index)))
# successful predictions between 90% and 80%...how's that error rate though?
elif 0.90 > col_node["accuracy"] > 0.80:
lg("Column=" + str(col) + " accuracy=" + str(col_node["accuracy"]) + " mse=" + str(col_node["mse"]) + " num_predictions=" + str(len(col_node["date_predictions_df"].index)))
else:
boom("Column=" + str(col) + " is not very accurate: accuracy=" + str(col_node["accuracy"]) + " mse=" + str(col_node["mse"]) + " num_predictions=" + str(len(col_node["predictions_df"].index)))
# end of header line
# show the timeseries forecast
ipyDisplay(date_predictions_df)
lg("")
# end of showing prediction results
Explanation: Get Forecast Accuracies
End of explanation
job_res = get_job_analysis(job_id, show_plots=True)
Explanation: Get the Analysis Images
End of explanation
user_token = user_login(rt_user, rt_pass, rt_url)
auth_headers = {
"Authorization" : "JWT " + str(user_token)
}
resource_url = rt_url + "/ml/run/"
query_params = {}
post_data = {}
# Get the ML Job
resource_url = rt_url + "/ml/jobs/"
lg("Running Get ML Job url=" + str(resource_url), 6)
get_response = requests.get(resource_url, params=query_params, data=post_data, headers=auth_headers)
if get_response.status_code != 201 and get_response.status_code != 200:
lg("Failed with GET Response Status=" + str(get_response.status_code) + " Reason=" + str(get_response.reason), 0)
lg("Details:\n" + str(get_response.text) + "\n", 0)
else:
lg("SUCCESS - GET Response Status=" + str(get_response.status_code) + " Reason=" + str(get_response.reason)[0:10], 5)
as_json = True
record = {}
if as_json:
record = json.loads(get_response.text)
lg(ppj(record))
# end of post for running an ML Job
Explanation: Get the Recent Machine Learning Jobs
End of explanation
job_manifest = get_job_cache_manifest(job_report)
lg(ppj(job_manifest))
Explanation: Redis Machine Learning Manifest
Jobs use a manifest to prevent concurrent jobs in-flight and models from colliding between users and historical machine learning jobs
A manifest contains:
A dictionary of Redis model locations
S3 archival locations
Tracking data for import and export across environments
Decoupled large model files (8gb files in S3) from the tracking and deployment
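Purely as an illustration of the kind of fields such a manifest might carry (this is not the actual Red10 schema; the real structure is whatever get_job_cache_manifest returns for your deployment):
example_manifest = {
    "job_id": 123,                                            # hypothetical
    "redis_models": {"SPY_5_day": "<redis endpoint>:<key>"},  # placeholder locations
    "s3_archive": "<bucket>:<key>",
    "tracking": {"imported_from": "dev", "exported_to": "prod"},
}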
End of explanation |
13,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 迁移学习和微调
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 数据预处理
数据下载
在本教程中,您将使用包含数千个猫和狗图像的数据集。下载并解压缩包含图像的 zip 文件,然后使用 tf.keras.preprocessing.image_dataset_from_directory 效用函数创建一个 tf.data.Dataset 进行训练和验证。您可以在此教程中详细了解如何加载图像。
Step3: 显示训练集中的前九个图像和标签:
Step4: 由于原始数据集不包含测试集,因此您需要创建一个。为此,请使用 tf.data.experimental.cardinality 确定验证集中有多少批次的数据,然后将其中的 20% 移至测试集。
Step5: 配置数据集以提高性能
使用缓冲预提取从磁盘加载图像,以免造成 I/O 阻塞。要详细了解这种方式,请参阅数据性能指南。
Step6: 使用数据扩充
当您没有较大的图像数据集时,最好将随机但现实的转换应用于训练图像(例如旋转或水平翻转)来人为引入样本多样性。这有助于使模型暴露于训练数据的不同方面并减少过拟合。您可以在此教程中详细了解数据扩充。
Step7: 注:当您调用 model.fit 时,这些层仅在训练过程中才会处于有效状态。在 model.evaulate 或 model.fit 中的推断模式下使用模型时,它们处于停用状态。
我们将这些层重复应用于同一个图像,然后查看结果。
Step8: 重新缩放像素值
稍后,您将下载 tf.keras.applications.MobileNetV2 作为基础模型。此模型期望像素值处于 [-1, 1] 范围内,但此时,图像中的像素值处于 [0, 255] 范围内。要重新缩放这些像素值,请使用模型随附的预处理方法。
Step9: 注:另外,您也可以使用 Rescaling 层将像素值从 [0,255] 重新缩放为 [-1, 1]。
Step10: 注:如果使用其他 tf.keras.applications,请确保查阅 API 文档以确定它们是否期望 [-1,1] 或 [0,1] 范围内的像素,或者使用随附的 preprocess_input 函数。
从预训练卷积网络创建基础模型
您将根据 Google 开发的 MobileNet V2 模型来创建基础模型。此模型已基于 ImageNet 数据集进行预训练,ImageNet 数据集是一个包含 140 万个图像和 1000 个类的大型数据集。ImageNet 是一个研究训练数据集,具有各种各样的类别,例如 jackfruit 和 syringe。此知识库将帮助我们对特定数据集中的猫和狗进行分类。
首先,您需要选择将 MobileNet V2 的哪一层用于特征提取。最后的分类层(在“顶部”,因为大多数机器学习模型的图表是从下到上的)不是很有用。相反,您将按照常见做法依赖于展平操作之前的最后一层。此层被称为“瓶颈层”。与最后一层/顶层相比,瓶颈层的特征保留了更多的通用性。
首先,实例化一个已预加载基于 ImageNet 训练的权重的 MobileNet V2 模型。通过指定 include_top=False 参数,可以加载不包括顶部分类层的网络,这对于特征提取十分理想。
Step11: 此特征提取程序将每个 160x160x3 图像转换为 5x5x1280 的特征块。我们看看它对一批示例图像做了些什么:
Step12: 特征提取
在此步骤中,您将冻结在上一步中创建的卷积基,并用作特征提取程序。此外,您还可以在其顶部添加分类器以及训练顶级分类器。
冻结卷积基
在编译和训练模型之前,冻结卷积基至关重要。冻结(通过设置 layer.trainable = False)可避免在训练期间更新给定层中的权重。MobileNet V2 具有许多层,因此将整个模型的 trainable 标记设置为 False 会冻结所有这些层。
Step13: 有关 BatchNormalization 层的重要说明
许多模型都包含 tf.keras.layers.BatchNormalization 层。此层是一个特例,应在微调的上下文中采取预防措施,如本教程后面所示。
设置 layer.trainable = False 时,BatchNormalization 层将以推断模式运行,并且不会更新其均值和方差统计信息。
解冻包含 BatchNormalization 层的模型以进行微调时,应在调用基础模型时通过传递 training = False 来使 BatchNormalization 层保持在推断模式下。否则,应用于不可训练权重的更新将破坏模型已经学习到的内容。
有关详细信息,请参阅迁移学习指南。
Step14: 添加分类头
要从特征块生成预测,请使用 tf.keras.layers.GlobalAveragePooling2D 层在 5x5 空间位置内取平均值,以将特征转换成每个图像一个向量(包含 1280 个元素)。
Step15: 应用 tf.keras.layers.Dense 层将这些特征转换成每个图像一个预测。您在此处不需要激活函数,因为此预测将被视为 logit 或原始预测值。正数预测 1 类,负数预测 0 类。
Step16: 通过使用 Keras 函数式 API 将数据扩充、重新缩放、base_model 和特征提取程序层链接在一起来构建模型。如前面所述,由于我们的模型包含 BatchNormalization 层,因此请使用 training = False。
Step17: 编译模型
在训练模型前,需要先编译模型。由于存在两个类,并且模型提供线性输出,请将二进制交叉熵损失与 from_logits=True 结合使用。
Step18: MobileNet 中的 250 万个参数被冻结,但在密集层中有 1200 个可训练参数。它们分为两个 tf.Variable 对象,即权重和偏差。
Step19: 训练模型
经过 10 个周期的训练后,您应该在验证集上看到约 94% 的准确率。
Step20: 学习曲线
我们看一下使用 MobileNet V2 基础模型作为固定特征提取程序时训练和验证准确率/损失的学习曲线。
Step21: 注:如果您想知道为什么验证指标明显优于训练指标,主要原因是 tf.keras.layers.BatchNormalization 和 tf.keras.layers.Dropout 等层会影响训练期间的准确率。在计算验证损失时,它们处于关闭状态。
在较小程度上,这也是因为训练指标报告的是某个周期的平均值,而验证指标则在经过该周期后才进行评估,因此验证指标会看到训练时间略长一些的模型。
微调
在特征提取实验中,您仅在 MobileNet V2 基础模型的顶部训练了一些层。预训练网络的权重在训练过程中未更新。
进一步提高性能的一种方式是在训练(或“微调”)预训练模型顶层的权重的同时,另外训练您添加的分类器。训练过程将强制权重从通用特征映射调整为专门与数据集相关联的特征。
注:只有在您使用设置为不可训练的预训练模型训练顶级分类器之后,才能尝试这样做。如果您在预训练模型的顶部添加一个随机初始化的分类器并尝试共同训练所有层,则梯度更新的幅度将过大(由于分类器的随机权重所致),这将导致您的预训练模型忘记它已经学习的内容。
另外,您还应尝试微调少量顶层而不是整个 MobileNet 模型。在大多数卷积网络中,层越高,它的专门程度就越高。前几层学习非常简单且通用的特征,这些特征可以泛化到几乎所有类型的图像。随着您向上层移动,这些特征越来越特定于训练模型所使用的数据集。微调的目标是使这些专用特征适应新的数据集,而不是覆盖通用学习。
解冻模型的顶层
您需要做的是解冻 base_model 并将底层设置为不可训练。随后,您应该重新编译模型(使这些更改生效的必需操作),然后恢复训练。
Step22: 编译模型
当您正在训练一个大得多的模型并且想要重新调整预训练权重时,请务必在此阶段使用较低的学习率。否则,您的模型可能会很快过拟合。
Step23: 继续训练模型
如果您已提前训练至收敛,则此步骤将使您的准确率提高几个百分点。
Step24: 在微调 MobileNet V2 基础模型的最后几层并在这些层上训练分类器时,我们来看一下训练和验证准确率/损失的学习曲线。验证损失比训练损失高得多,因此可能存在一些过拟合。
当新的训练集相对较小且与原始 MobileNet V2 数据集相似时,也可能存在一些过拟合。
经过微调后,模型在验证集上的准确率几乎达到 98%。
Step25: 评估和预测
最后,您可以使用测试集在新数据上验证模型的性能。
Step26: 现在,您可以使用此模型来预测您的宠物是猫还是狗。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet # IGNORE_COPYRIGHT: cleared by OSS licensing
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
Explanation: Transfer learning and fine-tuning
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/tutorials/images/transfer_learning"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/images/transfer_learning.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/images/transfer_learning.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/images/transfer_learning.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
</table>
In this tutorial, you will learn how to classify images of cats and dogs by using transfer learning from a pre-trained network.
A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. You can either use the pre-trained model as is or use transfer learning to customize this model to a given task.
The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a big dataset.
In this notebook, you will try two ways to customize a pre-trained model:
Feature extraction: Use the representations learned by a previous network to extract meaningful features from new samples. You simply add a new classifier, which will be trained from scratch, on top of the pre-trained model so that you can repurpose the feature maps learned previously for the dataset.
You do not need to (re)train the entire model. The base convolutional network already contains features that are generically useful for classifying pictures. However, the final, classification part of the pre-trained model is specific to the original classification task, and subsequently specific to the set of classes on which the model was trained.
Fine-tuning: Unfreeze a few of the top layers of a frozen model base and jointly train both the newly-added classifier layers and the last layers of the base model. This allows us to "fine-tune" the higher-order feature representations in the base model in order to make them more relevant for the specific task.
You will follow the general machine learning workflow.
Examine and understand the data
Build an input pipeline, in this case using Keras ImageDataGenerator
Compose the model
Load in the pre-trained base model (and pre-trained weights)
Stack the classification layers on top
Train the model
Evaluate the model
End of explanation
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
train_dataset = image_dataset_from_directory(train_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
validation_dataset = image_dataset_from_directory(validation_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
Explanation: Data preprocessing
Data download
In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a tf.data.Dataset for training and validation using the tf.keras.preprocessing.image_dataset_from_directory utility. You can learn more about loading images in this tutorial.
End of explanation
class_names = train_dataset.class_names
plt.figure(figsize=(10, 10))
for images, labels in train_dataset.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
Explanation: Show the first nine images and labels from the training set:
End of explanation
val_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(val_batches // 5)
validation_dataset = validation_dataset.skip(val_batches // 5)
print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset))
print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset))
Explanation: As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, then move 20% of them to a test set.
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
Explanation: Configure the dataset for performance
Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method, see the data performance guide.
End of explanation
data_augmentation = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.RandomFlip('horizontal'),
tf.keras.layers.experimental.preprocessing.RandomRotation(0.2),
])
Explanation: Use data augmentation
When you don't have a large image dataset, it's good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduce overfitting. You can learn more about data augmentation in this tutorial.
End of explanation
for image, _ in train_dataset.take(1):
plt.figure(figsize=(10, 10))
first_image = image[0]
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
augmented_image = data_augmentation(tf.expand_dims(first_image, 0))
plt.imshow(augmented_image[0] / 255)
plt.axis('off')
Explanation: Note: These layers are active only during training, when you call model.fit. They are inactive when the model is used in inference mode, in model.evaluate or model.predict.
Let's repeatedly apply these layers to the same image and see the result.
End of explanation
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
Explanation: Rescale pixel values
In a moment, you will download tf.keras.applications.MobileNetV2 for use as your base model. This model expects pixel values in the [-1, 1] range, but at this point the pixel values in your images are in the [0, 255] range. To rescale them, use the preprocessing method included with the model.
End of explanation
rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1./127.5, offset= -1)
Explanation: Note: Alternatively, you could rescale pixel values from [0, 255] to [-1, 1] using a Rescaling layer.
End of explanation
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
Explanation: Note: If using other tf.keras.applications, be sure to check the API doc to determine if they expect pixels in [-1, 1] or [0, 1], or use the included preprocess_input function.
Create the base model from pre-trained convnets
You will create the base model from the MobileNet V2 model developed at Google. This is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories like jackfruit and syringe. This base of knowledge will help us classify cats and dogs from our specific dataset.
First, you need to pick which layer of MobileNet V2 you will use for feature extraction. The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will follow the common practice of depending on the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer features retain more generality compared to the final/top layer.
First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction.
End of explanation
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape)
Explanation: This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features. Let's see what it does to an example batch of images:
End of explanation
base_model.trainable = False
Explanation: Feature extraction
In this step, you will freeze the convolutional base created from the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier.
Freeze the convolutional base
It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's trainable flag to False will freeze all of them.
End of explanation
# Let's take a look at the base model architecture
base_model.summary()
Explanation: Important note about BatchNormalization layers
Many models contain tf.keras.layers.BatchNormalization layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.
When you set layer.trainable = False, the BatchNormalization layer will run in inference mode and will not update its mean and variance statistics.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
For details, see the Transfer learning guide.
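Concretely, the call pattern is a one-liner, shown schematically here (inputs stands for whatever tensor is fed to the base model; the same call appears when the full model is assembled below):
features = base_model(inputs, training=False)  # keeps BatchNormalization in inference mode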
End of explanation
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
Explanation: Add a classification head
To generate predictions from the block of features, use a tf.keras.layers.GlobalAveragePooling2D layer to average over the 5x5 spatial locations, converting the features into a single vector (with 1280 elements) per image.
End of explanation
prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
Explanation: Apply a tf.keras.layers.Dense layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a logit, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0.
End of explanation
inputs = tf.keras.Input(shape=(160, 160, 3))
x = data_augmentation(inputs)
x = preprocess_input(x)
x = base_model(x, training=False)
x = global_average_layer(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = prediction_layer(x)
model = tf.keras.Model(inputs, outputs)
Explanation: Build a model by chaining together the data augmentation, rescaling, base_model and feature extractor layers using the Keras Functional API. As previously mentioned, use training = False since our model contains a BatchNormalization layer.
End of explanation
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.Adam(lr=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
Explanation: Compile the model
Compile the model before training it. Since there are two classes and the model provides a linear output, use a binary cross-entropy loss with from_logits=True.
End of explanation
len(model.trainable_variables)
Explanation: The 2.5 million parameters in MobileNet are frozen, but there are 1.2 thousand trainable parameters in the Dense layer. These are divided between two tf.Variable objects, the weights and the biases.
End of explanation
initial_epochs = 10
loss0, accuracy0 = model.evaluate(validation_dataset)
print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))
history = model.fit(train_dataset,
epochs=initial_epochs,
validation_data=validation_dataset)
Explanation: Train the model
After training for 10 epochs, you should see about 94% accuracy on the validation set.
End of explanation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
Explanation: Learning curves
Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNet V2 base model as a fixed feature extractor.
End of explanation
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
Explanation: Note: If you are wondering why the validation metrics are clearly better than the training metrics, the main factor is that layers like tf.keras.layers.BatchNormalization and tf.keras.layers.Dropout affect accuracy during training. They are turned off when calculating validation loss.
To a lesser extent, it is also because training metrics report the average for an epoch, while validation metrics are evaluated after the epoch, so validation metrics see a model that has trained slightly longer.
Fine-tuning
In the feature extraction experiment, you were only training a few layers on top of the MobileNet V2 base model. The weights of the pre-trained network were not updated during training.
One way to increase performance even further is to train (or "fine-tune") the weights of the top layers of the pre-trained model alongside the training of the classifier you added. The training process will force the weights to be tuned from generic feature maps to features associated specifically with the dataset.
Note: This should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights from the classifier) and your pre-trained model will forget what it has learned.
Also, you should try to fine-tune a small number of top layers rather than the whole MobileNet model. In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features are increasingly specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwrite the generic learning.
Un-freeze the top layers of the model
All you need to do is unfreeze the base_model and set the bottom layers to be un-trainable. Then, you should recompile the model (necessary for these changes to take effect), and resume training.
End of explanation
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer = tf.keras.optimizers.RMSprop(lr=base_learning_rate/10),
metrics=['accuracy'])
model.summary()
len(model.trainable_variables)
Explanation: Compile the model
As you are training a much larger model and want to readapt the pre-trained weights, it is important to use a lower learning rate at this stage. Otherwise, your model could overfit very quickly.
End of explanation
fine_tune_epochs = 10
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model.fit(train_dataset,
epochs=total_epochs,
initial_epoch=history.epoch[-1],
validation_data=validation_dataset)
Explanation: Continue training the model
If you trained to convergence earlier, this step will improve your accuracy by a few percentage points.
End of explanation
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.8, 1])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
Explanation: Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNet V2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting.
You may also get some overfitting as the new training set is relatively small and similar to the original MobileNet V2 dataset.
After fine-tuning, the model nearly reaches 98% accuracy on the validation set.
End of explanation
loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
Explanation: Evaluation and prediction
Finally, you can verify the performance of the model on new data using the test set.
End of explanation
#Retrieve a batch of images from the test set
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()
# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
print('Predictions:\n', predictions.numpy())
print('Labels:\n', label_batch)
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].astype("uint8"))
plt.title(class_names[predictions[i]])
plt.axis("off")
Explanation: And now you are all set to use this model to predict if your pet is a cat or a dog.
End of explanation |
13,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm2m', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-ESM2M
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land ice model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
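As the template's own comments suggest, each applicable choice is recorded with DOC.set_value; an illustrative (and deliberately commented-out) fill-in might look like the lines below. The real entries must reflect the actual GFDL-ESM2M configuration.
# DOC.set_value("ice velocity")
# DOC.set_value("ice thickness")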
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
13,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple Deep Neural Network using Keras
In this notebook, we are going to explore a deep neural network to classify MNIST dataset.
We have picked Keras, which is high level wrapper over Theano/TensorFlow for this purpose.
We are using Theano backend for this particular exercise.
*note
Step1: Step 2
Step2: We can look at the shape of the dataset
Step3: Step 3
Step4: The final preprocessing step for the input data is to convert our data type to float32 and normalize our data values to the range [0, 1].
Step5: Step 4
Step6: That may be problematic. We should have 10 different classes, one for each digit, but it looks like we only have a 1-dimensional array. Let's take a look at the labels for the first 10 training samples
Step7: Step 5
Step8: Step 6
Step9: Step 7
Step10: Step 8 | Python Code:
from __future__ import print_function
import keras
# For MNIST dataset
from keras.datasets import mnist
# Keras model module
from keras.models import Sequential
# Keras core layers
from keras.layers import Dense, Dropout, Flatten
# Keras CNN Layers
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.utils import np_utils
# %matplotlib inline
import matplotlib.pyplot as plt
Explanation: A simple Deep Neural Network using Keras
In this notebook, we are going to explore a deep neural network to classify the MNIST dataset.
We have picked Keras, which is a high-level wrapper over Theano/TensorFlow, for this purpose.
We are using the Theano backend for this particular exercise.
*note: TensorFlow is also supported (as an alternative to Theano), but we stick with Theano to keep it simple. The main difference is that you'll need to reshape the data slightly differently before feeding it to your network.
Step 1: Import the packages
End of explanation
# Load MNSIT data
from keras.datasets import mnist
# Load pre-shuffled MNIST data into train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
Explanation: Step 2: Load image data from MNIST.
MNIST is a great dataset for getting started with deep learning and computer vision. It's a big enough challenge to warrant neural networks, but it's manageable on a single computer
End of explanation
print(x_train.shape)
# (60000, 28, 28)
num_classes = 10
# Plot a few example images
plt.subplot(221)
plt.imshow(x_train[0], cmap=plt.get_cmap('gray'))
plt.subplot(222)
plt.imshow(x_train[1], cmap=plt.get_cmap('gray'))
plt.subplot(223)
plt.imshow(x_train[2], cmap=plt.get_cmap('gray'))
plt.subplot(224)
plt.imshow(x_train[3], cmap=plt.get_cmap('gray'))
# show the plot
plt.show()
Explanation: We can look at the shape of the dataset:
End of explanation
# input image dimensions
img_rows, img_cols = 28, 28
# Reshape input data
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
# To confirm, we can print X_train's dimensions again:
print (x_train.shape)
Explanation: Step 3: Preprocess input data for Keras.
When using the Theano backend, you must explicitly declare a dimension for the depth of the input image. For example, a full-color image with all 3 RGB channels will have a depth of 3.
Our MNIST images only have a depth of 1, but we must explicitly declare that.
In other words, we want to transform our dataset to (n, depth, width, height).
Here's how we can do that easily:
End of explanation
# Convert data types and normalize values
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
Explanation: The final preprocessing step for the input data is to convert our data type to float32 and normalize our data values to the range [0, 1].
End of explanation
print(y_train.shape)
Explanation: Step 4: Preprocess class labels for Keras
Next, let's take a look at the shape of our class label data:
End of explanation
print(y_train[:10])
# Convert 1-dimensional class arrays to 10-dimensional class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
Explanation: That may be problematic. We should have 10 different classes, one for each digit, but it looks like we only have a 1-dimensional array. Let's take a look at the labels for the first 10 training samples:
End of explanation
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Check model output shape
print(model.output_shape)
Explanation: Step 5: Define model architecture
We are going to use a simple architecture. There are multiple options for an architecture.
End of explanation
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
Explanation: Step 6: Compile the model
We just need to compile the model and we'll be ready to train it. When we compile the model, we declare the loss function and the optimizer (SGD, Adam, etc.).
End of explanation
batch_size = 128
epochs = 2
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
verbose=1, validation_data=(x_test, y_test))
Explanation: Step 7: Fit model on training data
To fit the model, all we have to do is declare the batch size and number of epochs to train for, then pass in our training data.
End of explanation
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Step 8: Evaluate model on test data
End of explanation |
13,591 | Given the following text description, write Python code to implement the functionality described.
Description:
Find 2 ^ ( 2 ^ A ) % B
Function to return 2 ^ ( 2 ^ A ) % B ; Base case , 2 ^ ( 2 ^ 1 ) % B = 4 % B ; Driver code ; Print 2 ^ ( 2 ^ A ) % B
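The identity being exploited is, in LaTeX form, $2^{2^{A}} \bmod B = \big((2^{2^{A-1}} \bmod B)^{2}\big) \bmod B$, with base case $2^{2^{1}} \bmod B = 4 \bmod B$, so each level of the recursion simply squares the previous remainder.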
| Python Code:
def F(A, B):
    # 2^(2^A) mod B via repeated squaring: square the previous level's remainder
    if A == 1:
        return 4 % B
    temp = F(A - 1, B)
    return (temp * temp) % B


A = 25
B = 50
print(F(A, B))
|
13,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load a neuroimaging dataset
Step1: Advanced visualization with nilearn
Step2: Processing data with nilearn
Step3: Check nilearn website
http | Python Code:
import nibabel as nib
fp = '/home/grg/upf/Data/RM/rch2.nii'
im = nib.load(fp)
print(im.header.keys())
print(im.header['pixdim'])
%matplotlib inline
im.orthoview()
from matplotlib import pyplot as plt
import numpy as np
d_t1 = np.array(im.dataobj)
d2 = d_t1[35,:,:]
plt.imshow(d2)
fp2 = '/home/grg/upf/Data/RM/rch2_wm.nii'
wm = nib.load(fp2)
d_wm = np.array(wm.dataobj)
wm.orthoview()
fp3 = '/home/grg/upf/Data/Control/norm2ROI2wscan001.nii'
im2 = nib.load(fp3)
#pet1.orthoview()
pet1 = np.array(im2.dataobj)
m = pet1[d_wm>0.8].mean()
print(m)
plt.hist(pet1[d_wm>0.8])
d_t1[d_wm>0.8] = 0
plt.imshow(d2) # d2 is the sagittal slice
#test = nib.Nifti1Image(d_t1, im.affine)
#test.to_filename('/tmp/test.nii.gz')
Explanation: Load a neuroimaging dataset
End of explanation
from nilearn import plotting
# The plotting module exposes many viewers, e.g. plot_anat, plot_epi, plot_stat_map, plot_glass_brain, plot_roi
plotting.plot_glass_brain(im) #, colorbar=True, title='PET exam', black_bg=True, threshold=260)
plotting.plot_stat_map(im2, bg_img=im, threshold=260)
plotting.plot_stat_map(im2, bg_img=im, threshold=260, display_mode='z', cut_coords=range(-10,10,2))
roi_fp = '/home/grg/spm/ROIapoE/ROI_DARTEL/csf5/rois.nii.gz'
plotting.plot_roi(roi_fp)
Explanation: Advanced visualization with nilearn
End of explanation
from nilearn import image
# The image module provides processing utilities, e.g. smooth_img, mean_img, resample_img
Explanation: Processing data with nilearn
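For example, a couple of commonly used helpers (illustrative; the 6 mm smoothing kernel below is an arbitrary choice):
smoothed = image.smooth_img(fp3, fwhm=6)   # Gaussian-smooth the PET volume
print(smoothed.shape)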
End of explanation
import nipype
Explanation: Check nilearn website
http://nilearn.github.io/modules/reference.html#module-nilearn.image
http://nilearn.github.io/auto_examples/04_manipulating_images/plot_roi_extraction.html#sphx-glr-auto-examples-04-manipulating-images-plot-roi-extraction-py
http://nilearn.github.io/auto_examples/04_manipulating_images/plot_extract_rois_statistical_maps.html#sphx-glr-auto-examples-04-manipulating-images-plot-extract-rois-statistical-maps-py
Interfacing neuroimaging software
End of explanation |
13,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: SAC minitaur with the Actor-Learner API
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Setup
First we will import the different tools that we need.
Step3: Hyperparameters
Step4: Environment
Environments in RL represent the task or problem that we are trying to solve. Standard environments can be easily created in TF-Agents using suites. We have different suites for loading environments from sources such as the OpenAI Gym, Atari, DM Control, etc., given a string environment name.
Now let's load the Minitaur environment from the Pybullet suite.
Step5: In this environment the goal is for the agent to train a policy that will control the Minitaur robot and have it move forward as fast as possible. Episodes last 1000 steps and the return will be the sum of rewards throughout the episode.
Let's look at the information the environment provides as an observation which the policy will use to generate actions.
Step6: The observation is fairly complex. We receive 28 values representing the angles, velocities, and torques for all the motors. In return the environment expects 8 values for the actions between [-1, 1]. These are the desired motor angles.
Usually we create two environments
Step7: Distribution Strategy
We use the DistributionStrategy API to enable running the train step computation across multiple devices such as multiple GPUs or TPUs using data parallelism. The train step
Step8: All variables and Agents need to be created under strategy.scope(), as you'll see below.
Agent
To create an SAC Agent, we first need to create the networks that it will train. SAC is an actor-critic agent, so we will need two networks.
The critic will give us value estimates for Q(s,a). That is, it will receive as input an observation and an action, and it will give us an estimate of how good that action was for the given state.
Step9: We will use this critic to train an actor network which will allow us to generate actions given an observation.
The ActorNetwork will predict parameters for a tanh-squashed MultivariateNormalDiag distribution. This distribution will then be sampled, conditioned on the current observation, whenever we need to generate actions.
Step10: With these networks at hand we can now instantiate the agent.
Step11: Replay Buffer
In order to keep track of the data collected from the environment, we will use Reverb, an efficient, extensible, and easy-to-use replay system by Deepmind. It stores experience data collected by the Actors and consumed by the Learner during training.
In this tutorial, the rate-limiter setting is less important than max_size -- but in a distributed setting with async collection and training, you will probably want to experiment with rate_limiters.SampleToInsertRatio, using a samples_per_insert somewhere between 2 and 1000. For example
Step12: The replay buffer is constructed using specs describing the tensors that are to be stored, which can be obtained from the agent using tf_agent.collect_data_spec.
Since the SAC Agent needs both the current and next observation to compute the loss, we set sequence_length=2.
Step13: Now we generate a TensorFlow dataset from the Reverb replay buffer. We will pass this to the Learner to sample experiences for training.
Step14: Policies
In TF-Agents, policies represent the standard notion of policies in RL
Step15: Policies can be created independently of agents. For example, use tf_agents.policies.random_py_policy to create a policy which will randomly select an action for each time_step.
Step16: Actors
The actor manages interactions between a policy and an environment.
* The Actor components contain an instance of the environment (as py_environment) and a copy of the policy variables.
* Each Actor worker runs a sequence of data collection steps given the local values of the policy variables.
* Variable updates are done explicitly using the variable container client instance in the training script before calling actor.run().
* The observed experience is written into the replay buffer in each data collection step.
As the Actors run data collection steps, they pass trajectories of (state, action, reward) to the observer, which caches and writes them to the Reverb replay system.
We're storing trajectories for frames [(t0,t1) (t1,t2) (t2,t3), ...] because stride_length=1.
Step17: We create an Actor with the random policy and collect experiences to seed the replay buffer with.
Step18: Instantiate an Actor with the collect policy to gather more experiences during training.
Step19: Create an Actor which will be used to evaluate the policy during training. We pass in actor.eval_metrics(num_eval_episodes) to log metrics later.
Step20: Learners
The Learner component contains the agent and performs gradient step updates to the policy variables using experience data from the replay buffer. After one or more training steps, the Learner can push a new set of variable values to the variable container.
Step21: Metrics and Evaluation
We instantiated the eval Actor with actor.eval_metrics above, which creates most commonly used metrics during policy evaluation
Step22: Check out the metrics module for other standard implementations of different metrics.
Training the agent
The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.
Step23: Visualization
Plots
We can plot average return vs global steps to see the performance of our agent. In Minitaur, the reward function is based on how far the minitaur walks in 1000 steps and penalizes the energy expenditure.
Step25: Videos
It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
Step26: The following code visualizes the agent's policy for a few episodes | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg
!pip install 'imageio==2.4.0'
!pip install matplotlib
!pip install tf-agents[reverb]
!pip install pybullet
Explanation: SAC minitaur with the Actor-Learner API
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/7_SAC_minitaur_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/7_SAC_minitaur_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/7_SAC_minitaur_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/7_SAC_minitaur_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Introduction
This example shows how to train a Soft Actor Critic agent on the Minitaur environment.
If you've worked through the DQN Colab this should feel very familiar. Notable changes include:
Changing the agent from DQN to SAC.
Training on Minitaur which is a much more complex environment than CartPole. The Minitaur environment aims to train a quadruped robot to move forward.
Using the TF-Agents Actor-Learner API for distributed Reinforcement Learning.
The API supports both distributed data collection using an experience replay buffer and variable container (parameter server) and distributed training across multiple devices. The API is designed to be very simple and modular. We utilize Reverb for both replay buffer and variable container and TF DistributionStrategy API for distributed training on GPUs and TPUs.
If you haven't installed the following dependencies, run:
End of explanation
import base64
import imageio
import IPython
import matplotlib.pyplot as plt
import os
import reverb
import tempfile
import PIL.Image
import tensorflow as tf
from tf_agents.agents.ddpg import critic_network
from tf_agents.agents.sac import sac_agent
from tf_agents.agents.sac import tanh_normal_projection_network
from tf_agents.environments import suite_pybullet
from tf_agents.metrics import py_metrics
from tf_agents.networks import actor_distribution_network
from tf_agents.policies import greedy_policy
from tf_agents.policies import py_tf_eager_policy
from tf_agents.policies import random_py_policy
from tf_agents.replay_buffers import reverb_replay_buffer
from tf_agents.replay_buffers import reverb_utils
from tf_agents.train import actor
from tf_agents.train import learner
from tf_agents.train import triggers
from tf_agents.train.utils import spec_utils
from tf_agents.train.utils import strategy_utils
from tf_agents.train.utils import train_utils
tempdir = tempfile.gettempdir()
Explanation: Setup
First we will import the different tools that we need.
End of explanation
env_name = "MinitaurBulletEnv-v0" # @param {type:"string"}
# Use "num_iterations = 1e6" for better results (2 hrs)
# 1e5 is just so this doesn't take too long (1 hr)
num_iterations = 100000 # @param {type:"integer"}
initial_collect_steps = 10000 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
replay_buffer_capacity = 10000 # @param {type:"integer"}
batch_size = 256 # @param {type:"integer"}
critic_learning_rate = 3e-4 # @param {type:"number"}
actor_learning_rate = 3e-4 # @param {type:"number"}
alpha_learning_rate = 3e-4 # @param {type:"number"}
target_update_tau = 0.005 # @param {type:"number"}
target_update_period = 1 # @param {type:"number"}
gamma = 0.99 # @param {type:"number"}
reward_scale_factor = 1.0 # @param {type:"number"}
actor_fc_layer_params = (256, 256)
critic_joint_fc_layer_params = (256, 256)
log_interval = 5000 # @param {type:"integer"}
num_eval_episodes = 20 # @param {type:"integer"}
eval_interval = 10000 # @param {type:"integer"}
policy_save_interval = 5000 # @param {type:"integer"}
Explanation: Hyperparameters
End of explanation
env = suite_pybullet.load(env_name)
env.reset()
PIL.Image.fromarray(env.render())
Explanation: Environment
Environments in RL represent the task or problem that we are trying to solve. Standard environments can be easily created in TF-Agents using suites. We have different suites for loading environments from sources such as the OpenAI Gym, Atari, DM Control, etc., given a string environment name.
Now let's load the Minitaur environment from the Pybullet suite.
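Other suites follow the same pattern; for example (illustrative, and it requires the corresponding backend to be installed):
from tf_agents.environments import suite_gym
cartpole_env = suite_gym.load('CartPole-v0')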
End of explanation
print('Observation Spec:')
print(env.time_step_spec().observation)
print('Action Spec:')
print(env.action_spec())
Explanation: In this environment the goal is for the agent to train a policy that will control the Minitaur robot and have it move forward as fast as possible. Episodes last 1000 steps and the return will be the sum of rewards throughout the episode.
Let's look at the information the environment provides as an observation which the policy will use to generate actions.
End of explanation
collect_env = suite_pybullet.load(env_name)
eval_env = suite_pybullet.load(env_name)
Explanation: The observation is fairly complex. We receive 28 values representing the angles, velocities, and torques for all the motors. In return the environment expects 8 values for the actions between [-1, 1]. These are the desired motor angles.
Usually we create two environments: one for collecting data during training and one for evaluation. The environments are written in pure python and use numpy arrays, which the Actor Learner API directly consumes.
End of explanation
use_gpu = True #@param {type:"boolean"}
strategy = strategy_utils.get_strategy(tpu=False, use_gpu=use_gpu)
Explanation: Distribution Strategy
We use the DistributionStrategy API to enable running the train step computation across multiple devices such as multiple GPUs or TPUs using data parallelism. The train step:
* Receives a batch of training data
* Splits it across the devices
* Computes the forward step
* Aggregates and computes the MEAN of the loss
* Computes the backward step and performs a gradient variable update
With TF-Agents Learner API and DistributionStrategy API it is quite easy to switch between running the train step on GPUs (using MirroredStrategy) to TPUs (using TPUStrategy) without changing any of the training logic below.
Enabling the GPU
If you want to try running on a GPU, you'll first need to enable GPUs for the notebook:
Navigate to Edit→Notebook Settings
Select GPU from the Hardware Accelerator drop-down
Picking a strategy
Use strategy_utils to generate a strategy. Under the hood, passing the parameter:
* use_gpu = False returns tf.distribute.get_strategy(), which uses CPU
* use_gpu = True returns tf.distribute.MirroredStrategy(), which uses all GPUs that are visible to TensorFlow on one machine
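As a quick sanity check of what the chosen strategy will drive (illustrative):
print('Replicas in sync:', strategy.num_replicas_in_sync)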
End of explanation
observation_spec, action_spec, time_step_spec = (
spec_utils.get_tensor_specs(collect_env))
with strategy.scope():
critic_net = critic_network.CriticNetwork(
(observation_spec, action_spec),
observation_fc_layer_params=None,
action_fc_layer_params=None,
joint_fc_layer_params=critic_joint_fc_layer_params,
kernel_initializer='glorot_uniform',
last_kernel_initializer='glorot_uniform')
Explanation: All variables and Agents need to be created under strategy.scope(), as you'll see below.
Agent
To create an SAC Agent, we first need to create the networks that it will train. SAC is an actor-critic agent, so we will need two networks.
The critic will give us value estimates for Q(s,a). That is, it will receive as input an observation and an action, and it will give us an estimate of how good that action was for the given state.
End of explanation
with strategy.scope():
actor_net = actor_distribution_network.ActorDistributionNetwork(
observation_spec,
action_spec,
fc_layer_params=actor_fc_layer_params,
continuous_projection_net=(
tanh_normal_projection_network.TanhNormalProjectionNetwork))
Explanation: We will use this critic to train an actor network which will allow us to generate actions given an observation.
The ActorNetwork will predict parameters for a tanh-squashed MultivariateNormalDiag distribution. This distribution will then be sampled, conditioned on the current observation, whenever we need to generate actions.
End of explanation
with strategy.scope():
train_step = train_utils.create_train_step()
tf_agent = sac_agent.SacAgent(
time_step_spec,
action_spec,
actor_network=actor_net,
critic_network=critic_net,
actor_optimizer=tf.keras.optimizers.Adam(
learning_rate=actor_learning_rate),
critic_optimizer=tf.keras.optimizers.Adam(
learning_rate=critic_learning_rate),
alpha_optimizer=tf.keras.optimizers.Adam(
learning_rate=alpha_learning_rate),
target_update_tau=target_update_tau,
target_update_period=target_update_period,
td_errors_loss_fn=tf.math.squared_difference,
gamma=gamma,
reward_scale_factor=reward_scale_factor,
train_step_counter=train_step)
tf_agent.initialize()
Explanation: With these networks at hand we can now instantiate the agent.
End of explanation
table_name = 'uniform_table'
table = reverb.Table(
table_name,
max_size=replay_buffer_capacity,
sampler=reverb.selectors.Uniform(),
remover=reverb.selectors.Fifo(),
rate_limiter=reverb.rate_limiters.MinSize(1))
reverb_server = reverb.Server([table])
Explanation: Replay Buffer
In order to keep track of the data collected from the environment, we will use Reverb, an efficient, extensible, and easy-to-use replay system by Deepmind. It stores experience data collected by the Actors and consumed by the Learner during training.
In this tutorial, the rate-limiter setting is less important than max_size -- but in a distributed setting with async collection and training, you will probably want to experiment with rate_limiters.SampleToInsertRatio, using a samples_per_insert somewhere between 2 and 1000. For example:
rate_limiter=reverb.rate_limiters.SampleToInsertRatio(samples_per_insert=3.0, min_size_to_sample=3, error_buffer=3.0)
End of explanation
reverb_replay = reverb_replay_buffer.ReverbReplayBuffer(
tf_agent.collect_data_spec,
sequence_length=2,
table_name=table_name,
local_server=reverb_server)
Explanation: The replay buffer is constructed using specs describing the tensors that are to be stored, which can be obtained from the agent using tf_agent.collect_data_spec.
Since the SAC Agent needs both the current and next observation to compute the loss, we set sequence_length=2.
End of explanation
dataset = reverb_replay.as_dataset(
sample_batch_size=batch_size, num_steps=2).prefetch(50)
experience_dataset_fn = lambda: dataset
Explanation: Now we generate a TensorFlow dataset from the Reverb replay buffer. We will pass this to the Learner to sample experiences for training.
End of explanation
tf_eval_policy = tf_agent.policy
eval_policy = py_tf_eager_policy.PyTFEagerPolicy(
tf_eval_policy, use_tf_function=True)
tf_collect_policy = tf_agent.collect_policy
collect_policy = py_tf_eager_policy.PyTFEagerPolicy(
tf_collect_policy, use_tf_function=True)
Explanation: Policies
In TF-Agents, policies represent the standard notion of policies in RL: given a time_step produce an action or a distribution over actions. The main method is policy_step = policy.action(time_step) where policy_step is a named tuple PolicyStep(action, state, info). The policy_step.action is the action to be applied to the environment, state represents the state for stateful (RNN) policies and info may contain auxiliary information such as log probabilities of the actions.
Agents contain two policies:
agent.policy — The main policy that is used for evaluation and deployment.
agent.collect_policy — A second policy that is used for data collection.
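For illustration, a single step of the evaluation policy might look like this (a sketch; it assumes the cells above have been run):
time_step = eval_env.reset()
policy_step = eval_policy.action(time_step)
print(policy_step.action)   # 8 desired motor angles in [-1, 1]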
End of explanation
random_policy = random_py_policy.RandomPyPolicy(
collect_env.time_step_spec(), collect_env.action_spec())
Explanation: Policies can be created independently of agents. For example, use tf_agents.policies.random_py_policy to create a policy which will randomly select an action for each time_step.
End of explanation
rb_observer = reverb_utils.ReverbAddTrajectoryObserver(
reverb_replay.py_client,
table_name,
sequence_length=2,
stride_length=1)
Explanation: Actors
The actor manages interactions between a policy and an environment.
* The Actor components contain an instance of the environment (as py_environment) and a copy of the policy variables.
* Each Actor worker runs a sequence of data collection steps given the local values of the policy variables.
* Variable updates are done explicitly using the variable container client instance in the training script before calling actor.run().
* The observed experience is written into the replay buffer in each data collection step.
As the Actors run data collection steps, they pass trajectories of (state, action, reward) to the observer, which caches and writes them to the Reverb replay system.
We're storing trajectories for frames [(t0,t1) (t1,t2) (t2,t3), ...] because stride_length=1.
End of explanation
initial_collect_actor = actor.Actor(
collect_env,
random_policy,
train_step,
steps_per_run=initial_collect_steps,
observers=[rb_observer])
initial_collect_actor.run()
Explanation: We create an Actor with the random policy and collect experiences to seed the replay buffer with.
End of explanation
env_step_metric = py_metrics.EnvironmentSteps()
collect_actor = actor.Actor(
collect_env,
collect_policy,
train_step,
steps_per_run=1,
metrics=actor.collect_metrics(10),
summary_dir=os.path.join(tempdir, learner.TRAIN_DIR),
observers=[rb_observer, env_step_metric])
Explanation: Instantiate an Actor with the collect policy to gather more experiences during training.
End of explanation
eval_actor = actor.Actor(
eval_env,
eval_policy,
train_step,
episodes_per_run=num_eval_episodes,
metrics=actor.eval_metrics(num_eval_episodes),
summary_dir=os.path.join(tempdir, 'eval'),
)
Explanation: Create an Actor which will be used to evaluate the policy during training. We pass in actor.eval_metrics(num_eval_episodes) to log metrics later.
End of explanation
saved_model_dir = os.path.join(tempdir, learner.POLICY_SAVED_MODEL_DIR)
# Triggers to save the agent's policy checkpoints.
learning_triggers = [
triggers.PolicySavedModelTrigger(
saved_model_dir,
tf_agent,
train_step,
interval=policy_save_interval),
triggers.StepPerSecondLogTrigger(train_step, interval=1000),
]
agent_learner = learner.Learner(
tempdir,
train_step,
tf_agent,
experience_dataset_fn,
triggers=learning_triggers,
strategy=strategy)
Explanation: Learners
The Learner component contains the agent and performs gradient step updates to the policy variables using experience data from the replay buffer. After one or more training steps, the Learner can push a new set of variable values to the variable container.
End of explanation
def get_eval_metrics():
eval_actor.run()
results = {}
for metric in eval_actor.metrics:
results[metric.name] = metric.result()
return results
metrics = get_eval_metrics()
def log_eval_metrics(step, metrics):
eval_results = (', ').join(
'{} = {:.6f}'.format(name, result) for name, result in metrics.items())
print('step = {0}: {1}'.format(step, eval_results))
log_eval_metrics(0, metrics)
Explanation: Metrics and Evaluation
We instantiated the eval Actor with actor.eval_metrics above, which creates the most commonly used metrics during policy evaluation:
* Average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes.
* Average episode length.
We run the Actor to generate these metrics.
End of explanation
#@test {"skip": true}
try:
%%time
except:
pass
# Reset the train step
tf_agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = get_eval_metrics()["AverageReturn"]
returns = [avg_return]
for _ in range(num_iterations):
# Training.
collect_actor.run()
loss_info = agent_learner.run(iterations=1)
# Evaluating.
step = agent_learner.train_step_numpy
if eval_interval and step % eval_interval == 0:
metrics = get_eval_metrics()
log_eval_metrics(step, metrics)
returns.append(metrics["AverageReturn"])
if log_interval and step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, loss_info.loss.numpy()))
rb_observer.close()
reverb_server.stop()
Explanation: Check out the metrics module for other standard implementations of different metrics.
Training the agent
The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.
End of explanation
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim()
Explanation: Visualization
Plots
We can plot average return vs global steps to see the performance of our agent. In Minitaur, the reward function is based on how far the minitaur walks in 1000 steps and penalizes the energy expenditure.
End of explanation
def embed_mp4(filename):
Embeds an mp4 file in the notebook.
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
Explanation: Videos
It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
End of explanation
num_episodes = 3
video_filename = 'sac_minitaur.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_env.render())
while not time_step.is_last():
action_step = eval_actor.policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_env.render())
embed_mp4(video_filename)
Explanation: The following code visualizes the agent's policy for a few episodes:
End of explanation |
13,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Constrained Local Models - Basics
The aim of this notebook is to showcase how one can build and fit CLMs to images using menpofit.
Note that this notebook assumes that the user has previously gone through the AAMs Basics notebook and he/she is already familiar with the basics of Menpo's Deformable Model Fitting framework explained in there.
1. Loading data
Step1: 2. Build a CLM with default parameters
Building a CLM using Menpo can be done using a single line of code.
Step2: 3. Fit the previous CLM
In Menpo, CLMs can be fitted to images by creating Fitter objects around them.
One of the most popular algorithms for fitting CLMs is the Regularized Landmark Mean-Shift algorithm. In order to fit our CLM using this algorithm using Menpo, the user needs to define a GradientDescentCLMFitter object. This can be done again using a single line of code!!!
Step3: Fitting a GradientDescentCLMFitter to an image is as simple as calling its fit method. Let's try it by fitting some images of the LFPW database test set!!!
Step4: Note that for the purpose of this simple fitting demonstration we will just fit the first 5 images of the LFPW test set. | Python Code:
%matplotlib inline
from pathlib import Path
path_to_lfpw = Path('/vol/atlas/databases/lfpw')
import menpo.io as mio
training_images = []
# load landmarked images
for i in mio.import_images(path_to_lfpw / 'trainset', verbose=True):
# crop image
i = i.crop_to_landmarks_proportion(0.1)
# convert it to grayscale if needed
if i.n_channels == 3:
i = i.as_greyscale(mode='luminosity')
# append it to the list
training_images.append(i)
from menpowidgets import visualize_images
visualize_images(training_images)
Explanation: Constrained Local Models - Basics
The aim of this notebook is to showcase how one can build and fit CLMs to images using menpofit.
Note that this notebook assumes that the user has previously gone through the AAMs Basics notebook and he/she is already familiar with the basics of Menpo's Deformable Model Fitting framework explained in there.
1. Loading data
End of explanation
from menpofit.clm import CLM
clm = CLM(
training_images,
verbose=True,
group='PTS',
diagonal=200
)
print(clm)
clm.view_clm_widget()
Explanation: 2. Build a CLM with default parameters
Building a CLM using Menpo can be done using a single line of code.
End of explanation
from menpofit.clm import GradientDescentCLMFitter
fitter = GradientDescentCLMFitter(clm, n_shape=[6, 12])
Explanation: 3. Fit the previous CLM
In Menpo, CLMs can be fitted to images by creating Fitter objects around them.
One of the most popular algorithms for fitting CLMs is the Regularized Landmark Mean-Shift algorithm. In order to fit our CLM using this algorithm using Menpo, the user needs to define a GradientDescentCLMFitter object. This can be done again using a single line of code!!!
End of explanation
import menpo.io as mio
# load test images
test_images = []
for i in mio.import_images(path_to_lfpw / 'testset', max_images=5, verbose=True):
# crop image
i = i.crop_to_landmarks_proportion(0.5)
# convert it to grayscale if needed
if i.n_channels == 3:
i = i.as_greyscale(mode='luminosity')
# append it to the list
test_images.append(i)
Explanation: Fitting a GradientDescentCLMFitter to an image is as simple as calling its fit_from_shape method. Let's try it by fitting some images of the LFPW database test set!!!
End of explanation
from menpofit.fitter import noisy_shape_from_bounding_box
fitting_results = []
for i in test_images:
gt_s = i.landmarks['PTS'].lms
# generate perturbed landmarks
s = noisy_shape_from_bounding_box(gt_s, gt_s.bounding_box())
# fit image
fr = fitter.fit_from_shape(i, s, gt_shape=gt_s)
fitting_results.append(fr)
# print fitting error
print(fr)
from menpowidgets import visualize_fitting_result
visualize_fitting_result(fitting_results)
Explanation: Note that for the purpose of this simple fitting demonstration we will just fit the first 5 images of the LFPW test set.
End of explanation |
13,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: Problem set #3
Step11: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint
Step12: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step13: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step14: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step15: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step16: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output | Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
Explanation: Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
raw_numbers = numbers_str.split(",")
numbers_list = [int(x) for x in raw_numbers]
max(numbers_list)
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
sorted(numbers_list)[-10:]
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
sorted(x for x in numbers_list if x%3==0)
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
from math import sqrt
[sqrt(x) for x in numbers_list if x < 100]
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
[x['name'] for x in planets if x['diameter']>4]
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
sum(x['mass'] for x in planets)
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
[x['name'] for x in planets if 'giant' in x['type']]
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
[x['name'] for x in sorted(planets, key=lambda x:x['moons'])] #can't sort a dictionary, sort the dictionary by the number of moons
def get_moon_count(d):
return d['moons']
[x['name'] for x in sorted(planets, key=get_moon_count)]
#sort the dictionary by reverse order of the diameter:
[x['name'] for x in sorted(planets, key=lambda d:d['diameter'],reverse=True)]
[x['name'] for x in \
sorted(planets, key=lambda d:d['diameter'], reverse=True) \
if x['diameter'] >4]
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
End of explanation
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
Explanation: Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
[line for line in poem_lines if re.search(r"\b\w{4}\s\w{4}\b",line)]
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
[line for line in poem_lines if re.search(r"\b\w{5}\b.?$",line)]
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
all_lines = " ".join(poem_lines)
all_lines
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
re.findall(r"I (\w+)", all_lines)
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
menu = []
for item in entrees:
pass # replace 'pass' with your code
menu
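# One possible way to fill in the loop above (a sketch, not the only solution; it
# assumes every line ends with a price and an optional " - v" flag):
menu_sketch = []
for item in entrees:
    m = re.search(r"(.*)\$(\d+\.\d+)( - v)?$", item)
    menu_sketch.append({'name': m.group(1),                    # text before the price
                        'price': float(m.group(2)),            # price as a float
                        'vegetarian': m.group(3) is not None}) # True when " - v" is present
menu_sketch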
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation |
13,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http
Step1: Data Preparation
Step2: We are getting a dataset of dataset_size sequences of integers of length seq_len between 0 and max_num. We use split*100% of them for training and the rest for testing.
For example
Step3: For the purpose of training, we encode the input as characters rather than numbers
Step4: We write a transform that will convert our numbers into text of maximum length max_len, and one-hot encode the characters.
For example
Step5: Creating the network
Step6: We use a learning rate schedule to improve the convergence of the model
Step7: Training loop
Step8: Testing
We get a random element from the testing set
Step9: Printing the result
Step10: We can also pick our own example, and the network manages to sort it without problem
Step11: The model has even learned to generalize to examples not on the training set
Step12: However we can see it has trouble with other edge cases | Python Code:
import random
import string
import mxnet as mx
from mxnet import gluon, np
import numpy as onp
Explanation: Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
Using a bi-lstm to sort a sequence of integers
End of explanation
max_num = 999
dataset_size = 60000
seq_len = 5
split = 0.8
batch_size = 512
ctx = mx.gpu() if mx.device.num_gpus() > 0 else mx.cpu()
Explanation: Data Preparation
End of explanation
X = mx.np.random.uniform(low=0, high=max_num, size=(dataset_size, seq_len)).astype('int32').asnumpy()
Y = X.copy()
Y.sort() #Let's sort X to get the target
print("Input {}\nTarget {}".format(X[0].tolist(), Y[0].tolist()))
Explanation: We are getting a dataset of dataset_size sequences of integers of length seq_len between 0 and max_num. We use split*100% of them for training and the rest for testing.
For example:
50 10 200 999 30
Should return
10 30 50 200 999
End of explanation
vocab = string.digits + " "
print(vocab)
vocab_idx = { c:i for i,c in enumerate(vocab)}
print(vocab_idx)
Explanation: For the purpose of training, we encode the input as characters rather than numbers
End of explanation
max_len = len(str(max_num))*seq_len+(seq_len-1)
print("Maximum length of the string: %s" % max_len)
def transform(x, y):
x_string = ' '.join(map(str, x.tolist()))
x_string_padded = x_string + ' '*(max_len-len(x_string))
x = [vocab_idx[c] for c in x_string_padded]
y_string = ' '.join(map(str, y.tolist()))
y_string_padded = y_string + ' '*(max_len-len(y_string))
y = [vocab_idx[c] for c in y_string_padded]
return mx.npx.one_hot(mx.nd.array(x), len(vocab)), mx.np.array(y)
split_idx = int(split*len(X))
train_dataset = gluon.data.ArrayDataset(X[:split_idx], Y[:split_idx]).transform(transform)
test_dataset = gluon.data.ArrayDataset(X[split_idx:], Y[split_idx:]).transform(transform)
print("Input {}".format(X[0]))
print("Transformed data Input {}".format(train_dataset[0][0]))
print("Target {}".format(Y[0]))
print("Transformed data Target {}".format(train_dataset[0][1]))
train_data = gluon.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=20, last_batch='rollover')
test_data = gluon.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, num_workers=5, last_batch='rollover')
Explanation: We write a transform that will convert our numbers into text of maximum length max_len, and one-hot encode the characters.
For example:
"30 10" corresponding indices are [3, 0, 10, 1, 0]
We then one hot encode that and get a matrix representation of our input. We don't need to encode our target as the loss we are going to use support sparse labels
End of explanation
net = gluon.nn.HybridSequential()
net.add(
gluon.rnn.LSTM(hidden_size=128, num_layers=2, layout='NTC', bidirectional=True),
gluon.nn.Dense(len(vocab), flatten=False)
)
net.initialize(mx.init.Xavier(), ctx=ctx)
loss = gluon.loss.SoftmaxCELoss()
Explanation: Creating the network
End of explanation
schedule = mx.lr_scheduler.FactorScheduler(step=len(train_data)*10, factor=0.75)
schedule.base_lr = 0.01
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate':0.01, 'lr_scheduler':schedule})
Explanation: We use a learning rate schedule to improve the convergence of the model
End of explanation
epochs = 100
for e in range(epochs):
epoch_loss = 0.
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(ctx)
label = label.as_in_context(ctx)
with mx.autograd.record():
output = net(data)
l = loss(output, label)
l.backward()
trainer.step(data.shape[0])
epoch_loss += l.mean()
print("Epoch [{}] Loss: {}, LR {}".format(e, epoch_loss.item()/(i+1), trainer.learning_rate))
Explanation: Training loop
End of explanation
n = random.randint(0, len(test_data)-1)
x_orig = X[split_idx+n]
y_orig = Y[split_idx+n]
def get_pred(x):
x, _ = transform(x, x)
output = net(mx.np.expand_dims(x.to_device(ctx), axis=0))
# Convert output back to string
pred = ''.join([vocab[int(o)] for o in output[0].argmax(axis=1).asnumpy().tolist()])
return pred
Explanation: Testing
We get a random element from the testing set
End of explanation
x_ = ' '.join(map(str,x_orig))
label = ' '.join(map(str,y_orig))
print("X {}\nPredicted {}\nLabel {}".format(x_, get_pred(x_orig), label))
Explanation: Printing the result
End of explanation
print(get_pred(onp.array([500, 30, 999, 10, 130])))
Explanation: We can also pick our own example, and the network manages to sort it without problem:
End of explanation
print("Only four numbers:", get_pred(onp.array([105, 302, 501, 202])))
Explanation: The model has even learned to generalize to examples not on the training set
End of explanation
print("Small digits:", get_pred(onp.array([10, 3, 5, 2, 8])))
print("Small digits, 6 numbers:", get_pred(onp.array([10, 33, 52, 21, 82, 10])))
Explanation: However we can see it has trouble with other edge cases:
End of explanation |
13,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Shunting Yard Algorithm (Operator Precedence Parsing)
The function $\texttt{toInt}(s)$ tries to convert the string $s$ to an integer. If this works out, the integer is returned. Otherwise, the string $s$ is returned unchanged.
Step1: The module re provides support for <a href='https
Step2: The function $\texttt{tokenize}(s)$ takes a string $s$ representing an arithmetic expression and splits this string into a list of tokens.
The string regExp in the implementation below is interpreted as follows
Step3: Given an operator $o$, the expression $\texttt{precedence}(o)$ returns the precedence of the operator
$o$. If $o_1$ and $o_2$ are different operators and the <em style="color
Step4: The expression isLeftAssociative}(o) is True iff the operator $o$
associates to the left. If $o$ associates to the right, this expression is False.
If the operator $o$ is unknown, evaluation of the expression results
in an error.
Step5: The function evalBefore(o1, o2) receives to strings representing arithmetical operators. It returns True if the operator $o_1$ should be evaluated before the operator $o_2$ in an arithmetical expression of the form $a \;\texttt{o}_1\; b \;\texttt{o}_2\; c$. In order to determine whether $o_1$ should be evaluated before $o_2$ it uses the precedence and the associativity of the operators.
Its behavior is specified by the following rules
Step6: The class Calculator supports three member variables
Step7: The method __str__ is used to convert an object of class Calculator to a string.
Step8: The function $\texttt{evaluate}(\texttt{self})$ evaluates the expression that is given by the tokens on the mTokenStack.
There are two phases
Step9: The method $\texttt{popAndEvaluate}(\texttt{self})$ removes the two topmost numbers $\texttt{rhs}$ and $\texttt{lhs}$ from the argument stack and
removes the topmost operator $\texttt{op}$ from the operator stack. It applies the operator $\texttt{op}$ to these numbers
by computing $\texttt{lhs} \;\texttt{op}\; \texttt{rhs}$
and then pushes this value back on the argument stack. | Python Code:
def toInt(s):
try:
return int(s)
except ValueError:
return s
toInt('123')
toInt('**')
Explanation: The Shunting Yard Algorithm (Operator Precedence Parsing)
The function $\texttt{toInt}(s)$ tries to convert the string $s$ to an integer. If this works out, the integer is returned. Otherwise, the string $s$ is returned unchanged.
End of explanation
import re
Explanation: The module re provides support for <a href='https://en.wikipedia.org/wiki/Regular_expression'>regular expressions</a>. These are needed for
<em style="color:blue;">tokenizing</em> a string.
End of explanation
def tokenize(s):
regExp = r'[0-9]+|\*\*|[()+*%/-]'
L = [ toInt(t) for t in re.findall(regExp, s) ]
return list(reversed(L))
re.findall(r'[0-9]+|\*\*|[()+*%/-]', '1 * 2 * 3**4')
tokenize('1 * 2 * 3**4')
Explanation: The function $\texttt{tokenize}(s)$ takes a string $s$ representing an arithmetic expression and splits this string into a list of tokens.
The string regExp in the implementation below is interpreted as follows:
The r in front of the apostrophe ' specifies that the regular expression is defined as a raw string.
In a raw string the backslash does not have to be escaped because it is treated as a literal character.
The regular expression is divided into three parts. These parts are separated by the character |.
[0-9]+ matches a natural number. For example, it matches 0 or 123. It would also match a string like 007.
The + at the end of the substring [0-9]+ specifies that there are any positive number of the characters in the range [0-9].
\*\* matches the operator **.
[()+*/%-] matches a parenthesis or an arithmetical operator. Note that we have
to put the symbol - last in this group as otherwise this symbol would be
interpreted as a range operator.
End of explanation
def precedence(o):
Precedence = { '+': 1, '-': 1, '*': 2, '/': 2, '%': 2, '**' : 3 }
return Precedence[o]
Explanation: Given an operator $o$, the expression $\texttt{precedence}(o)$ returns the precedence of the operator
$o$. If $o_1$ and $o_2$ are different operators and the <em style="color:blue">precedence</em> of $\texttt{o}_1$ is at least as high than the
<em style="color:blue">precedence</em> of $\texttt{o}_2$, then the expression
$$ a \;\texttt{o}_1\; b \;\texttt{o}_2\; c $$
should be evaluated as
$$ (a \;\texttt{o}_1\; b) \;\texttt{o}_2\; c. $$
Otherwise, the expression $a \;\texttt{o}_1\; b \;\texttt{o}_2\; c$ should be evaluated as
$$ a \;\texttt{o}_1\; (b \;\texttt{o}_2\; c). $$
End of explanation
def isLeftAssociative(o):
if o in { '+', '-', '*', '/', '%' }:
return True
if o in { '**' }:
return False
assert False, f'unknown operator {o}'
Explanation: The expression isLeftAssociative}(o) is True iff the operator $o$
associates to the left. If $o$ associates to the right, this expression is False.
If the operator $o$ is unknown, evaluation of the expression results
in an error.
End of explanation
def evalBefore(stackOp, nextOp):
if precedence(stackOp) > precedence(nextOp):
return True
if stackOp == nextOp:
return isLeftAssociative(stackOp)
if precedence(stackOp) == precedence(nextOp) and stackOp != nextOp:
return True
if precedence(stackOp) < precedence(nextOp):
return False
assert False, f'incomplete case distinction in evalBefore({stackOp}, {nextOp})'
%run Stack.ipynb
Explanation: The function evalBefore(o1, o2) receives two strings representing arithmetical operators. It returns True if the operator $o_1$ should be evaluated before the operator $o_2$ in an arithmetical expression of the form $a \;\texttt{o}_1\; b \;\texttt{o}_2\; c$. In order to determine whether $o_1$ should be evaluated before $o_2$ it uses the precedence and the associativity of the operators.
Its behavior is specified by the following rules:
- $\texttt{precedence}(o_1) > \texttt{precedence}(o_2) \rightarrow \texttt{evalBefore}(\texttt{o}_1, \texttt{o}_2) = \texttt{True}$,
- $o_1 = o_2 \rightarrow \texttt{evalBefore}(\texttt{o}_1, \texttt{o}_2) = \texttt{isLeftAssociative}(o_1)$,
- $\texttt{precedence}(o_1) = \texttt{precedence}(o_2) \wedge o_1 \not= o_2 \rightarrow \texttt{evalBefore}(\texttt{o}_1, \texttt{o}_2) = \texttt{True}$,
- $\texttt{precedence}(o_1) < \texttt{precedence}(o_2) \rightarrow \texttt{evalBefore}(\texttt{o}_1, \texttt{o}_2) = \texttt{False}$.
End of explanation
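The following illustrative calls (not part of the original notebook) exercise each of the four rules in turn:
evalBefore('*', '+')    # True:  precedence('*') > precedence('+')
evalBefore('+', '-')    # True:  equal precedence, different operators
evalBefore('**', '**')  # False: '**' is not left-associative
evalBefore('+', '**')   # False: precedence('+') < precedence('**')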
class Calculator:
def __init__(self, s):
self.mTokens = createStack(tokenize(s))
self.mOperators = Stack()
self.mArguments = Stack()
Explanation: The class Calculator supports three member variables:
- the token stack mTokens
- the operator stack mOperators
- the argument stack mArguments
The constructor takes a string that is tokenized and pushes the tokens onto the token stack such that the first token is on top of the token stack.
End of explanation
def toString(self):
return '\n'.join(['_'*50,
'Tokens: ', str(self.mTokens),
'Arguments: ', str(self.mArguments),
'Operators: ', str(self.mOperators),
'_'*50])
Calculator.__str__ = toString
del toString
Calculator.__repr__ = Calculator.__str__
Explanation: The method __str__ is used to convert an object of class Calculator to a string.
End of explanation
def evaluate(self):
while not self.mTokens.isEmpty():
print(self) # only for debugging
nextOp = self.mTokens.top(); self.mTokens.pop()
if isinstance(nextOp, int):
self.mArguments.push(nextOp)
continue
if self.mOperators.isEmpty():
self.mOperators.push(nextOp)
continue
if nextOp == "(":
self.mOperators.push(nextOp)
continue
stackOp = self.mOperators.top()
if stackOp == "(" and nextOp == ")":
self.mOperators.pop()
continue
if nextOp == ")":
self.popAndEvaluate()
self.mTokens.push(nextOp)
continue
if stackOp == '(':
self.mOperators.push(nextOp)
continue
if evalBefore(stackOp, nextOp):
self.popAndEvaluate()
self.mTokens.push(nextOp)
else:
self.mOperators.push(nextOp)
while not self.mOperators.isEmpty():
print(self) # only for debugging
self.popAndEvaluate()
print(self)
return self.mArguments.top()
Calculator.evaluate = evaluate
del evaluate
Explanation: The function $\texttt{evaluate}(\texttt{self})$ evaluates the expression that is given by the tokens on the token stack mTokens.
There are two phases:
1. The first phase is the <em style="color:blue">reading phase</em>. In this phase
the tokens are removed from the token stack mTokens.
2. The second phase is the <em style="color:blue">evaluation phase</em>. In this phase,
the remaining operators on the operator stack mOperators are evaluated. Note that some operators are already
evaluated in the reading phase.
We can describe what happens in the reading phase using
<em style="color:blue">rewrite rules</em> that describe how the three stacks mTokens, mArguments and mOperators
are changed in each step. Here, a step is one iteration of the first while-loop of the function evaluate.
The following rewrite rules are executed until the token stack mTokens is empty.
1. If the token on top of the token stack is an integer, it is removed from the token stack and pushed onto the argument stack.
The operator stack remains unchanged in this case.
$$\begin{array}{lc}
\texttt{mTokens} = \texttt{mTokensRest} + [\texttt{token}] & \wedge \\
\texttt{isInteger}(\texttt{token}) & \Rightarrow \\[0.2cm]
\texttt{mArguments}' = \texttt{mArguments} + [\texttt{token}] & \wedge \\
\texttt{mTokens}' = \texttt{mTokensRest} & \wedge \\
\texttt{mOperators}' = \texttt{mOperators}
\end{array}
$$
Here, the primed variable $\texttt{mArguments}'$ refers to the argument stack after $\texttt{token}$
has been pushed onto it.
In the following rules we implicitly assume that the token on top of the token stack is not an integer but
rather a parenthesis or a proper operator. In order to be more concise, we suppress this precondition from the
following rewrite rules.
2. If the operator stack is empty, the next token is pushed onto the operator stack.
$$\begin{array}{lc}
\texttt{mTokens} = \texttt{mTokensRest} + [\texttt{op}] & \wedge \\
\texttt{mOperators} = [] & \Rightarrow \\[0.2cm]
\texttt{mOperators}' = \texttt{mOperators} + [\texttt{op}] & \wedge \\
\texttt{mTokens}' = \texttt{mTokensRest} & \wedge \\
\texttt{mArguments}' = \texttt{mArguments}
\end{array}
$$
3. If the next token is an opening parenthesis, this parenthesis token is pushed onto the operator stack.
$$\begin{array}{lc}
\texttt{mTokens} = \texttt{mTokensRest} + [\texttt{'('}] & \Rightarrow \\[0.2cm]
\texttt{mOperators}' = \texttt{mOperators} + [\texttt{'('}] & \wedge \\
\texttt{mTokens}' = \texttt{mTokensRest} & \wedge \\
\texttt{mArguments}' = \texttt{mArguments}
\end{array}
$$
4. If the next token is a closing parenthesis and the operator on top of the operator stack is an opening parenthesis, then both
parentheses are removed.
$$\begin{array}{lc}
\texttt{mTokens} = \texttt{mTokensRest} + [\texttt{')'}] & \wedge \\
\texttt{mOperators} = \texttt{mOperatorsRest} + [\texttt{'('}] & \Rightarrow \\[0.2cm]
\texttt{mOperators}' = \texttt{mOperatorsRest} & \wedge \\
\texttt{mTokens}' = \texttt{mTokensRest} & \wedge \\
\texttt{mArguments}' = \texttt{mArguments}
\end{array}
$$
5. If the next token is a closing parenthesis but the operator on top of the operator stack is not an opening parenthesis,
the operator on top of the operator stack is evaluated. Note that the token stack is not changed in this case.
$$\begin{array}{lc}
\texttt{mTokens} = \texttt{mTokensRest} + [\texttt{')'}] & \wedge \\
\texttt{mOperators} = \texttt{mOperatorsRest} + [\texttt{op}] & \wedge \\
\texttt{op} \not= \texttt{'('} & \wedge \\
\texttt{mArguments} = \texttt{mArgumentsRest} + [\texttt{lhs}, \texttt{rhs}] & \Rightarrow \\[0.2cm]
\texttt{mOperators}' = \texttt{mOperatorsRest} & \wedge \\
\texttt{mTokens}' = \texttt{mTokens} & \wedge \\
\texttt{mArguments}' = \texttt{mArgumentsRest} + [\texttt{lhs} \;\texttt{op}\; \texttt{rhs}]
\end{array}
$$
Here, the expression $\texttt{lhs} \;\texttt{op}\; \texttt{rhs}$ denotes evaluating the operator $\texttt{op}$ with the arguments
$\texttt{lhs}$ and $\texttt{rhs}$.
6. If the token on top of the operator stack is an opening parenthesis, then the operator on top of the token stack
is pushed onto the operator stack.
$$\begin{array}{lc}
\texttt{mTokens} = \texttt{mTokensRest} + [\texttt{op}] & \wedge \\
\texttt{op} \not= \texttt{')'} & \wedge \\
\texttt{mOperators} = \texttt{mOperatorsRest} + [\texttt{'('}] & \Rightarrow \\[0.2cm]
\texttt{mOperators}' = \texttt{mOperators} + [\texttt{op}] & \wedge \\
\texttt{mTokens}' = \texttt{mTokensRest} & \wedge \\
\texttt{mArguments}' = \texttt{mArguments}
\end{array}
$$
In the remaining cases neither the token on top of the token stack nor the operator on top of the operator stack can be
a parenthesis. The following rules will implicitly assume that this is the case.
7. If the operator on top of the operator stack needs to be evaluated before the operator on top of the token stack,
the operator on top of the operator stack is evaluated.
$$\begin{array}{lc}
\texttt{mTokens} = \texttt{mTokensRest} + [o_2] & \wedge \\
\texttt{mOperators} = \texttt{mOperatorsRest} + [o_1] & \wedge \\
\texttt{evalBefore}(o_1, o_2) & \wedge \\
\texttt{mArguments} = \texttt{mArgumentsRest} + [\texttt{lhs}, \texttt{rhs}] & \Rightarrow \\[0.2cm]
\texttt{mOperators}' = \texttt{mOperatorsRest} & \wedge \\
\texttt{mTokens}' = \texttt{mTokens} & \wedge \\
\texttt{mArguments}' = \texttt{mArgumentsRest} + [\texttt{lhs} \;o_1\; \texttt{rhs}]
\end{array}
$$
8. Otherwise, the operator on top of the token stack is pushed onto the operator stack.
$$\begin{array}{lc}
\texttt{mTokens} = \texttt{mTokensRest} + [o_2] & \wedge \\
\texttt{mOperators} = \texttt{mOperatorsRest} + [o_1] & \wedge \\
\neg \texttt{evalBefore}(o_1, o_2) & \Rightarrow \\[0.2cm]
\texttt{mOperators}' = \texttt{mOperators} + [o_2] & \wedge \\
\texttt{mTokens}' = \texttt{mTokensRest} & \wedge \\
\texttt{mArguments}' = \texttt{mArguments}
\end{array}
$$
In every step of the evaluation phase we
- remove one operator from the operator stack,
- remove its arguments from the argument stack,
- evaluate the operator, and
- push the result back on the argument stack.
End of explanation
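To make the two phases concrete, here is a short worked trace (not part of the original notebook) for the input 2 + 3 * 4:
- reading 2: argument stack [2]
- reading +: operator stack is empty, so push it: ['+']
- reading 3: argument stack [2, 3]
- reading *: evalBefore('+', '*') is False, so push it: operator stack ['+', '*']
- reading 4: argument stack [2, 3, 4]
- evaluation phase: '*' combines 3 and 4 giving [2, 12], then '+' combines 2 and 12 giving [14], which is returned.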
def popAndEvaluate(self):
rhs = self.mArguments.top(); self.mArguments.pop()
lhs = self.mArguments.top(); self.mArguments.pop()
op = self.mOperators.top(); self.mOperators.pop()
result = None
if op == '+':
result = lhs + rhs
if op == '-':
result = lhs - rhs
if op == '*':
result = lhs * rhs
if op == '/':
result = lhs // rhs
if op == '%':
result = lhs % rhs
if op == '**':
result = lhs ** rhs
assert result != None, f'ERROR: *** Unknown Operator *** "{op}"'
self.mArguments.push(result)
Calculator.popAndEvaluate = popAndEvaluate
del popAndEvaluate
C = Calculator('1*3+4**2-1-2')
C.evaluate()
Explanation: The method $\texttt{popAndEvaluate}(\texttt{self})$ removes the two topmost numbers $\texttt{rhs}$ and $\texttt{lhs}$ from the argument stack and
removes the topmost operator $\texttt{op}$ from the operator stack. It applies the operator $\texttt{op}$ to these numbers
by computing $\texttt{lhs} \;\texttt{op}\; \texttt{rhs}$
and then pushes this value back on the argument stack.
End of explanation |
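For reference, the test expression 1*3+4**2-1-2 evaluates to 16 here, since it is grouped as ((1*3) + (4**2)) - 1 - 2 = 3 + 16 - 1 - 2.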
13,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluation of Poincare Embeddings
This notebook demonstrates how well Poincare embeddings perform on the tasks detailed in the original paper about the embeddings.
The following two external, open-source implementations are used -
1. C++
2. Numpy
This is the list of tasks -
1. WordNet reconstruction
2. WordNet link prediction
3. Link prediction in collaboration networks (evaluation incomplete)
4. Lexical entailment on HyperLex
A more detailed explanation of the tasks and the evaluation methodology is present in the individual evaluation subsections.
1. Setup
The following section performs the following -
1. Imports required python libraries and downloads the wordnet data
2. Clones the repositories containing the C++ and Numpy implementations of the Poincare embeddings
3. Applies patches containing minor changes to the implementations.
4. Compiles the C++ sources to create a binary
Step1: Please set the variable parent_directory below to change the directory to which the repositories are cloned.
Step2: You might need to install an updated version of cmake to be able to compile the source code. Please make sure that the binary poincare_embedding has been created before proceeding by verifying the above cell does not raise an error.
Step3: 2. Training
2.1 Create the data
Step6: 2.2 Training C++ embeddings
Step8: 2.3 Training numpy embeddings (non-gensim)
Step10: 2.4 Training gensim embeddings
Step17: 3. Loading the embeddings
Step20: 4. Evaluation
Step21: 4.1 WordNet reconstruction
For this task, embeddings are learnt using the entire transitive closure of the WordNet noun hypernym hierarchy. Subsequently, for every hypernym pair (u, v), the rank of v amongst all nodes that do not have a positive edge with v is computed. The final metric mean_rank is the average of all these ranks. The MAP metric is the mean of the Average Precision of the rankings for all positive nodes for a given node u.
Note that this task tests representation capacity of the learnt embeddings, and not the generalization ability.
Step24: Results from the paper -
The figures above illustrate a few things -
1. The gensim implementation does significantly better for all model sizes and hyperparameters than both the other implementations.
2. The results from the original paper have not been achieved by our implementation. Especially for models with lower dimensions, the paper mentions significantly better mean rank and MAP for the reconstruction task.
3. Using burn-in and regularization leads to much better results with low model sizes, however the results do not improve significantly with increasing model size. This might have to do with tuning the regularization coefficient, which the paper does not mention.
4.2 WordNet link prediction
This task is similar to the reconstruction task described above, except that the list of relations is split into a training and testing set, and the mean rank reported is for the edges in the test set.
Therefore, this tests the ability of the model to predict unseen edges between nodes, i.e. generalization ability, as opposed to the representation capacity tested in the Reconstruction task
4.2.1 Preparing data
Step25: 4.2.2 Training models
Step26: 4.2.3 Evaluating models
Step27: Results from the paper -
These results follow similar trends as the reconstruction results. Repeating here for ease of reading -
1. The gensim implementation does significantly better for all model sizes and hyperparameters than both the other implementations.
2. The results from the original paper have not been achieved by our implementation. Especially for models with lower dimensions, the paper mentions significantly better mean rank and MAP for the link prediction task.
4. Using burn-in and regularization leads to better results with low model sizes, however the results do not improve significantly with increasing model size.
The main difference from the reconstruction results is that mean ranks for link prediction are slightly worse most of the time than the corresponding reconstruction results. This is to be expected, as link prediction is performed on a held-out test set.
4.3 HyperLex Lexical Entailment
The Lexical Entailment task is performed using the HyperLex dataset, a collection of 2163 noun pairs and scores that denote "To what degree is noun A a type of noun Y". For example -
girl person 9.85
These scores are out of 10.
The spearman's correlation score is computed for the predicted and actual similarity scores, with the models trained on the entire WordNet noun hierarchy. | Python Code:
%cd ../..
# Some libraries need to be installed that are not part of Gensim
! pip install click>=6.7 nltk>=3.2.5 prettytable>=0.7.2 pygtrie>=2.2
import csv
from collections import OrderedDict
from IPython.display import display, HTML
import logging
import os
import pickle
import random
import re
import click
from gensim.models.poincare import PoincareModel, PoincareRelations, \
ReconstructionEvaluation, LinkPredictionEvaluation, \
LexicalEntailmentEvaluation, PoincareKeyedVectors
from gensim.utils import check_output
import nltk
from prettytable import PrettyTable
from smart_open import smart_open
logging.basicConfig(level=logging.INFO)
nltk.download('wordnet')
Explanation: Evaluation of Poincare Embeddings
This notebook demonstrates how well Poincare embeddings perform on the tasks detailed in the original paper about the embeddings.
The following two external, open-source implementations are used -
1. C++
2. Numpy
This is the list of tasks -
1. WordNet reconstruction
2. WordNet link prediction
3. Link prediction in collaboration networks (evaluation incomplete)
4. Lexical entailment on HyperLex
A more detailed explanation of the tasks and the evaluation methodology is present in the individual evaluation subsections.
1. Setup
The following section performs the following -
1. Imports required python libraries and downloads the wordnet data
2. Clones the repositories containing the C++ and Numpy implementations of the Poincare embeddings
3. Applies patches containing minor changes to the implementations.
4. Compiles the C++ sources to create a binary
End of explanation
%cd docs/notebooks/
current_directory = os.getcwd()
# Change this variable to `False` to not remove and re-download repos for external implementations
force_setup = False
# The poincare datasets, models and source code for external models are downloaded to this directory
parent_directory = os.path.join(current_directory, 'poincare')
! mkdir -p {parent_directory}
%cd {parent_directory}
# Clone repos
np_repo_name = 'poincare-np-embedding'
if force_setup and os.path.exists(np_repo_name):
! rm -rf {np_repo_name}
clone_np_repo = not os.path.exists(np_repo_name)
if clone_np_repo:
! git clone https://github.com/nishnik/poincare_embeddings.git {np_repo_name}
cpp_repo_name = 'poincare-cpp-embedding'
if force_setup and os.path.exists(cpp_repo_name):
! rm -rf {cpp_repo_name}
clone_cpp_repo = not os.path.exists(cpp_repo_name)
if clone_cpp_repo:
! git clone https://github.com/TatsuyaShirakawa/poincare-embedding.git {cpp_repo_name}
patches_applied = False
# Apply patches
if clone_cpp_repo and not patches_applied:
%cd {cpp_repo_name}
! git apply ../poincare_burn_in_eps.patch
if clone_np_repo and not patches_applied:
%cd ../{np_repo_name}
! git apply ../poincare_numpy.patch
patches_applied = True
# Compile the code for the external c++ implementation into a binary
%cd {parent_directory}/{cpp_repo_name}
!mkdir -p work
%cd work
!cmake ..
!make
%cd {current_directory}
Explanation: Please set the variable parent_directory below to change the directory to which the repositories are cloned.
End of explanation
cpp_binary_path = os.path.join(parent_directory, cpp_repo_name, 'work', 'poincare_embedding')
assert(os.path.exists(cpp_binary_path)), 'Binary file doesnt exist at %s' % cpp_binary_path
Explanation: You might need to install an updated version of cmake to be able to compile the source code. Please make sure that the binary poincare_embedding has been created before proceeding by verifying the above cell does not raise an error.
End of explanation
# These directories are auto created in the current directory for storing poincare datasets and models
data_directory = os.path.join(parent_directory, 'data')
models_directory = os.path.join(parent_directory, 'models')
# Create directories
! mkdir -p {data_directory}
! mkdir -p {models_directory}
# Prepare the WordNet data
# Can also be downloaded directly from -
# https://github.com/jayantj/gensim/raw/wordnet_data/docs/notebooks/poincare/data/wordnet_noun_hypernyms.tsv
wordnet_file = os.path.join(data_directory, 'wordnet_noun_hypernyms.tsv')
if not os.path.exists(wordnet_file):
! python {parent_directory}/{cpp_repo_name}/scripts/create_wordnet_noun_hierarchy.py {wordnet_file}
# Prepare the HyperLex data
hyperlex_url = "http://people.ds.cam.ac.uk/iv250/paper/hyperlex/hyperlex-data.zip"
! wget {hyperlex_url} -O {data_directory}/hyperlex-data.zip
if os.path.exists(os.path.join(data_directory, 'hyperlex')):
! rm -r {data_directory}/hyperlex
! unzip {data_directory}/hyperlex-data.zip -d {data_directory}/hyperlex/
hyperlex_file = os.path.join(data_directory, 'hyperlex', 'nouns-verbs', 'hyperlex-nouns.txt')
Explanation: 2. Training
2.1 Create the data
End of explanation
def train_cpp_model(
binary_path, data_file, output_file, dim, epochs, neg,
num_threads, epsilon, burn_in, seed=0):
Train a poincare embedding using the c++ implementation
Args:
binary_path (str): Path to the compiled c++ implementation binary
data_file (str): Path to tsv file containing relation pairs
output_file (str): Path to output file containing model
dim (int): Number of dimensions of the trained model
epochs (int): Number of epochs to use
neg (int): Number of negative samples to use
num_threads (int): Number of threads to use for training the model
epsilon (float): Constant used for clipping below a norm of one
burn_in (int): Number of epochs to use for burn-in init (0 means no burn-in)
Notes:
If `output_file` already exists, skips training
if os.path.exists(output_file):
print('File %s exists, skipping' % output_file)
return
args = {
'dim': dim,
'max_epoch': epochs,
'neg_size': neg,
'num_thread': num_threads,
'epsilon': epsilon,
'burn_in': burn_in,
'learning_rate_init': 0.1,
'learning_rate_final': 0.0001,
}
cmd = [binary_path, data_file, output_file]
for option, value in args.items():
cmd.append("--%s" % option)
cmd.append(str(value))
return check_output(args=cmd)
model_sizes = [5, 10, 20, 50, 100, 200]
default_params = {
'neg': 20,
'epochs': 50,
'threads': 8,
'eps': 1e-6,
'burn_in': 0,
'batch_size': 10,
'reg': 0.0
}
non_default_params = {
'neg': [10],
'epochs': [200],
'burn_in': [10]
}
def cpp_model_name_from_params(params, prefix):
param_keys = ['burn_in', 'epochs', 'neg', 'eps', 'threads']
name = ['%s_%s' % (key, params[key]) for key in sorted(param_keys)]
return '%s_%s' % (prefix, '_'.join(name))
def train_model_with_params(params, train_file, model_sizes, prefix, implementation):
Trains models with given params for multiple model sizes using the given implementation
Args:
params (dict): parameters to train the model with
train_file (str): Path to tsv file containing relation pairs
model_sizes (list): list of dimension sizes (integer) to train the model with
prefix (str): prefix to use for the saved model filenames
implementation (str): whether to use the numpy or c++ implementation,
allowed values: 'numpy', 'c++'
Returns:
tuple (model_name, model_files)
model_files is a dict of (size, filename) pairs
Example: ('cpp_model_epochs_50', {5: 'models/cpp_model_epochs_50_dim_5'})
files = {}
if implementation == 'c++':
model_name = cpp_model_name_from_params(params, prefix)
elif implementation == 'numpy':
model_name = np_model_name_from_params(params, prefix)
elif implementation == 'gensim':
model_name = gensim_model_name_from_params(params, prefix)
else:
raise ValueError('Given implementation %s not found' % implementation)
for model_size in model_sizes:
output_file_name = '%s_dim_%d' % (model_name, model_size)
output_file = os.path.join(models_directory, output_file_name)
print('Training model %s of size %d' % (model_name, model_size))
if implementation == 'c++':
out = train_cpp_model(
cpp_binary_path, train_file, output_file, model_size,
params['epochs'], params['neg'], params['threads'],
params['eps'], params['burn_in'], seed=0)
elif implementation == 'numpy':
train_external_numpy_model(
python_script_path, train_file, output_file, model_size,
params['epochs'], params['neg'], seed=0)
elif implementation == 'gensim':
train_gensim_model(
train_file, output_file, model_size, params['epochs'],
params['neg'], params['burn_in'], params['batch_size'], params['reg'], seed=0)
else:
raise ValueError('Given implementation %s not found' % implementation)
files[model_size] = output_file
return (model_name, files)
model_files = {}
model_files['c++'] = {}
# Train c++ models with default params
model_name, files = train_model_with_params(default_params, wordnet_file, model_sizes, 'cpp_model', 'c++')
model_files['c++'][model_name] = {}
for dim, filepath in files.items():
model_files['c++'][model_name][dim] = filepath
# Train c++ models with non-default params
for param, values in non_default_params.items():
params = default_params.copy()
for value in values:
params[param] = value
model_name, files = train_model_with_params(params, wordnet_file, model_sizes, 'cpp_model', 'c++')
model_files['c++'][model_name] = {}
for dim, filepath in files.items():
model_files['c++'][model_name][dim] = filepath
Explanation: 2.2 Training C++ embeddings
End of explanation
python_script_path = os.path.join(parent_directory, np_repo_name, 'poincare.py')
def np_model_name_from_params(params, prefix):
param_keys = ['neg', 'epochs']
name = ['%s_%s' % (key, params[key]) for key in sorted(param_keys)]
return '%s_%s' % (prefix, '_'.join(name))
def train_external_numpy_model(
script_path, data_file, output_file, dim, epochs, neg, seed=0):
Train a poincare embedding using an external numpy implementation
Args:
script_path (str): Path to the Python training script
data_file (str): Path to tsv file containing relation pairs
output_file (str): Path to output file containing model
dim (int): Number of dimensions of the trained model
epochs (int): Number of epochs to use
neg (int): Number of negative samples to use
Notes:
If `output_file` already exists, skips training
if os.path.exists(output_file):
print('File %s exists, skipping' % output_file)
return
args = {
'input-file': data_file,
'output-file': output_file,
'dimensions': dim,
'epochs': epochs,
'learning-rate': 0.01,
'num-negative': neg,
}
cmd = ['python', script_path]
for option, value in args.items():
cmd.append("--%s" % option)
cmd.append(str(value))
return check_output(args=cmd)
model_files['numpy'] = {}
# Train models with default params
model_name, files = train_model_with_params(default_params, wordnet_file, model_sizes, 'np_model', 'numpy')
model_files['numpy'][model_name] = {}
for dim, filepath in files.items():
model_files['numpy'][model_name][dim] = filepath
Explanation: 2.3 Training numpy embeddings (non-gensim)
End of explanation
def gensim_model_name_from_params(params, prefix):
param_keys = ['neg', 'epochs', 'burn_in', 'batch_size', 'reg']
name = ['%s_%s' % (key, params[key]) for key in sorted(param_keys)]
return '%s_%s' % (prefix, '_'.join(name))
def train_gensim_model(
data_file, output_file, dim, epochs, neg, burn_in, batch_size, reg, seed=0):
Train a poincare embedding using gensim implementation
Args:
data_file (str): Path to tsv file containing relation pairs
output_file (str): Path to output file containing model
dim (int): Number of dimensions of the trained model
epochs (int): Number of epochs to use
neg (int): Number of negative samples to use
burn_in (int): Number of epochs to use for burn-in initialization
batch_size (int): Size of batch to use for training
reg (float): Coefficient used for l2-regularization while training
Notes:
If `output_file` already exists, skips training
if os.path.exists(output_file):
print('File %s exists, skipping' % output_file)
return
train_data = PoincareRelations(data_file)
model = PoincareModel(train_data, size=dim, negative=neg, burn_in=burn_in, regularization_coeff=reg)
model.train(epochs=epochs, batch_size=batch_size)
model.save(output_file)
non_default_params_gensim = [
{'neg': 10,},
{'burn_in': 10,},
{'batch_size': 50,},
{'neg': 10, 'reg': 1, 'burn_in': 10, 'epochs': 200},
]
model_files['gensim'] = {}
# Train models with default params
model_name, files = train_model_with_params(default_params, wordnet_file, model_sizes, 'gensim_model', 'gensim')
model_files['gensim'][model_name] = {}
for dim, filepath in files.items():
model_files['gensim'][model_name][dim] = filepath
# Train models with non-default params
for new_params in non_default_params_gensim:
params = default_params.copy()
params.update(new_params)
model_name, files = train_model_with_params(params, wordnet_file, model_sizes, 'gensim_model', 'gensim')
model_files['gensim'][model_name] = {}
for dim, filepath in files.items():
model_files['gensim'][model_name][dim] = filepath
Explanation: 2.4 Training gensim embeddings
End of explanation
def transform_cpp_embedding_to_kv(input_file, output_file, encoding='utf8'):
Given a C++ embedding tsv filepath, converts it to a KeyedVector-supported file
with smart_open(input_file, 'rb') as f:
lines = [line.decode(encoding) for line in f]
if not len(lines):
raise ValueError("file is empty")
first_line = lines[0]
parts = first_line.rstrip().split("\t")
model_size = len(parts) - 1
vocab_size = len(lines)
with smart_open(output_file, 'w') as f:
f.write('%d %d\n' % (vocab_size, model_size))
for line in lines:
f.write(line.replace('\t', ' '))
def transform_numpy_embedding_to_kv(input_file, output_file, encoding='utf8'):
Given a numpy poincare embedding pkl filepath, converts it to a KeyedVector-supported file
np_embeddings = pickle.load(open(input_file, 'rb'))
random_embedding = np_embeddings[list(np_embeddings.keys())[0]]
model_size = random_embedding.shape[0]
vocab_size = len(np_embeddings)
with smart_open(output_file, 'w') as f:
f.write('%d %d\n' % (vocab_size, model_size))
for key, vector in np_embeddings.items():
vector_string = ' '.join('%.6f' % value for value in vector)
f.write('%s %s\n' % (key, vector_string))
def load_poincare_cpp(input_filename):
Load embedding trained via C++ Poincare model.
Parameters
----------
filepath : str
Path to tsv file containing embedding.
Returns
-------
PoincareKeyedVectors instance.
keyed_vectors_filename = input_filename + '.kv'
transform_cpp_embedding_to_kv(input_filename, keyed_vectors_filename)
embedding = PoincareKeyedVectors.load_word2vec_format(keyed_vectors_filename)
os.unlink(keyed_vectors_filename)
return embedding
def load_poincare_numpy(input_filename):
Load embedding trained via Python numpy Poincare model.
Parameters
----------
filepath : str
Path to pkl file containing embedding.
Returns:
PoincareKeyedVectors instance.
keyed_vectors_filename = input_filename + '.kv'
transform_numpy_embedding_to_kv(input_filename, keyed_vectors_filename)
embedding = PoincareKeyedVectors.load_word2vec_format(keyed_vectors_filename)
os.unlink(keyed_vectors_filename)
return embedding
def load_poincare_gensim(input_filename):
Load embedding trained via Gensim PoincareModel.
Parameters
----------
filepath : str
Path to model file.
Returns:
PoincareKeyedVectors instance.
model = PoincareModel.load(input_filename)
return model.kv
def load_model(implementation, model_file):
Convenience function over functions to load models from different implementations.
Parameters
----------
implementation : str
Implementation used to create model file ('c++'/'numpy'/'gensim').
model_file : str
Path to model file.
Returns
-------
PoincareKeyedVectors instance
Notes
-----
Raises ValueError in case of invalid value for `implementation`
if implementation == 'c++':
return load_poincare_cpp(model_file)
elif implementation == 'numpy':
return load_poincare_numpy(model_file)
elif implementation == 'gensim':
return load_poincare_gensim(model_file)
else:
raise ValueError('Invalid implementation %s' % implementation)
Explanation: 3. Loading the embeddings
End of explanation
def display_results(task_name, results):
Display evaluation results of multiple embeddings on a single task in a tabular format
Args:
task_name (str): name the task being evaluated
results (dict): mapping between embeddings and corresponding results
result_table = PrettyTable()
result_table.field_names = ["Model Description", "Metric"] + [str(dim) for dim in sorted(model_sizes)]
for model_name, model_results in results.items():
metrics = [metric for metric in model_results.keys()]
dims = sorted([dim for dim in model_results[metrics[0]].keys()])
description = model_description_from_name(model_name)
row = [description, '\n'.join(metrics) + '\n']
for dim in dims:
scores = ['%.2f' % model_results[metric][dim] for metric in metrics]
row.append('\n'.join(scores))
result_table.add_row(row)
result_table.align = 'r'
result_html = result_table.get_html_string()
search = "<table>"
insert_at = result_html.index(search) + len(search)
new_row =
<tr>
<th colspan="1" style="text-align:left">%s</th>
<th colspan="1"></th>
<th colspan="%d" style="text-align:center"> Dimensions</th>
</tr> % (task_name, len(model_sizes))
result_html = result_html[:insert_at] + new_row + result_html[insert_at:]
display(HTML(result_html))
def model_description_from_name(model_name):
if model_name.startswith('gensim'):
implementation = 'Gensim'
elif model_name.startswith('cpp'):
implementation = 'C++'
elif model_name.startswith('np'):
implementation = 'Numpy'
else:
raise ValueError('Unsupported implementation for model: %s' % model_name)
description = []
for param_key in sorted(default_params.keys()):
pattern = '%s_([^_]*)_?' % param_key
match = re.search(pattern, model_name)
if match:
description.append("%s=%s" % (param_key, match.groups()[0]))
return "%s: %s" % (implementation, ", ".join(description))
Explanation: 4. Evaluation
End of explanation
reconstruction_results = OrderedDict()
metrics = ['mean_rank', 'MAP']
for implementation, models in sorted(model_files.items()):
for model_name, files in models.items():
if model_name in reconstruction_results:
continue
reconstruction_results[model_name] = OrderedDict()
for metric in metrics:
reconstruction_results[model_name][metric] = {}
for model_size, model_file in files.items():
print('Evaluating model %s of size %d' % (model_name, model_size))
embedding = load_model(implementation, model_file)
eval_instance = ReconstructionEvaluation(wordnet_file, embedding)
eval_result = eval_instance.evaluate(max_n=1000)
for metric in metrics:
reconstruction_results[model_name][metric][model_size] = eval_result[metric]
display_results('WordNet Reconstruction', reconstruction_results)
Explanation: 4.1 WordNet reconstruction
For this task, embeddings are learnt using the entire transitive closure of the WordNet noun hypernym hierarchy. Subsequently, for every hypernym pair (u, v), the rank of v amongst all nodes that do not have a positive edge with v is computed. The final metric mean_rank is the average of all these ranks. The MAP metric is the mean of the Average Precision of the rankings for all positive nodes for a given node u.
Note that this task tests representation capacity of the learnt embeddings, and not the generalization ability.
End of explanation
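As a small illustration (not part of the original notebook) of how the two metrics combine the per-pair ranks, suppose the positive relations of one node obtained ranks 1 and 3 among the negative candidates:
ranks = [1, 3]
mean_rank = sum(ranks) / len(ranks)                                          # 2.0
avg_precision = sum((i + 1) / r for i, r in enumerate(sorted(ranks))) / len(ranks)
# (1/1 + 2/3) / 2 ≈ 0.83; the reported mean_rank averages ranks over all pairs,
# while MAP averages this AP value over all nodes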
def train_test_split(data_file, test_ratio=0.1):
Creates train and test files from given data file, returns train/test file names
Args:
data_file (str): path to data file for which train/test split is to be created
test_ratio (float): fraction of lines to be used for test data
Returns
(train_file, test_file): tuple of strings with train file and test file paths
train_filename = data_file + '.train'
test_filename = data_file + '.test'
if os.path.exists(train_filename) and os.path.exists(test_filename):
print('Train and test files already exist, skipping')
return (train_filename, test_filename)
root_nodes, leaf_nodes = get_root_and_leaf_nodes(data_file)
test_line_candidates = []
line_count = 0
all_nodes = set()
with smart_open(data_file, 'rb') as f:
for i, line in enumerate(f):
node_1, node_2 = line.split()
all_nodes.update([node_1, node_2])
if (
node_1 not in leaf_nodes
and node_2 not in leaf_nodes
and node_1 not in root_nodes
and node_2 not in root_nodes
and node_1 != node_2
):
test_line_candidates.append(i)
line_count += 1
num_test_lines = int(test_ratio * line_count)
if num_test_lines > len(test_line_candidates):
raise ValueError('Not enough candidate relations for test set')
print('Choosing %d test lines from %d candidates' % (num_test_lines, len(test_line_candidates)))
test_line_indices = set(random.sample(test_line_candidates, num_test_lines))
train_line_indices = set(l for l in range(line_count) if l not in test_line_indices)
train_set_nodes = set()
with smart_open(data_file, 'rb') as f:
train_file = smart_open(train_filename, 'wb')
test_file = smart_open(test_filename, 'wb')
for i, line in enumerate(f):
if i in train_line_indices:
train_set_nodes.update(line.split())
train_file.write(line)
elif i in test_line_indices:
test_file.write(line)
else:
raise AssertionError('Line %d not present in either train or test line indices' % i)
train_file.close()
test_file.close()
assert len(train_set_nodes) == len(all_nodes), 'Not all nodes from dataset present in train set relations'
return (train_filename, test_filename)
def get_root_and_leaf_nodes(data_file):
Return keys of root and leaf nodes from a file with transitive closure relations
Args:
data_file(str): file path containing transitive closure relations
Returns:
(root_nodes, leaf_nodes) - tuple containing keys of root and leaf nodes
root_candidates = set()
leaf_candidates = set()
with smart_open(data_file, 'rb') as f:
for line in f:
nodes = line.split()
root_candidates.update(nodes)
leaf_candidates.update(nodes)
with smart_open(data_file, 'rb') as f:
for line in f:
node_1, node_2 = line.split()
if node_1 == node_2:
continue
leaf_candidates.discard(node_1)
root_candidates.discard(node_2)
return (leaf_candidates, root_candidates)
wordnet_train_file, wordnet_test_file = train_test_split(wordnet_file)
Explanation: Results from the paper -
The figures above illustrate a few things -
1. The gensim implementation does significantly better for all model sizes and hyperparameters than both the other implementations.
2. The results from the original paper have not been achieved by our implementation. Especially for models with lower dimensions, the paper mentions significantly better mean rank and MAP for the reconstruction task.
3. Using burn-in and regularization leads to much better results with low model sizes, however the results do not improve significantly with increasing model size. This might have to do with tuning the regularization coefficient, which the paper does not mention.
4.2 WordNet link prediction
This task is similar to the reconstruction task described above, except that the list of relations is split into a training and testing set, and the mean rank reported is for the edges in the test set.
Therefore, this tests the ability of the model to predict unseen edges between nodes, i.e. generalization ability, as opposed to the representation capacity tested in the Reconstruction task
4.2.1 Preparing data
End of explanation
# Training models for link prediction
lp_model_files = {}
lp_model_files['c++'] = {}
# Train c++ models with default params
model_name, files = train_model_with_params(default_params, wordnet_train_file, model_sizes, 'cpp_lp_model', 'c++')
lp_model_files['c++'][model_name] = {}
for dim, filepath in files.items():
lp_model_files['c++'][model_name][dim] = filepath
# Train c++ models with non-default params
for param, values in non_default_params.items():
params = default_params.copy()
for value in values:
params[param] = value
model_name, files = train_model_with_params(params, wordnet_train_file, model_sizes, 'cpp_lp_model', 'c++')
lp_model_files['c++'][model_name] = {}
for dim, filepath in files.items():
lp_model_files['c++'][model_name][dim] = filepath
lp_model_files['numpy'] = {}
# Train numpy models with default params
model_name, files = train_model_with_params(default_params, wordnet_train_file, model_sizes, 'np_lp_model', 'numpy')
lp_model_files['numpy'][model_name] = {}
for dim, filepath in files.items():
lp_model_files['numpy'][model_name][dim] = filepath
lp_model_files['gensim'] = {}
# Train models with default params
model_name, files = train_model_with_params(default_params, wordnet_train_file, model_sizes, 'gensim_lp_model', 'gensim')
lp_model_files['gensim'][model_name] = {}
for dim, filepath in files.items():
lp_model_files['gensim'][model_name][dim] = filepath
# Train models with non-default params
for new_params in non_default_params_gensim:
params = default_params.copy()
params.update(new_params)
model_name, files = train_model_with_params(params, wordnet_file, model_sizes, 'gensim_lp_model', 'gensim')
lp_model_files['gensim'][model_name] = {}
for dim, filepath in files.items():
lp_model_files['gensim'][model_name][dim] = filepath
Explanation: 4.2.2 Training models
End of explanation
lp_results = OrderedDict()
metrics = ['mean_rank', 'MAP']
for implementation, models in sorted(lp_model_files.items()):
for model_name, files in models.items():
lp_results[model_name] = OrderedDict()
for metric in metrics:
lp_results[model_name][metric] = {}
for model_size, model_file in files.items():
print('Evaluating model %s of size %d' % (model_name, model_size))
embedding = load_model(implementation, model_file)
eval_instance = LinkPredictionEvaluation(wordnet_train_file, wordnet_test_file, embedding)
eval_result = eval_instance.evaluate(max_n=1000)
for metric in metrics:
lp_results[model_name][metric][model_size] = eval_result[metric]
display_results('WordNet Link Prediction', lp_results)
Explanation: 4.2.3 Evaluating models
End of explanation
entailment_results = OrderedDict()
eval_instance = LexicalEntailmentEvaluation(hyperlex_file)
for implementation, models in sorted(model_files.items()):
for model_name, files in models.items():
if model_name in entailment_results:
continue
entailment_results[model_name] = OrderedDict()
entailment_results[model_name]['spearman'] = {}
for model_size, model_file in files.items():
print('Evaluating model %s of size %d' % (model_name, model_size))
embedding = load_model(implementation, model_file)
entailment_results[model_name]['spearman'][model_size] = eval_instance.evaluate_spearman(embedding)
display_results('Lexical Entailment (HyperLex)', entailment_results)
Explanation: Results from the paper -
These results follow similar trends as the reconstruction results. Repeating here for ease of reading -
1. The gensim implementation does significantly better for all model sizes and hyperparameters than both the other implementations.
2. The results from the original paper have not been achieved by our implementation. Especially for models with lower dimensions, the paper mentions significantly better mean rank and MAP for the link prediction task.
3. Using burn-in and regularization leads to better results with low model sizes, however the results do not improve significantly with increasing model size.
The main difference from the reconstruction results is that mean ranks for link prediction are slightly worse most of the time than the corresponding reconstruction results. This is to be expected, as link prediction is performed on a held-out test set.
4.3 HyperLex Lexical Entailment
The Lexical Entailment task is performed using the HyperLex dataset, a collection of 2163 noun pairs and scores that denote "To what degree is noun A a type of noun Y". For example -
girl person 9.85
These scores are out of 10.
The spearman's correlation score is computed for the predicted and actual similarity scores, with the models trained on the entire WordNet noun hierarchy.
End of explanation |
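For reference (this snippet is not part of the notebook, and scipy is used here purely for illustration), Spearman's correlation compares the rankings of the predicted and gold scores rather than their absolute values:
from scipy.stats import spearmanr
gold = [9.85, 7.50, 1.20]        # HyperLex-style scores out of 10
predicted = [0.91, 0.62, 0.05]   # hypothetical model scores
spearmanr(gold, predicted).correlation  # 1.0, because the two rankings agree perfectly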
13,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source of the materials
Step1: Graphics including GenomeDiagram
The Bio.Graphics module depends on the third party Python library ReportLab. Although focused on producing PDF files, ReportLab can also create encapsulated postscript (EPS) and (SVG) files. In addition to these vector based images, provided certain further dependencies such as the Python Imaging Library (PIL) are installed, ReportLab can also output bitmap images (including JPEG, PNG, GIF, BMP and PICT formats).
GenomeDiagram
Introduction
The Bio.Graphics.GenomeDiagram module was added to Biopython 1.50, having previously been available as a separate Python module dependent on Biopython.
As the name might suggest, GenomeDiagram was designed for drawing whole genomes, in particular prokaryotic genomes, either as linear diagrams (optionally broken up into fragments to fit better) or as circular wheel diagrams. It proved also well suited to drawing quite detailed figures for smaller genomes such as phage, plasmids or mitochrondia.
This module is easiest to use if you have your genome loaded as a SeqRecord object containing lots of SeqFeature objects - for example as loaded from a GenBank file.
Diagrams, tracks, feature-sets and features
GenomeDiagram uses a nested set of objects. At the top level, you have a diagram object representing a sequence (or sequence region) along the horizontal axis (or circle). A diagram can contain one or more tracks, shown stacked vertically (or radially on circular diagrams). These will typically all have the same length and represent the same sequence region. You might use one track to show the gene locations, another to show regulatory regions, and a third track to show the GC percentage.
The most commonly used type of track will contain features, bundled together in feature-sets. You might choose to use one feature-set for all your CDS features, and another for tRNA features. This isn’t required - they can all go in the same feature-set, but it makes it easier to update the properties of just selected features (e.g. make all the tRNA features red).
There are two main ways to build up a complete diagram. Firstly, the top down approach where you create a diagram object, and then using its methods add track(s), and use the track methods to add feature-set(s), and use their methods to add the features. Secondly, you can create the individual objects separately (in whatever order suits your code), and then combine them.
A top down example
We’re going to draw a whole genome from a SeqRecord object read in from a GenBank file. This example uses the pPCP1 plasmid from Yersinia pestis biovar Microtus (<a href="data/NC_005816.gb">NC_005816.gb</a>)
Step2: We’re using a top down approach, so after loading in our sequence we next create an empty diagram, then add an (empty) track, and to that add an (empty) feature set
Step3: Now the fun part - we take each gene SeqFeature object in our SeqRecord, and use it to generate a feature on the diagram. We’re going to color them blue, alternating between a dark blue and a light blue.
Step4: Now we come to actually making the output file. This happens in two steps, first we call the draw method, which creates all the shapes using ReportLab objects. Then we call the write method which renders these to the requested file format. Note you can output in multiple file formats
Step5: Lets have a look at the previous one
Step6: These figures are not very exciting, but we’ve only just got started.
A bottom up example
Now let’s produce exactly the same figures, but using the bottom up approach. This means we create the different objects directly (and this can be done in almost any order) and then combine them.
Step7: You can now call the draw and write methods as before to produce a linear or circular diagram, using the code at the end of the top-down example above. The figures should be identical.
Features without a SeqFeature
In the above example we used a SeqRecord’s SeqFeature objects to build our diagram. Sometimes you won’t have SeqFeature objects, but just the coordinates for a feature you want to draw. You have to create minimal SeqFeature object, but this is easy
Step8: For strand, use +1 for the forward strand, -1 for the reverse strand, and None for both. Here is a short self contained example
Step9: The top part of the image in the next subsection shows the output (in the default feature color, pale green).
Notice that we have used the name argument here to specify the caption text for these features. This is discussed in more detail next.
Feature captions
Recall we used the following (where feature was a SeqFeature object) to add a feature to the diagram
Step10: In the example above the SeqFeature annotation was used to pick a sensible caption for the features. By default the following possible entries under the SeqFeature object’s qualifiers dictionary are used
Step11: In addition to the caption text for each feature’s label, you can also choose the font, position (this defaults to the start of the sigil, you can also choose the middle or at the end) and orientation (for linear diagrams only, where this defaults to rotated by 45 degrees)
Step12: Combining each of these three fragments with the complete example in the previous section should give something like this
Step13: We’ve not shown it here, but you can also set label_color to control the label’s color.
You’ll notice the default font is quite small - this makes sense because you will usually be drawing many (small) features on a page, not just a few large ones as shown here.
Feature sigils
The examples above have all just used the default sigil for the feature, a plain box, which was all that was available in the last publicly released standalone version of GenomeDiagram. Arrow sigils were included when GenomeDiagram was added to Biopython 1.50
Step14: These are shown below. Most sigils fit into a bounding box (as given by the default BOX sigil), either above or below the axis for the forward or reverse strand, or straddling it (double the height) for strand-less features. The BIGARROW sigil is different, always straddling the axis with the direction taken from the feature's strand.
Arrow sigils
We introduced the arrow sigils in the previous section. There are two additional options to adjust the shapes of the arrows, firstly the thickness of the arrow shaft, given as a proportion of the height of the bounding box
Step15: The results are shown below
Step16: The results are shown below
Step17: All the shaft and arrow head options shown above for the ARROW sigil can be used for the BIGARROW sigil too.
A nice example
Now let’s return to the pPCP1 plasmid from Yersinia pestis biovar Microtus, and the top down approach used above, but take advantage of the sigil options we’ve now discussed. This time we’ll use arrows for the genes, and overlay them with strand-less features (as plain boxes) showing the position of some restriction digest sites.
Step18: Multiple tracks
All the examples so far have used a single track, but you can have more than one track – for example show the genes on one, and repeat regions on another. In this example we’re going to show three phage genomes side by side to scale, inspired by Figure 6 in Proux et al. (2002). We’ll need the GenBank files for the following three phage
Step19: The figure we are imitating used different colors for different gene functions. One way to do this is to edit the GenBank file to record color preferences for each feature - something Sanger’s Artemis editor does, and which GenomeDiagram should understand. Here however, we’ll just hard code three lists of colors.
Note that the annotation in the GenBank files doesn’t exactly match that shown in Proux et al., they have drawn some unannotated genes.
Step20: Now to draw them – this time we add three tracks to the diagram, and also notice they are given different start/end values to reflect their different lengths.
Step21: I did wonder why in the original manuscript there were no red or orange genes marked in the bottom phage. Another important point is here the phage are shown with different lengths - this is because they are all drawn to the same scale (they are different lengths).
The key difference from the published figure is they have color-coded links between similar proteins – which is what we will do in the next section.
Cross-Links between tracks
Biopython 1.59 added the ability to draw cross links between tracks - both simple linear diagrams as we will show here, but also linear diagrams split into fragments and circular diagrams.
Continuing the example from the previous section inspired by Figure 6 from Proux et al. 2002, we would need a list of cross links between pairs of genes, along with a score or color to use. Realistically you might extract this from a BLAST file computationally, but here I have manually typed them in.
My naming convention continues to refer to the three phage as A, B and C. Here are the links we want to show between A and B, given as a list of tuples (percentage similarity score, gene in A, gene in B).
Step23: For the first and last phage these identifiers are locus tags, for the middle phage there are no locus tags so I’ve used gene names instead. The following little helper function lets us lookup a feature using either a locus tag or gene name
Step24: We can now turn those list of identifier pairs into SeqFeature pairs, and thus find their location co-ordinates. We can now add all that code and the following snippet to the previous example (just before the gd_diagram.draw(...) line – see the finished example script <a href="data/Proux_et_al_2002_Figure_6.py">Proux_et_al_2002_Figure_6.py</a> included in the Doc/examples folder of the Biopython source code) to add cross links to the figure
Step25: There are several important pieces to this code. First the GenomeDiagram object has a cross_track_links attribute which is just a list of CrossLink objects. Each CrossLink object takes two sets of track-specific co-ordinates (here given as tuples, you can alternatively use a GenomeDiagram.Feature object instead). You can optionally supply a colour, border color, and say if this link should be drawn flipped (useful for showing inversions).
You can also see how we turn the BLAST percentage identity score into a colour, interpolating between white (0%) and a dark red (100%). In this example we don’t have any problems with overlapping cross-links. One way to tackle that is to use transparency in ReportLab, by using colors with their alpha channel set. However, this kind of shaded color scheme combined with overlap transparency would be difficult to interpret. The result
Step26: Here is a very simple example - for which we’ll use Arabidopsis thaliana.
You can skip this bit, but first I downloaded the five sequenced chromosomes from the NCBI’s FTP site (per the code above) and then parsed them with Bio.SeqIO to find out their lengths. You could use the GenBank files for this, but it is faster to use the FASTA files for the whole chromosomes
Step27: This gave the lengths of the five chromosomes, which we’ll now use in the following short demonstration of the BasicChromosome module
Step28: This example is deliberately short and sweet. The next example shows the location of features of interest.
Continuing from the previous example, let’s also show the tRNA genes. We’ll get their locations by parsing the GenBank files for the five Arabidopsis thaliana chromosomes. You’ll need to download these files from the NCBI FTP site. | Python Code:
#Lets load notebook's Image
from IPython.core.display import Image
from reportlab.lib import colors
from reportlab.lib.units import cm
from Bio.Graphics import GenomeDiagram
from Bio import SeqIO
Explanation: Source of the materials: Biopython cookbook (adapted)
End of explanation
record = SeqIO.read("data/NC_005816.gb", "genbank")
Explanation: Graphics including GenomeDiagram
The Bio.Graphics module depends on the third party Python library ReportLab. Although focused on producing PDF files, ReportLab can also create encapsulated postscript (EPS) and (SVG) files. In addition to these vector based images, provided certain further dependencies such as the Python Imaging Library (PIL) are installed, ReportLab can also output bitmap images (including JPEG, PNG, GIF, BMP and PICT formats).
GenomeDiagram
Introduction
The Bio.Graphics.GenomeDiagram module was added to Biopython 1.50, having previously been available as a separate Python module dependent on Biopython.
As the name might suggest, GenomeDiagram was designed for drawing whole genomes, in particular prokaryotic genomes, either as linear diagrams (optionally broken up into fragments to fit better) or as circular wheel diagrams. It also proved well suited to drawing quite detailed figures for smaller genomes such as phage, plasmids or mitochondria.
This module is easiest to use if you have your genome loaded as a SeqRecord object containing lots of SeqFeature objects - for example as loaded from a GenBank file.
Diagrams, tracks, feature-sets and features
GenomeDiagram uses a nested set of objects. At the top level, you have a diagram object representing a sequence (or sequence region) along the horizontal axis (or circle). A diagram can contain one or more tracks, shown stacked vertically (or radially on circular diagrams). These will typically all have the same length and represent the same sequence region. You might use one track to show the gene locations, another to show regulatory regions, and a third track to show the GC percentage.
The most commonly used type of track will contain features, bundled together in feature-sets. You might choose to use one feature-set for all your CDS features, and another for tRNA features. This isn’t required - they can all go in the same feature-set, but it makes it easier to update the properties of just selected features (e.g. make all the tRNA features red).
There are two main ways to build up a complete diagram. Firstly, the top down approach where you create a diagram object, and then using its methods add track(s), and use the track methods to add feature-set(s), and use their methods to add the features. Secondly, you can create the individual objects separately (in whatever order suits your code), and then combine them.
A top down example
We’re going to draw a whole genome from a SeqRecord object read in from a GenBank file. This example uses the pPCP1 plasmid from Yersinia pestis biovar Microtus (<a href="data/NC_005816.gb">NC_005816.gb</a>)
End of explanation
gd_diagram = GenomeDiagram.Diagram("Yersinia pestis biovar Microtus plasmid pPCP1")
gd_track_for_features = gd_diagram.new_track(1, name="Annotated Features")
gd_feature_set = gd_track_for_features.new_set()
Explanation: We’re using a top down approach, so after loading in our sequence we next create an empty diagram, then add an (empty) track, and to that add an (empty) feature set:
End of explanation
for feature in record.features:
if feature.type != "gene":
#Exclude this feature
continue
if len(gd_feature_set) % 2 == 0:
color = colors.blue
else:
color = colors.lightblue
gd_feature_set.add_feature(feature, color=color, label=True)
Explanation: Now the fun part - we take each gene SeqFeature object in our SeqRecord, and use it to generate a feature on the diagram. We’re going to color them blue, alternating between a dark blue and a light blue.
End of explanation
gd_diagram.draw(format="linear", orientation="landscape", pagesize='A4',
fragments=4, start=0, end=len(record))
gd_diagram.write("data/plasmid_linear.png", "png")
Explanation: Now we come to actually making the output file. This happens in two steps: first we call the draw method, which creates all the shapes using ReportLab objects; then we call the write method, which renders these to the requested file format. Note that you can output in multiple file formats (a short sketch follows below):
End of explanation
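As an aside (the extra file names are just illustrative), the same write call can also produce vector formats:
#The write method also accepts other formats such as PDF, EPS and SVG
gd_diagram.write("data/plasmid_linear.pdf", "PDF")
gd_diagram.write("data/plasmid_linear.eps", "EPS")
gd_diagram.write("data/plasmid_linear.svg", "SVG")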
gd_diagram.draw(format="circular", circular=True, pagesize=(20*cm,20*cm),
start=0, end=len(record), circle_core=0.7)
gd_diagram.write("data/plasmid_circular.png", "PNG")
Image("data/plasmid_circular.png")
Explanation: Let’s have a look at the previous one:
<img src="data/plasmid_linear.png">
Notice that the fragments argument, which we set to four, controls how many pieces the genome gets broken up into.
If you want to do a circular figure, then try this:
End of explanation
record = SeqIO.read("data/NC_005816.gb", "genbank")
#Create the feature set and its feature objects,
gd_feature_set = GenomeDiagram.FeatureSet()
for feature in record.features:
if feature.type != "gene":
#Exclude this feature
continue
if len(gd_feature_set) % 2 == 0:
color = colors.blue
else:
color = colors.lightblue
gd_feature_set.add_feature(feature, color=color, label=True)
#(this for loop is the same as in the previous example)
#Create a track, and a diagram
gd_track_for_features = GenomeDiagram.Track(name="Annotated Features")
gd_diagram = GenomeDiagram.Diagram("Yersinia pestis biovar Microtus plasmid pPCP1")
#Now have to glue the bits together...
gd_track_for_features.add_set(gd_feature_set)
gd_diagram.add_track(gd_track_for_features, 1)
Explanation: These figures are not very exciting, but we’ve only just got started.
A bottom up example
Now let’s produce exactly the same figures, but using the bottom up approach. This means we create the different objects directly (and this can be done in almost any order) and then combine them; the finishing draw and write calls are sketched below.
End of explanation
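For completeness, a sketch of finishing the bottom-up version with the same draw and write calls as before (the output filename is just an illustrative choice):
gd_diagram.draw(format="linear", orientation="landscape", pagesize='A4',
                fragments=4, start=0, end=len(record))
gd_diagram.write("data/plasmid_linear_bottom_up.png", "png")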
from Bio.SeqFeature import SeqFeature, FeatureLocation
my_seq_feature = SeqFeature(FeatureLocation(50,100),strand=+1)
Explanation: You can now call the draw and write methods as before to produce a linear or circular diagram, using the code at the end of the top-down example above. The figures should be identical.
Features without a SeqFeature
In the above example we used a SeqRecord’s SeqFeature objects to build our diagram. Sometimes you won’t have SeqFeature objects, but just the coordinates for a feature you want to draw. You have to create a minimal SeqFeature object, but this is easy:
End of explanation
gdd = GenomeDiagram.Diagram('Test Diagram')
gdt_features = gdd.new_track(1, greytrack=False)
gds_features = gdt_features.new_set()
#Add three features to show the strand options,
feature = SeqFeature(FeatureLocation(25, 125), strand=+1)
gds_features.add_feature(feature, name="Forward", label=True)
feature = SeqFeature(FeatureLocation(150, 250), strand=None)
gds_features.add_feature(feature, name="Strandless", label=True)
feature = SeqFeature(FeatureLocation(275, 375), strand=-1)
gds_features.add_feature(feature, name="Reverse", label=True)
gdd.draw(format='linear', pagesize=(15*cm,4*cm), fragments=1,
start=0, end=400)
gdd.write("data/GD_labels_default.png", "png")
Image("data/GD_labels_default.png")
Explanation: For strand, use +1 for the forward strand, -1 for the reverse strand, and None for both. Here is a short, self-contained example:
End of explanation
gd_feature_set.add_feature(feature, color=color, label=True)
Explanation: The top part of the image in the next subsection shows the output (in the default feature color, pale green).
Notice that we have used the name argument here to specify the caption text for these features. This is discussed in more detail next.
Feature captions
Recall we used the following (where feature was a SeqFeature object) to add a feature to the diagram:
End of explanation
gd_feature_set.add_feature(feature, color=color, label=True, name="My Gene")
Explanation: In the example above, the SeqFeature annotation was used to pick a sensible caption for the features. By default the following possible entries under the SeqFeature object’s qualifiers dictionary are used: gene, label, name, locus_tag, and product (a short qualifier-based sketch follows below). More simply, you can specify a name directly:
End of explanation
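As a sketch of the qualifier-based captions described above (the gene name here is made up for illustration), setting one of the listed qualifier keys is enough for a caption to be picked up automatically:
#If a qualifier such as "gene" is present, its value is used as the caption
feature.qualifiers["gene"] = ["exampleA"]  #hypothetical gene name
gd_feature_set.add_feature(feature, color=color, label=True)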
#Large font, parallel with the track
gd_feature_set.add_feature(feature, label=True, color="green",
label_size=25, label_angle=0)
#Very small font, perpendicular to the track (towards it)
gd_feature_set.add_feature(feature, label=True, color="purple",
label_position="end",
label_size=4, label_angle=90)
#Small font, perpendicular to the track (away from it)
gd_feature_set.add_feature(feature, label=True, color="blue",
label_position="middle",
label_size=6, label_angle=-90)
Explanation: In addition to the caption text for each feature’s label, you can also choose the font, the position (this defaults to the start of the sigil; you can also choose the middle or the end) and the orientation (for linear diagrams only, where this defaults to rotated by 45 degrees):
End of explanation
gdd.draw(format='linear', pagesize=(15*cm,4*cm), fragments=1,
start=0, end=400)
gdd.write("data/GD_labels.png", "png")
Image("data/GD_labels.png")
Explanation: Combining each of these three fragments with the complete example in the previous section should give something like this:
End of explanation
#Default uses a BOX sigil
gd_feature_set.add_feature(feature)
#You can make this explicit:
gd_feature_set.add_feature(feature, sigil="BOX")
#Or opt for an arrow:
gd_feature_set.add_feature(feature, sigil="ARROW")
#Box with corners cut off (making it an octagon)
gd_feature_set.add_feature(feature, sigil="OCTO")
#Box with jagged edges (useful for showing breaks in contigs)
gd_feature_set.add_feature(feature, sigil="JAGGY")
#Arrow which spans the axis with strand used only for direction
gd_feature_set.add_feature(feature, sigil="BIGARROW")
Explanation: We’ve not shown it here, but you can also set label_color to control the label’s color (a one-line sketch follows below).
You’ll notice the default font is quite small - this makes sense because you will usually be drawing many (small) features on a page, not just a few large ones as shown here.
Feature sigils
The examples above have all just used the default sigil for the feature, a plain box, which was all that was available in the last publicly released standalone version of GenomeDiagram. Arrow sigils were included when GenomeDiagram was added to Biopython 1.50:
End of explanation
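A one-line sketch of the label_color option mentioned above (any ReportLab colour can be used):
#Red caption text for this feature
gd_feature_set.add_feature(feature, label=True, label_color=colors.red)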
#Full height shafts, giving pointed boxes:
gd_feature_set.add_feature(feature, sigil="ARROW", color="brown",
arrowshaft_height=1.0)
#Or, thin shafts:
gd_feature_set.add_feature(feature, sigil="ARROW", color="teal",
arrowshaft_height=0.2)
#Or, very thin shafts:
gd_feature_set.add_feature(feature, sigil="ARROW", color="darkgreen",
arrowshaft_height=0.1)
Explanation: These are shown below. Most sigils fit into a bounding box (as given by the default BOX sigil), either above or below the axis for the forward or reverse strand, or straddling it (double the height) for strand-less features. The BIGARROW sigil is different, always straddling the axis with the direction taken from the feature’s strand.
Arrow sigils
We introduced the arrow sigils in the previous section. There are two additional options to adjust the shapes of the arrows, firstly the thickness of the arrow shaft, given as a proportion of the height of the bounding box:
End of explanation
#Short arrow heads:
gd_feature_set.add_feature(feature, sigil="ARROW", color="blue",
arrowhead_length=0.25)
#Or, longer arrow heads:
gd_feature_set.add_feature(feature, sigil="ARROW", color="orange",
arrowhead_length=1)
#Or, very very long arrow heads (i.e. all head, no shaft, so triangles):
gd_feature_set.add_feature(feature, sigil="ARROW", color="red",
arrowhead_length=10000)
Explanation: The results are shown below:
Secondly, the length of the arrow head - given as a proportion of the height of the bounding box (defaulting to 0.5, or 50%):
End of explanation
#A large arrow straddling the axis:
gd_feature_set.add_feature(feature, sigil="BIGARROW")
Explanation: The results are shown below:
Biopython 1.61 adds a new BIGARROW sigil which always straddles the axis, pointing left for the reverse strand or right otherwise:
End of explanation
record = SeqIO.read("data/NC_005816.gb", "genbank")
gd_diagram = GenomeDiagram.Diagram(record.id)
gd_track_for_features = gd_diagram.new_track(1, name="Annotated Features")
gd_feature_set = gd_track_for_features.new_set()
for feature in record.features:
if feature.type != "gene":
#Exclude this feature
continue
if len(gd_feature_set) % 2 == 0:
color = colors.blue
else:
color = colors.lightblue
gd_feature_set.add_feature(feature, sigil="ARROW",
color=color, label=True,
label_size = 14, label_angle=0)
#I want to include some strandless features, so for an example
#will use EcoRI recognition sites etc.
for site, name, color in [("GAATTC","EcoRI",colors.green),
("CCCGGG","SmaI",colors.orange),
("AAGCTT","HindIII",colors.red),
("GGATCC","BamHI",colors.purple)]:
index = 0
while True:
index = record.seq.find(site, start=index)
if index == -1 : break
feature = SeqFeature(FeatureLocation(index, index+len(site)))
gd_feature_set.add_feature(feature, color=color, name=name,
label=True, label_size = 10,
label_color=color)
index += len(site)
gd_diagram.draw(format="linear", pagesize='A4', fragments=4,
start=0, end=len(record))
gd_diagram.write("data/plasmid_linear_nice.png", "png")
Image("data/plasmid_linear_nice.png")
gd_diagram.draw(format="circular", circular=True, pagesize=(20*cm,20*cm),
start=0, end=len(record), circle_core = 0.5)
gd_diagram.write("data/plasmid_circular_nice.png", "png")
Image("data/plasmid_circular_nice.png")
Explanation: All the shaft and arrow head options shown above for the ARROW sigil can be used for the BIGARROW sigil too.
A nice example
Now let’s return to the pPCP1 plasmid from Yersinia pestis biovar Microtus, and the top down approach used above, but take advantage of the sigil options we’ve now discussed. This time we’ll use arrows for the genes, and overlay them with strand-less features (as plain boxes) showing the position of some restriction digest sites.
End of explanation
A_rec = SeqIO.read("data/NC_002703.gbk", "gb")
B_rec = SeqIO.read("data/AF323668.gbk", "gb")
Explanation: Multiple tracks
All the examples so far have used a single track, but you can have more than one track – for example show the genes on one, and repeat regions on another. In this example we’re going to show three phage genomes side by side to scale, inspired by Figure 6 in Proux et al. (2002). We’ll need the GenBank files for the following three phage:
NC_002703 – Lactococcus phage Tuc2009, complete genome (38347 bp)
AF323668 – Bacteriophage bIL285, complete genome (35538 bp)
NC_003212 – Listeria innocua Clip11262, complete genome, of which we are focussing only on integrated prophage 5 (similar length).
You can download these using Entrez if you like (a download sketch follows below). For the third record we worked out where the phage is integrated into the genome, sliced the record to extract it, and reverse complemented it to match the orientation of the first two phage. For brevity, the code shown here loads just the first two records:
End of explanation
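A sketch of downloading the first two GenBank files with Entrez (the email address is a placeholder you must replace; the third record would additionally need slicing and reverse complementing as described above):
from Bio import Entrez
Entrez.email = "your.name@example.com"  #placeholder - NCBI asks for a real address
for accession in ["NC_002703", "AF323668"]:
    handle = Entrez.efetch(db="nucleotide", id=accession,
                           rettype="gbwithparts", retmode="text")
    with open("data/%s.gbk" % accession, "w") as out_handle:
        out_handle.write(handle.read())
    handle.close()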
from reportlab.lib.colors import red, grey, orange, green, brown, blue, lightblue, purple
A_colors = [red]*5 + [grey]*7 + [orange]*2 + [grey]*2 + [orange] + [grey]*11 + [green]*4 \
+ [grey] + [green]*2 + [grey, green] + [brown]*5 + [blue]*4 + [lightblue]*5 \
+ [grey, lightblue] + [purple]*2 + [grey]
B_colors = [red]*6 + [grey]*8 + [orange]*2 + [grey] + [orange] + [grey]*21 + [green]*5 \
+ [grey] + [brown]*4 + [blue]*3 + [lightblue]*3 + [grey]*5 + [purple]*2
Explanation: The figure we are imitating used different colors for different gene functions. One way to do this is to edit the GenBank file to record color preferences for each feature - something Sanger’s Artemis editor does, and which GenomeDiagram should understand. Here, however, we’ll just hard code the color lists by hand (two of them, for the two phage loaded above); a small qualifier-based sketch follows below.
Note that the annotation in the GenBank files doesn’t exactly match that shown in Proux et al.; they have drawn some unannotated genes.
End of explanation
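As an alternative to hard coding, assuming the GenBank files had been edited (e.g. in Artemis) to carry colour qualifiers, the raw colour codes could be collected from the gene features instead; translating them into ReportLab colours is left as an exercise:
#Collect Artemis-style colour codes from the gene features, if present
A_color_codes = [f.qualifiers.get("colour", f.qualifiers.get("color", ["0"]))[0]
                 for f in A_rec.features if f.type == "gene"]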
name = "data/Proux Fig 6"
gd_diagram = GenomeDiagram.Diagram(name)
max_len = 0
for record, gene_colors in zip([A_rec, B_rec], [A_colors, B_colors]):
max_len = max(max_len, len(record))
gd_track_for_features = gd_diagram.new_track(1,
name=record.name,
greytrack=True,
start=0, end=len(record))
gd_feature_set = gd_track_for_features.new_set()
i = 0
for feature in record.features:
if feature.type != "gene":
#Exclude this feature
continue
gd_feature_set.add_feature(feature, sigil="ARROW",
color=gene_colors[i], label=True,
name = str(i+1),
label_position="start",
label_size = 6, label_angle=0)
i+=1
gd_diagram.draw(format="linear", pagesize='A4', fragments=1,
start=0, end=max_len)
gd_diagram.write(name + ".png", "png")
Image(name + ".png")
Explanation: Now to draw them – this time we add a track per phage to the diagram (two in this trimmed-down example), and also notice they are given different start/end values to reflect their different lengths.
End of explanation
#Tuc2009 (NC_002703) vs bIL285 (AF323668)
A_vs_B = [
(99, "Tuc2009_01", "int"),
(33, "Tuc2009_03", "orf4"),
(94, "Tuc2009_05", "orf6"),
(100,"Tuc2009_06", "orf7"),
(97, "Tuc2009_07", "orf8"),
(98, "Tuc2009_08", "orf9"),
(98, "Tuc2009_09", "orf10"),
(100,"Tuc2009_10", "orf12"),
(100,"Tuc2009_11", "orf13"),
(94, "Tuc2009_12", "orf14"),
(87, "Tuc2009_13", "orf15"),
(94, "Tuc2009_14", "orf16"),
(94, "Tuc2009_15", "orf17"),
(88, "Tuc2009_17", "rusA"),
(91, "Tuc2009_18", "orf20"),
(93, "Tuc2009_19", "orf22"),
(71, "Tuc2009_20", "orf23"),
(51, "Tuc2009_22", "orf27"),
(97, "Tuc2009_23", "orf28"),
(88, "Tuc2009_24", "orf29"),
(26, "Tuc2009_26", "orf38"),
(19, "Tuc2009_46", "orf52"),
(77, "Tuc2009_48", "orf54"),
(91, "Tuc2009_49", "orf55"),
(95, "Tuc2009_52", "orf60"),
]
Explanation: I did wonder why in the original manuscript there were no red or orange genes marked in the bottom phage. Another important point is that the phage appear with different lengths here because they are all drawn to the same scale (and they really are different lengths).
The key difference from the published figure is they have color-coded links between similar proteins – which is what we will do in the next section.
Cross-Links between tracks
Biopython 1.59 added the ability to draw cross links between tracks - both simple linear diagrams as we will show here, but also linear diagrams split into fragments and circular diagrams.
Continuing the example from the previous section inspired by Figure 6 from Proux et al. 2002, we would need a list of cross links between pairs of genes, along with a score or color to use. Realistically you might extract this from a BLAST file computationally (a parsing sketch follows below), but here I have manually typed them in.
My naming convention continues to refer to the three phage as A, B and C. Here are the links we want to show between A and B, given as a list of tuples (percentage similarity score, gene in A, gene in B).
End of explanation
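A sketch of building such a list from BLAST tabular output instead; the file name is hypothetical and the column layout assumes the standard outfmt 6 (query id, subject id, percent identity, ...):
A_vs_B_from_blast = []
with open("data/A_vs_B_blastp.tsv") as handle:  #hypothetical BLAST output file
    for line in handle:
        fields = line.rstrip("\n").split("\t")
        query_id, subject_id, percent_identity = fields[0], fields[1], float(fields[2])
        A_vs_B_from_blast.append((percent_identity, query_id, subject_id))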
def get_feature(features, id, tags=["locus_tag", "gene"]):
    """Search list of SeqFeature objects for an identifier under the given tags."""
for f in features:
for key in tags:
#tag may not be present in this feature
for x in f.qualifiers.get(key, []):
if x == id:
return f
raise KeyError(id)
Explanation: For the first and last phage these identifiers are locus tags; for the middle phage there are no locus tags, so I’ve used gene names instead. The following little helper function lets us look up a feature using either a locus tag or gene name (a quick usage check follows below):
End of explanation
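A quick usage check of the helper, assuming A_rec and B_rec have been parsed as above (identifiers taken from the A_vs_B list):
print(get_feature(A_rec.features, "Tuc2009_01").location)  #lookup by locus_tag
print(get_feature(B_rec.features, "int").location)         #lookup by gene name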
from Bio.Graphics.GenomeDiagram import CrossLink
from reportlab.lib import colors
#Note it might have been clearer to assign the track numbers explicitly...
for rec_X, tn_X, rec_Y, tn_Y, X_vs_Y in [(A_rec, 2, B_rec, 1, A_vs_B)]:
track_X = gd_diagram.tracks[tn_X]
track_Y = gd_diagram.tracks[tn_Y]
for score, id_X, id_Y in X_vs_Y:
feature_X = get_feature(rec_X.features, id_X)
feature_Y = get_feature(rec_Y.features, id_Y)
color = colors.linearlyInterpolatedColor(colors.white, colors.firebrick, 0, 100, score)
link_xy = CrossLink((track_X, feature_X.location.start, feature_X.location.end),
(track_Y, feature_Y.location.start, feature_Y.location.end),
color, colors.lightgrey)
gd_diagram.cross_track_links.append(link_xy)
gd_diagram.draw(format="linear", pagesize='A4', fragments=1,
start=0, end=max_len)
gd_diagram.write("data/cross.png", "png")
Image("data/cross.png")
Explanation: We can now turn that list of identifier pairs into SeqFeature pairs, and thus find their location co-ordinates. Adding the helper function and the following snippet to the previous example (just before the gd_diagram.draw(...) line – see the finished example script <a href="data/Proux_et_al_2002_Figure_6.py">Proux_et_al_2002_Figure_6.py</a> included in the Doc/examples folder of the Biopython source code) adds cross links to the figure:
End of explanation
from ftplib import FTP
ftp = FTP('ftp.ncbi.nlm.nih.gov')
print("Logging in")
ftp.login()
ftp.cwd('genomes/archive/old_genbank/A_thaliana/OLD/')
print("Starting download - This can be slow!")
for chro, name in [
("CHR_I", "NC_003070.fna"), ("CHR_I", "NC_003070.gbk"),
("CHR_II", "NC_003071.fna"), ("CHR_II", "NC_003071.gbk"),
("CHR_III", "NC_003072.fna"), ("CHR_III", "NC_003072.gbk"),
("CHR_IV", "NC_003073.fna"), ("CHR_IV", "NC_003073.gbk"),
("CHR_V", "NC_003074.fna"), ("CHR_V", "NC_003074.gbk")]:
print("Downloading", chro, name)
ftp.retrbinary('RETR %s/%s' % (chro, name), open('data/%s' % name, 'wb').write)
ftp.quit()
print('Done')
Explanation: There are several important pieces to this code. First, the GenomeDiagram object has a cross_track_links attribute which is just a list of CrossLink objects. Each CrossLink object takes two sets of track-specific co-ordinates (here given as tuples; you can alternatively use a GenomeDiagram.Feature object instead). You can optionally supply a colour, a border colour, and say whether the link should be drawn flipped (useful for showing inversions); a short sketch of these options follows after this explanation.
You can also see how we turn the BLAST percentage identity score into a colour, interpolating between white (0%) and a dark red (100%). In this example we don’t have any problems with overlapping cross-links. One way to tackle that is to use transparency in ReportLab, by setting the colours’ alpha channel. However, this kind of shaded colour scheme combined with overlap transparency would be difficult to interpret.
There is still a lot more that can be done within Biopython to help improve this figure. First of all, the cross links in this case are between proteins, which are drawn in a strand-specific manner. It can help to add a background region (a feature using the ‘BOX’ sigil) on the feature track to extend the cross link. Also, we could reduce the vertical height of the feature tracks to allocate more space to the links instead – one way to do that is to allocate space for empty tracks. Furthermore, in cases like this where there are no large gene overlaps, we can use the axis-straddling BIGARROW sigil, which allows us to further reduce the vertical space needed for the track. These improvements are demonstrated in the example script <a href="data/Proux_et_al_2002_Figure_6.py">Proux_et_al_2002_Figure_6.py</a>.
Beyond that, finishing touches you might want to do manually in a vector image editor include fine tuning the placement of gene labels, and adding other custom annotation such as highlighting particular regions.
Although not really necessary in this example, since none of the cross-links overlap, using a transparent colour in ReportLab is a very useful technique for superimposing multiple links. However, if you do this, a shaded colour scheme should be avoided.
Chromosomes
The Bio.Graphics.BasicChromosome module allows drawing of chromosomes. There is an example in Jupe et al. (2012) (open access) using colors to highlight different gene families.
Simple Chromosomes
Important: To continue this example, you first have to download a few chromosomes from Arabidopsis thaliana; the code to help you is here:
Very important: This is slow and clogs the network; you only need to do it once (the downloaded files persist even if you close the notebook).
End of explanation
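Before moving on to the chromosome figures, here is the promised sketch of the flip and transparency options for cross links (the coordinates are made up, and the track names reuse those from the cross-link loop above):
#A semi-transparent colour and a flipped link (e.g. to mark an inversion)
translucent_red = colors.Color(1, 0, 0, alpha=0.5)
inversion_link = CrossLink((track_X, 1000, 2000),
                           (track_Y, 1500, 2500),
                           color=translucent_red, border=colors.lightgrey,
                           flip=True)
gd_diagram.cross_track_links.append(inversion_link)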
from Bio import SeqIO
entries = [("Chr I", "NC_003070.fna"),
("Chr II", "NC_003071.fna"),
("Chr III", "NC_003072.fna"),
("Chr IV", "NC_003073.fna"),
("Chr V", "NC_003074.fna")]
for (name, filename) in entries:
record = SeqIO.read("data/" + filename, "fasta")
print(name, len(record))
Explanation: Here is a very simple example - for which we’ll use Arabidopsis thaliana.
You can skip this bit, but first I downloaded the five sequenced chromosomes from the NCBI’s FTP site (per the code above) and then parsed them with Bio.SeqIO to find out their lengths. You could use the GenBank files for this, but it is faster to use the FASTA files for the whole chromosomes:
End of explanation
from reportlab.lib.units import cm
from Bio.Graphics import BasicChromosome
entries = [("Chr I", 30432563),
("Chr II", 19705359),
("Chr III", 23470805),
("Chr IV", 18585042),
("Chr V", 26992728)]
max_len = 30432563 #Could compute this
telomere_length = 1000000 #For illustration
chr_diagram = BasicChromosome.Organism(output_format="png")
chr_diagram.page_size = (29.7*cm, 21*cm) #A4 landscape
for name, length in entries:
cur_chromosome = BasicChromosome.Chromosome(name)
#Set the scale to the MAXIMUM length plus the two telomeres in bp,
#want the same scale used on all five chromosomes so they can be
#compared to each other
cur_chromosome.scale_num = max_len + 2 * telomere_length
#Add an opening telomere
start = BasicChromosome.TelomereSegment()
start.scale = telomere_length
cur_chromosome.add(start)
#Add a body - using bp as the scale length here.
body = BasicChromosome.ChromosomeSegment()
body.scale = length
cur_chromosome.add(body)
#Add a closing telomere
end = BasicChromosome.TelomereSegment(inverted=True)
end.scale = telomere_length
cur_chromosome.add(end)
#This chromosome is done
chr_diagram.add(cur_chromosome)
chr_diagram.draw("data/simple_chrom.png", "Arabidopsis thaliana")
Image("data/simple_chrom.png")
Explanation: This gave the lengths of the five chromosomes, which we’ll now use in the following short demonstration of the BasicChromosome module:
End of explanation
entries = [("Chr I", "NC_003070.gbk"),
("Chr II", "NC_003071.gbk"),
("Chr III", "NC_003072.gbk"),
("Chr IV", "NC_003073.gbk"),
("Chr V", "NC_003074.gbk")]
max_len = 30432563 #Could compute this
telomere_length = 1000000 #For illustration
chr_diagram = BasicChromosome.Organism(output_format="png")
chr_diagram.page_size = (29.7*cm, 21*cm) #A4 landscape
for index, (name, filename) in enumerate(entries):
record = SeqIO.read("data/" + filename,"genbank")
length = len(record)
features = [f for f in record.features if f.type=="tRNA"]
#Record an Artemis style integer color in the feature's qualifiers,
#1 = Black, 2 = Red, 3 = Green, 4 = Blue, 5 = Cyan, 6 = Purple
for f in features: f.qualifiers["color"] = [index+2]
cur_chromosome = BasicChromosome.Chromosome(name)
#Set the scale to the MAXIMUM length plus the two telomeres in bp,
#want the same scale used on all five chromosomes so they can be
#compared to each other
cur_chromosome.scale_num = max_len + 2 * telomere_length
#Add an opening telomere
start = BasicChromosome.TelomereSegment()
start.scale = telomere_length
cur_chromosome.add(start)
#Add a body - again using bp as the scale length here.
body = BasicChromosome.AnnotatedChromosomeSegment(length, features)
body.scale = length
cur_chromosome.add(body)
#Add a closing telomere
end = BasicChromosome.TelomereSegment(inverted=True)
end.scale = telomere_length
cur_chromosome.add(end)
#This chromosome is done
chr_diagram.add(cur_chromosome)
chr_diagram.draw("data/tRNA_chrom.png", "Arabidopsis thaliana")
Image("data/tRNA_chrom.png")
Explanation: This example is deliberately short and sweet. The next example shows the location of features of interest.
Continuing from the previous example, let’s also show the tRNA genes. We’ll get their locations by parsing the GenBank files for the five Arabidopsis thaliana chromosomes. You’ll need to download these files from the NCBI FTP site.
End of explanation |